969 results for Computational models


Relevance:

60.00%

Publisher:

Abstract:

The complex relationship between structural and functional connectivity, as measured by noninvasive imaging of the human brain, poses many unresolved challenges and open questions. Here, we apply analytic measures of network communication to the structural connectivity of the human brain and explore the capacity of these measures to predict resting-state functional connectivity across three independently acquired datasets. We focus on the layout of shortest paths across the network and on two communication measures, search information and path transitivity, which account for how these paths are embedded in the rest of the network. Search information is an existing measure of the information needed to access or trace shortest paths; we introduce path transitivity to measure the density of local detours along the shortest path. We find that both search information and path transitivity predict the strength of functional connectivity among both connected and unconnected node pairs. They do so at levels that match or significantly exceed those of path length measures, Euclidean distance, and computational models of neural dynamics. This capacity suggests that dynamic couplings due to interactions among neural elements in brain networks are substantially influenced by the broader network context adjacent to the shortest communication pathways.
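To make the search information measure concrete: in the common formulation (following Rosvall et al.), the probability that an unbiased random walker traces the shortest path from s to t is the product of 1/k at the source and 1/(k−1) at each intermediate node (the walker is assumed not to step back along the edge it arrived on), and search information is the negative base-2 logarithm of that probability. A minimal Python sketch with networkx, assuming a binary undirected graph and a single shortest path:

```python
import math
import networkx as nx

def search_information(G, s, t):
    """Bits a random walker needs to trace the shortest s-t path.

    Assumes the common formulation: P(path) = (1/k_s) * prod 1/(k_i - 1)
    over intermediate nodes i, with no backtracking allowed.
    """
    path = nx.shortest_path(G, s, t)           # one shortest path
    prob = 1.0 / G.degree(path[0])             # first hop from the source
    for node in path[1:-1]:                    # intermediate nodes only
        prob *= 1.0 / (G.degree(node) - 1)     # the incoming edge is excluded
    return -math.log2(prob)

G = nx.karate_club_graph()                     # small connected test graph
print(search_information(G, 0, 33))
```

Degenerate shortest paths (several of equal length) would be summed over in a fuller treatment; the sketch keeps only the one networkx returns.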

Relevance:

60.00%

Publisher:

Abstract:

The capacity to interact socially and share information underlies the success of many animal species, humans included. Researchers in many fields have emphasized the evolutionary significance of how patterns of connections between individuals (social networks) and learning abilities affect the information obtained by animal societies. To date, studies have focused on the dynamics either of social networks or of the spread of information. The present work aims to study them together. We make use of mathematical and computational models to study the dynamics of networks where social learning and information sharing affect the structure of the population the individuals belong to. The number and strength of the relationships between individuals, in turn, impact the accessibility and diffusion of the shared information. Moreover, we investigate how different strategies in the evaluation and choice of interacting partners impact the processes of knowledge acquisition and social structure rearrangement. First, we look at how different evaluations of social interactions affect the availability of information and the network topology. We compare a first case, where individuals evaluate social exchanges by the amount of information that can be shared by the partner, with a second case, where they evaluate interactions by considering their partners' social status. We show that, even if both strategies take into account the knowledge endowments of the partners, they have very different effects on the system. In particular, we find that the first case generally enables individuals to accumulate higher amounts of information, thanks to the more efficient patterns of social connections they are able to build. Then, we study the effects that homophily, the tendency to interact with similar partners, has on knowledge accumulation and social structure. We compare the case where individuals who know the same information are more likely to learn socially from each other with the opposite case, where individuals who know different information are more likely to learn socially from each other. We find that it is not trivial to claim which strategy is better than the other. Depending on the possibility of forgetting information, the way new social partners can be chosen, and the population size, we delineate the conditions under which each strategy allows more information to be accumulated, or allows it to be accumulated faster. For these conditions, we also discuss the topological characteristics of the resulting social structure, relating them to the outcome of the information dynamics. In conclusion, this work paves the way for modeling the joint dynamics of the spread of information among individuals and their social interactions. It also provides a formal framework to jointly study the effects of different partner-choice strategies on social structure, and how they favor the accumulation of knowledge in the population.
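A minimal agent-based sketch can illustrate the kind of coupled dynamics studied here: information sharing that feeds back on tie strengths, which in turn bias future interactions. All parameters, update rules, and magnitudes below are illustrative assumptions, not the thesis's actual model:

```python
import random

random.seed(0)
N_AGENTS, N_ITEMS, STEPS = 30, 50, 2000

# Each agent starts knowing one random item; all ties start at equal strength.
knowledge = [{random.randrange(N_ITEMS)} for _ in range(N_AGENTS)]
weight = [[1.0] * N_AGENTS for _ in range(N_AGENTS)]

def pick_partner(i):
    # Partner choice is biased by current tie strength.
    others = [j for j in range(N_AGENTS) if j != i]
    return random.choices(others, weights=[weight[i][j] for j in others])[0]

for _ in range(STEPS):
    i = random.randrange(N_AGENTS)
    j = pick_partner(i)
    new_items = knowledge[j] - knowledge[i]
    if new_items:                                    # successful social learning
        knowledge[i].add(random.choice(sorted(new_items)))
        weight[i][j] += 0.5                          # reinforce the productive tie
    else:
        weight[i][j] = max(0.1, weight[i][j] - 0.5)  # unproductive ties decay

print("mean repertoire size:", sum(map(len, knowledge)) / N_AGENTS)
```

The information-based versus status-based partner-evaluation strategies compared in the thesis would correspond to different choices for the reinforcement step.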

Relevance:

60.00%

Publisher:

Abstract:

Cooperation is ubiquitous in nature: genes cooperate in genomes, cells in multicellular organisms, and individuals in societies. In humans, division of labor and trade are key elements of most known societies, where social life is regulated by moral systems specifying rights and duties, often enforced by third-party punishment. Over the last decades, several primary mechanisms, such as kin selection and direct and indirect reciprocity, have been advanced to explain the evolution of cooperation from a naturalistic approach. In this thesis, I focus on the study of three secondary mechanisms which, although insufficient on their own to allow for the evolution of cooperation, have been hypothesized to further promote it when they are linked to proper primary mechanisms: conformity (the tendency to imitate common behaviors), upstream reciprocity (the tendency to help somebody once help has been received from somebody else), and social diversity (heterogeneous social contexts). I make use of mathematical and computational models in the formal framework of evolutionary game theory in order to investigate the theoretical conditions under which conformity, upstream reciprocity and social diversity are able to raise the levels of cooperation attained in evolving populations.
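As a toy illustration of how a secondary mechanism such as conformity can be layered onto payoff-biased imitation in an evolutionary game, consider the following sketch; the game, parameter values, and update rule are assumptions for illustration, not the thesis's models:

```python
import random

random.seed(1)
N, ROUNDS, CONFORMITY = 100, 3000, 0.3
B, C = 3.0, 1.0                          # benefit and cost of cooperation

strategies = [random.random() < 0.5 for _ in range(N)]   # True = cooperate

def payoff(me, other):
    # One-shot donation game: a cooperator pays C to give B to the partner.
    return (B if other else 0.0) - (C if me else 0.0)

for _ in range(ROUNDS):
    i, j = random.sample(range(N), 2)
    if random.random() < CONFORMITY:
        # Conformist update: copy the majority strategy in the population.
        strategies[i] = sum(strategies) * 2 > N
    else:
        # Payoff-biased update: imitate j if j did better against a random peer.
        k, m = random.sample(range(N), 2)
        if payoff(strategies[j], strategies[k]) > payoff(strategies[i], strategies[m]):
            strategies[i] = strategies[j]

print("final cooperator fraction:", sum(strategies) / N)
```

Sweeping CONFORMITY against a primary mechanism (e.g., playing the game only with network neighbours) is the kind of numerical experiment such models enable.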

Relevance:

60.00%

Publisher:

Abstract:

The study of human motion, which relies on mathematical and computational models in general, and on multibody dynamic biomechanical models in particular, has become the subject of much recent research. A human body model can be applied to different physical exercises, and many important results, such as muscle forces that are difficult to measure in practical experiments, can be obtained easily. In this work, a human skeletal lower-limb model consisting of three bodies is built using the flexible multibody dynamics simulation approach. The floating frame of reference formulation is used to account for the flexibility of the bones in the lower-limb model. The main reason for considering flexibility in the human bones is to measure the strains in the bone resulting from different physical exercises. It has been observed that bone under strain becomes stronger in order to cope with the exercise; bone strength, in turn, is considered an important factor in reducing bone fractures. The simulation approach and model developed in this work are used to measure the bone strains resulting from an exercise in which the sole of the foot is raised. The simulation results are compared to results available in the literature, and the comparison shows good agreement. This study sheds light on the importance of using the flexible multibody dynamics simulation approach to build human biomechanical models, which can be used to develop exercises that achieve optimal bone strength.
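For reference, the floating frame of reference formulation expresses the global position of an arbitrary point on a flexible body as rigid-body motion plus a small elastic deformation (textbook notation, e.g., Shabana; not quoted from this work):

```latex
\mathbf{r} = \mathbf{R} + \mathbf{A}(\boldsymbol{\theta})\,\bigl(\bar{\mathbf{u}}_0 + \mathbf{S}\,\mathbf{q}_f\bigr)
```

Here R and θ are the translation and orientation of the body reference frame, A the rotation matrix, ū₀ the undeformed local position of the point, S the shape-function matrix, and q_f the elastic coordinates from which strains such as the bone strains above are recovered.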

Relevance:

60.00%

Publisher:

Abstract:

The marine environment is certainly one of the most complex systems to study, not only because of the challenges posed by the nature of the waters, but especially due to the interactions of physical, chemical and biological processes that control the cycles of the elements. Together with analytical chemists, oceanographers have been making a great effort to advance knowledge of the distribution patterns of trace elements and the processes that determine their biogeochemical cycles and influences on the climate of the planet. The international academic community is now in prime position to perform the first study on a global scale for observation of trace elements and their isotopes in the marine environment (GEOTRACES) and to evaluate the effects of major global changes associated with the influences of megacities distributed around the globe. This action can only be performed thanks to the development of highly sensitive detection methods and the use of clean sampling and handling techniques, together with a joint international program working toward the clear objective of expanding the frontiers of the biogeochemistry of the oceans and related topics, including climate change issues and ocean acidification associated with alterations in the carbon cycle. It is expected that the oceanographic data produced in the coming decade will allow a better understanding of biogeochemical cycles, and especially the assessment of changes in trace elements and contaminants in the oceans due to anthropogenic influences, as well as their effects on ecosystems and climate. Computational models are to be constructed to simulate the conditions and processes of the modern oceans and to allow predictions. The environmental changes arising from human activity since the 18th century (the period also called the Anthropocene) have made the Earth System even more complex. Anthropogenic activities have altered both terrestrial and marine ecosystems, and the legacy of these impacts on the oceans includes: a) pollution of the marine environment by solid waste, including plastics; b) pollution by chemical and medical substances (including those for veterinary use) such as hormones, antibiotics, and legal and illegal drugs, leading to possible endocrine disruption of marine organisms; and c) ocean acidification, the collateral effect of anthropogenic emissions of CO2 into the atmosphere, irreversible on the human lifetime scale. Unfortunately, the anthropogenic alteration of the hydrosphere due to inputs of plastics, metals, hydrocarbons, contaminants of emerging concern, and even formerly "exotic" trace elements such as rare earth elements, is likely to accelerate in the near future. These emerging contaminants will likely soon present difficulties for studies in pristine environments. All this knowledge brings with it a great responsibility: helping to envisage viable adaptation and mitigation solutions to the problems identified. The greatest challenge currently faced by Brazil is to create a framework project to develop education, science and technology applied to oceanography and related areas. This framework would strengthen the present working groups and enhance capacity building, allowing broader Brazilian participation in joint international actions and scientific programs. Recently, the establishment of the National Institutes of Science and Technology (INCTs) for marine science and the creation of the National Institute of Oceanographic and Hydrological Research represent an exemplary start.
However, the participation of the Brazilian academic community in the latest advances at the frontier of chemical oceanography is extremely limited, largely due to: i. the absence of physical infrastructure for the preparation and processing of field samples at the ultra-trace level; ii. limited access to oceanographic cruises, due to the small number of Brazilian vessels and/or the absence of "clean" laboratories on board; iii. restricted international cooperation; iv. the limited analytical capacity of Brazilian institutions for the analysis of trace elements in seawater; v. the high cost of ultrapure reagents associated with processing a large number of samples; and vi. the lack of qualified technical staff. Advances in knowledge and analytical capabilities, together with the increasing availability of analytical resources, offer favorable conditions for chemical oceanography to grow. The Brazilian academic community is maturing and willing to play a role in strengthening marine science research programs by connecting them with educational and technological initiatives in order to preserve the oceans and to promote the development of society.

Relevance:

60.00%

Publisher:

Abstract:

The last decade has shown that the global paper industry needs new processes and products in order to reassert its position. As the paper markets in Western Europe and North America have stabilized, competition has tightened. Along with the development of more cost-effective processes and products, new process design methods are also required to break the old molds and create new ideas. This thesis discusses the development of a process design methodology based on simulation and optimization methods. A bi-level optimization problem and a solution procedure for it are formulated and illustrated. Computational models and simulation are used to describe the phenomena inside a real process, and mathematical optimization is exploited to find the best process structures and control principles for the process. Dynamic process models are used inside the bi-level optimization problem, which is assumed to be dynamic and multiobjective due to the nature of papermaking processes. The numerical experiments show that the bi-level optimization approach is useful for different kinds of problems related to process design and optimization. Here, the design methodology is applied to a constrained process area of a papermaking line. However, the same methodology is applicable to all types of industrial processes, e.g., the design of biorefineries, because the methodology is fully generalized and can be easily modified.
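The structure of such a bi-level problem — an outer level choosing among discrete process structures, an inner level optimizing continuous controls against a simulated process — can be sketched as follows; the mock simulator, objective weights, and candidate structures are placeholder assumptions:

```python
from scipy.optimize import minimize

# Illustrative stand-in for a dynamic process simulator: given a candidate
# structure (here reduced to two gain parameters) and control inputs,
# return quality and energy terms. All names and numbers are assumptions.
def simulate(structure, controls):
    quality = sum(g * u for g, u in zip(structure, controls))
    energy = sum(u ** 2 for u in controls)
    return quality, energy

def inner_objective(controls, structure):
    # Inner level: tune controls for a fixed structure (objectives scalarized).
    quality, energy = simulate(structure, controls)
    return -quality + 0.1 * energy

# Outer level: enumerate discrete structural alternatives and keep the one
# whose inner-level optimum is best.
structures = [(1.0, 0.5), (0.8, 0.9), (1.2, 0.2)]
results = [
    (minimize(inner_objective, x0=[0.5, 0.5], args=(s,), bounds=[(0, 1), (0, 1)]), s)
    for s in structures
]
best_result, best_structure = min(results, key=lambda pair: pair[0].fun)
print("best structure:", best_structure, "optimal controls:", best_result.x)
```

In the thesis's setting the inner problem is dynamic and multiobjective, so the scalarized objective above would be replaced by a Pareto-based treatment over trajectories of the dynamic process model.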

Relevance:

60.00%

Publisher:

Abstract:

This thesis presents a one-dimensional, semi-empirical dynamic model for the simulation and analysis of a calcium looping process for post-combustion CO2 capture. Reducing greenhouse gas emissions from fossil fuel power production requires rapid action, including the development of efficient carbon capture and sequestration technologies. The development of new carbon capture technologies can be expedited by using modelling tools: techno-economic evaluation of new capture processes can be done quickly and cost-effectively with computational models before building expensive pilot plants. Post-combustion calcium looping is a developing carbon capture process which utilizes fluidized bed technology with lime as a sorbent. The main objective of this work was to analyse the technological feasibility of the calcium looping process at different scales with a computational model. A one-dimensional dynamic model was applied to the calcium looping process, simulating the behaviour of the interconnected circulating fluidized bed reactors. The model couples fundamental mass and energy balance solvers with semi-empirical models describing solid behaviour in a circulating fluidized bed and the chemical reactions occurring in the calcium loop. In addition, fluidized bed combustion, heat transfer and core-wall layer effects were modelled. The calcium looping model framework was successfully applied to a 30 kWth laboratory scale unit and a 1.7 MWth pilot scale unit, and was used to design a conceptual 250 MWth industrial scale unit. Valuable information was gathered from the behaviour of the small scale laboratory device. In addition, the interconnected behaviour of the pilot plant reactors and the effect of solid fluidization on the thermal and carbon dioxide balances of the system were analysed. The scale-up study provided practical information on the thermal design of an industrial sized unit, the selection of particle size, and operability in different load scenarios.
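The chemistry at the heart of the calcium loop is the reversible carbonation of lime, run forward in the carbonator (typically around 650 °C) and in reverse in the calciner (around 900 °C):

```latex
\mathrm{CaO(s)} + \mathrm{CO_2(g)} \;\rightleftharpoons\; \mathrm{CaCO_3(s)},
\qquad \Delta H^{\circ}_{298} \approx -178\ \mathrm{kJ\,mol^{-1}}
```

The strongly exothermic carbonation and endothermic calcination are what make the heat transfer and energy balance modelling described above central to the design of the interconnected reactors.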

Relevance:

60.00%

Publisher:

Abstract:

Living organisms manage their resources in an evolutionarily well-preserved manner to grow and reproduce. Plants are no exception: beginning from the seed stage, they have to perceive environmental conditions to avoid germinating at the wrong time or in unsuitable soil. Under favourable conditions, plants invest photosynthetic end products in cell and organ growth to provide the best possible conditions for the generation of offspring. Under natural conditions, however, plants are exposed to a multitude of environmental stress factors, including high and insufficient light, drought and flooding, various bacteria and viruses, herbivores, and other plants that compete for nutrients and light. To survive environmental challenges, plants have evolved signalling mechanisms that recognise environmental changes and perform fine-tuned actions that maintain cellular homeostasis. Controlled phosphorylation and dephosphorylation of proteins plays an important role in maintaining a balanced flow of information within cells. In this study, I examined the role of protein phosphatase 2A (PP2A) in plant growth and acclimation under optimal and stressful conditions. To this aim, I studied gene expression profiles, proteomes and protein interactions, and their impacts on plant health and survival, taking advantage of the model plant Arabidopsis thaliana and the mutant approach. Special emphasis was placed on two highly similar PP2A-B regulatory subunits, B’γ and B’ζ. The promoters of B’γ and B’ζ were found to be similarly active in the developing tissues of the plant. In mature leaves, however, the promoter of B’γ was active in patches in the leaf periphery, while the activity of the B’ζ promoter was evident in leaf edges. The partially overlapping expression patterns, together with computational models of B’γ and B’ζ within trimeric PP2A holoenzymes, suggested that B’γ and B’ζ may competitively bind into similar PP2A trimers and thus influence each other’s actions. Arabidopsis thaliana pp2a-b’γ mutants and pp2a-b’γζ double mutants showed dwarfish phenotypes, indicating that B’γ and B’ζ are needed for appropriate growth regulation under favourable conditions. However, while pp2a-b’γ displayed constitutive immune responses and premature yellowing of leaves, the pp2a-b’γζ double mutant suppressed this yellowing. More detailed analysis of defence responses revealed that B’γ and B’ζ mediate counteracting effects on salicylic acid dependent defence signalling. Associated with this, B’γ and B’ζ were both found to interact in vivo with CALCIUM DEPENDENT PROTEIN KINASE 1 (CPK1), a crucial element of the salicylic acid signalling pathway against pathogens in plants. In addition, B’γ was shown to modulate cellular reactive oxygen species (ROS) metabolism by controlling the abundance of ALTERNATIVE OXIDASE 1A and 1D in mitochondria. The PP2A B’γ and B’ζ subunits turned out to play crucial roles in the optimization of plant choices during development. Taken together, PP2A allows fluent responses to environmental changes, maintains plant homeostasis, and grants survivability with a minimised cost of redirecting resources from growth to defence.

Relevance:

60.00%

Publisher:

Abstract:

Hardware/software systems are becoming indispensable in every aspect of daily life. The growing presence of these systems in various products and services motivates the search for methods to develop them efficiently. However, the efficient design of these systems is limited by several factors, among them: the growing complexity of applications, increasing integration density, the heterogeneous nature of products and services, and shrinking time to market. Transaction-level modeling (TLM) is considered a promising paradigm for managing design complexity, providing means to explore and validate design alternatives at high levels of abstraction. This research proposes a methodology for expressing time in TLM based on an analysis of timing constraints. We propose using a combination of two development paradigms to accelerate design: TLM on the one hand, and a methodology for expressing time between different transactions on the other. This synergy lets us combine, in a single environment, high-performance simulation methods and formal analytical methods. We propose a new timing-verification algorithm based on a linearization procedure for min/max constraints, together with an optimization technique that improves the algorithm's efficiency. We complete the mathematical description of all the constraint types presented in the literature. We developed methods for the exploration and refinement of the communication system, which allowed us to use the timing-verification algorithms at different TLM levels. Since several definitions of TLM exist, within this research we defined a specification and simulation methodology for hardware/software systems based on the TLM paradigm, in which several modeling concepts can be considered separately. Based on the use of modern software engineering technologies such as XML, XSLT, XSD, object-oriented programming, and several others provided by the .Net environment, the proposed methodology presents an approach that makes intermediate models reusable in order to cope with the time-to-market constraint. It provides a general approach to system modeling that separates design aspects such as the models of computation used to describe the system at multiple levels of abstraction. As a result, the functionality of the system can be clearly identified in the system model without the details tied to development platforms, which improves the "portability" of the application model.
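As a generic illustration of min/max timing analysis (not the linearization algorithm developed in the thesis), event occurrence times defined by max- or min-type constraints over predecessor events with bounded delays can be propagated as intervals; all event names and delay values below are invented for the example:

```python
# Toy min/max timing-constraint propagator: each event's occurrence time is
# a max (or min) over predecessor times plus a bounded delay.
# rules: event -> (op, [(predecessor, (delay_lo, delay_hi)), ...])
rules = {
    "req":   ("max", []),                                   # source event, t = 0
    "grant": ("max", [("req", (2, 4))]),
    "data":  ("max", [("req", (5, 7)), ("grant", (1, 2))]),
    "done":  ("min", [("data", (3, 3)), ("grant", (10, 12))]),
}

def propagate(rules):
    bounds = {event: (0.0, 0.0) for event in rules}
    for _ in range(len(rules)):                  # enough passes for a DAG
        for event, (op, preds) in rules.items():
            if not preds:
                continue
            agg = max if op == "max" else min
            lo = agg(bounds[p][0] + d_lo for p, (d_lo, _) in preds)
            hi = agg(bounds[p][1] + d_hi for p, (_, d_hi) in preds)
            bounds[event] = (lo, hi)             # earliest/latest occurrence
    return bounds

for event, (lo, hi) in propagate(rules).items():
    print(f"{event}: [{lo}, {hi}]")
```

Checking such computed intervals against required separations is the basic verification step; the thesis's contribution concerns linearizing the min/max constraints so the check scales.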

Relevance:

60.00%

Publisher:

Abstract:

This thesis summarizes the results of studies on a syntax-based approach to translation between Malayalam, one of the Dravidian languages, and English, and on the development of the major modules in building a prototype machine translation system from Malayalam to English. The development of the system is a pioneering effort for the Malayalam language, unattempted by previous researchers, and the computational models chosen for the system are the first of their kind for Malayalam. An in-depth study has been carried out of the design of the computational models and data structures needed for the different modules required for the prototype system: a morphological analyzer, a parser, a syntactic structure transfer module and a target language sentence generator. The lists of part-of-speech tags and chunk tags, and the hierarchical dependencies among chunks required for the translation process, have also been generated. In the development process, the major goals are (a) accuracy of translation, (b) speed and (c) space. Accuracy-wise, smart tools for handling transfer grammar and translation standards, including equivalent words, expressions, phrases and styles in the target language, are to be developed; the grammar should be optimized with a view to obtaining a single correct parse and hence a single translated output. Speed-wise, innovative use of corpus analysis, an efficient parsing algorithm, the design of efficient data structures and run-time frequency-based rearrangement of the grammar, which substantially reduces parsing and generation time, are required. The space requirement also has to be minimised.
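The module chain described — morphological analysis, parsing, syntactic structure transfer, target generation — can be sketched as a pipeline. Everything below (the toy lexicon, tags, and the SOV-to-SVO reordering rule) is an illustrative assumption, not the thesis's implementation:

```python
# Toy transfer-based MT pipeline: analyze -> parse -> transfer -> generate.
# Malayalam clauses are head-final (SOV) while English is SVO, so the
# transfer step reorders constituents.

LEXICON = {"kutti": "child", "pustakam": "book", "vaayikkunnu": "reads"}

def analyze(tokens):
    # Morphological analysis stub: attach a part-of-speech tag to each token
    # (a real analyzer also strips case and agreement suffixes).
    tags = {"vaayikkunnu": "V"}
    return [(tok, tags.get(tok, "N")) for tok in tokens]

def parse(tagged):
    # Assume a simple SOV clause: subject noun, object noun, verb.
    (subj, _), (obj, _), (verb, _) = tagged
    return {"subj": subj, "obj": obj, "verb": verb}

def transfer(tree):
    # Syntactic structure transfer (SOV -> SVO) plus lexical substitution.
    return [LEXICON[tree["subj"]], LEXICON[tree["verb"]], LEXICON[tree["obj"]]]

def generate(words):
    # Target-sentence generation stub: determiners and punctuation.
    subj, verb, obj = words
    return f"The {subj} {verb} the {obj}."

print(generate(transfer(parse(analyze(["kutti", "pustakam", "vaayikkunnu"])))))
# -> "The child reads the book."
```

The single-parse, single-output goal stated above corresponds to keeping each of these stages deterministic.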

Relevance:

60.00%

Publisher:

Abstract:

Computational models are arising in which programs are constructed by specifying large networks of very simple computational devices. Although such models can potentially make use of a massive amount of concurrency, their usefulness as a programming model for the design of complex systems will ultimately be decided by the ease with which such networks can be programmed (constructed). This thesis outlines a language for specifying computational networks. The language (AFL-1) consists of a set of primitives and a mechanism to group these elements into higher level structures. An implementation of this language runs on the Thinking Machines Corporation Connection Machine. Two significant examples were programmed in the language: an expert system (CIS) and a planning system (AFPLAN). These systems are explained and analyzed in terms of how they compare with similar systems written in conventional languages.
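The abstract does not show AFL-1 itself, but the general idea — programs as networks of very simple devices, plus a mechanism for grouping them into higher-level structures — can be sketched as follows (a hypothetical Python rendering, not AFL-1 syntax):

```python
# Sketch of programming-by-network: simple devices connected by wires,
# stepped synchronously, plus a grouping mechanism. Purely illustrative.

class Device:
    def __init__(self, fn):
        self.fn = fn          # computes one value from the input wires
        self.inputs = []      # upstream devices
        self.value = 0

def group(devices):
    # Grouping mechanism: a compound unit that updates its members
    # synchronously, in the style of a data-parallel machine.
    def step_all():
        new = [d.fn([i.value for i in d.inputs]) for d in devices]
        for d, v in zip(devices, new):
            d.value = v
    return step_all

# A tiny network: two constant sources feed an adder feeding a threshold unit.
a, b = Device(lambda _: 2), Device(lambda _: 3)
adder = Device(sum)
thresh = Device(lambda xs: int(sum(xs) > 4))
adder.inputs = [a, b]
thresh.inputs = [adder]

net = group([a, b, adder, thresh])
for _ in range(3):        # run until values propagate through the network
    net()
print(thresh.value)       # -> 1, since 2 + 3 > 4
```

The programming burden the thesis addresses is exactly what this sketch glosses over: constructing and managing such networks at the scale of an expert system or planner.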

Relevance:

60.00%

Publisher:

Abstract:

During fatigue tests of cortical bone specimens, non-zero strains occur at the unload portion of the cycle (zero stress) and progressively accumulate as the test progresses. This non-zero strain is hypothesised to be mostly, if not entirely, describable as creep. This work examines the rate of accumulation of this strain and quantifies its stress dependency. A published relationship determined from creep tests of cortical bone (Journal of Biomechanics 21 (1988) 623) is combined with knowledge of the stress history during fatigue testing to derive an expression for the amount of creep strain in fatigue tests. Fatigue tests on 31 bone samples from four individuals showed strong correlations between creep strain rate and both stress and “normalised stress” (σ/E) during tensile fatigue testing (0–T). Combined results were good (r² = 0.78) and differences between the various individuals, in particular, vanished when effects were examined against normalised stress values. Constants of the regression showed equivalence to constants derived in creep tests. The universality of the results, with respect to four different individuals of both sexes, shows great promise for use in computational models of fatigue in bone structures.
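The published creep relationship referred to here takes the form of a power law in normalised stress, so combining it with the fatigue stress history amounts to integrating that law over the loading cycles (A and B stand for the fitted constants, not values quoted from this abstract):

```latex
\dot{\varepsilon}_{\mathrm{cr}} = A \left(\frac{\sigma}{E}\right)^{B},
\qquad
\varepsilon_{\mathrm{cr}}(t) = \int_0^{t} A \left(\frac{\sigma(t')}{E}\right)^{B} \mathrm{d}t'
```

The reported equivalence of the regression constants to the creep-test constants then says the fatigue data are consistent with this integrated form.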

Relevance:

60.00%

Publisher:

Abstract:

One of the most pervasive concepts underlying computational models of information processing in the brain is the linear input integration of rate-coded univariate information by neurons. After a suitable learning process, this results in neuronal structures that statically represent knowledge as a vector of real-valued synaptic weights. Although this general framework has contributed to the many successes of connectionism, in this paper we argue that for all but the most basic of cognitive processes, a more complex, multivariate dynamic neural coding mechanism is required: knowledge should not be spatially bound to a particular neuron or group of neurons. We conclude the paper with a discussion of a simple experiment that illustrates dynamic knowledge representation in a spiking neuron connectionist system.
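To make the contrast concrete: a static rate-coded unit reduces to a weighted sum passed through a nonlinearity, while even the simplest spiking unit carries internal state that evolves in time, so its output depends on input history rather than on a fixed weight vector alone. A minimal sketch, with all parameter values chosen purely for illustration:

```python
import math

# Static rate-coded neuron: knowledge lives entirely in the weight vector.
def static_neuron(weights, inputs):
    return 1.0 / (1.0 + math.exp(-sum(w * x for w, x in zip(weights, inputs))))

# Leaky integrate-and-fire neuron: the membrane potential is dynamic state,
# so the response depends on input timing, not just instantaneous rates.
def lif_spikes(input_current, dt=1.0, tau=10.0, threshold=1.0):
    v, spikes = 0.0, []
    for t, i_t in enumerate(input_current):
        v += dt * (-v / tau + i_t)      # leaky integration of input
        if v >= threshold:              # fire and reset
            spikes.append(t)
            v = 0.0
    return spikes

print(static_neuron([0.5, -0.2], [1.0, 1.0]))
print(lif_spikes([0.15] * 50))          # steady drive -> regular spike train
```

The paper's argument is that representations should live in such evolving dynamics across populations, not in any single unit's weights.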

Relevance:

60.00%

Publisher:

Abstract:

Terahertz (THz) frequency radiation, 0.1 THz to 20 THz, is being investigated for biomedical imaging applications following the introduction of pulsed THz sources that produce picosecond pulses and function at room temperature. Owing to the broadband nature of the radiation, spectral and temporal information is available from radiation that has interacted with a sample; this information is exploited in the development of biomedical imaging tools and sensors. In this work, models to aid interpretation of broadband THz spectra were developed and evaluated. THz radiation lies on the boundary between regions best considered using a deterministic electromagnetic approach and those better analysed using a stochastic approach incorporating quantum mechanical effects, so two computational models to simulate the propagation of THz radiation in an absorbing medium were compared. The first was a thin film analysis and the second a stochastic Monte Carlo model. The Cole–Cole model was used to predict the variation with frequency of the physical properties of the sample and scattering was neglected. The two models were compared with measurements from a highly absorbing water-based phantom. The Monte Carlo model gave a prediction closer to experiment over 0.1 to 3 THz. Knowledge of the frequency-dependent physical properties, including the scattering characteristics, of the absorbing media is necessary. The thin film model is computationally simple to implement but is restricted by the geometry of the sample it can describe. The Monte Carlo framework, despite being initially more complex, provides greater flexibility to investigate more complicated sample geometries.
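The Cole–Cole model used to predict the sample's frequency-dependent properties is the standard empirical form for the complex permittivity of a polar medium such as water:

```latex
\hat{\varepsilon}(\omega) = \varepsilon_{\infty} + \frac{\varepsilon_{s} - \varepsilon_{\infty}}{1 + (i\omega\tau)^{1-\alpha}}
```

Here ε_s and ε_∞ are the static and high-frequency permittivities, τ the relaxation time, and α ∈ [0, 1) broadens the relaxation (α = 0 recovers the Debye model); the absorption coefficient and refractive index needed by both propagation models follow from the complex refractive index n + iκ = √ε̂.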

Relevance:

60.00%

Publisher:

Abstract:

The different triplet sequences in high molecular weight aromatic copolyimides comprising pyromellitimide units ("I") flanked by either ether-ketone ("K") or ether-sulfone ("S") residues show different binding strengths for pyrene-based tweezer-molecules. Such molecules bind primarily to the diimide unit through complementary π-π-stacking and hydrogen bonding. However, as shown by the magnitudes of ¹H NMR complexation shifts and tweezer-polymer binding constants, the triplet "SIS" binds tweezer-molecules more strongly than "KIS", which in turn binds such molecules more strongly than "KIK". Computational models of tweezer-polymer binding, together with single-crystal X-ray analyses of tweezer complexes with macrocyclic ether-imides, reveal that the variations in binding strength between the different triplet sequences arise from the different conformational preferences of the aromatic rings at diarylketone and diarylsulfone linkages. These preferences determine whether or not chain-folding and secondary π-π-stacking occur between the arms of the tweezer-molecule and the 4,4'-biphenylene units which flank the central diimide residue.
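Binding constants from ¹H NMR complexation shifts are conventionally extracted by fitting titration data to a 1:1 host-guest isotherm under fast exchange; the standard treatment (assumed here, not spelled out in the abstract) is:

```latex
\Delta\delta_{\mathrm{obs}} = \Delta\delta_{\max}\,\frac{[\mathrm{HG}]}{[\mathrm{H}]_{0}},
\qquad
K_{a} = \frac{[\mathrm{HG}]}{[\mathrm{H}][\mathrm{G}]}
```

with the complex concentration [HG] obtained from the quadratic mass balance in the total host and guest concentrations; fitting Δδ_obs across a titration yields the K_a values used to rank the triplet sequences.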