Abstract:
Antiplatelet medication is known to decrease adverse effects in patients with atherothrombotic disease. However, a considerable number of patients suffer atherothrombotic events despite ongoing antiplatelet medication. The aims of the study were 1) to evaluate individual variability in platelet function and compare the usability of different methods in detecting it, 2) to assess variability in the efficacy of antiplatelet medication with aspirin (acetylsalicylic acid) or the combination of aspirin and clopidogrel, and 3) to investigate the main genetic and clinical variables, as well as potential underlying mechanisms, of variability in the efficacy of antiplatelet medication. In comparisons of different platelet function tests in 19 healthy individuals, PFA-100® correlated with traditional methods of measuring platelet function and was thus considered appropriate for testing individual variability in platelet activity. The efficacy of ongoing treatment with 100 mg of aspirin daily was studied in 101 patients with coronary artery disease (CAD). Aspirin response was measured with arachidonic acid (AA)-induced platelet aggregation, which reflects cyclo-oxygenase (COX)-1-dependent thromboxane (Tx) A2 formation, and with PFA-100®, which evaluates platelet activation under high shear stress in the presence of collagen and epinephrine. Five percent of patients failed to show inhibition of AA-induced aggregation and 21% had normal PFA-100® results despite aspirin; these patients were considered non-responders to aspirin. Interestingly, the two methods of assessing aspirin efficacy, platelet aggregation and PFA-100®, identified different populations as aspirin non-responders. It could be postulated that PFA-100® actually measures enhanced platelet function, which is not directly associated with the TxA2 inhibition exerted by aspirin.
Clopidogrel efficacy was assessed in 50 patients who received a 300 mg loading dose of clopidogrel 2.5 h prior to percutaneous coronary intervention (PCI) and in 51 patients who were given a 300 mg loading dose combined with a five-day treatment of 75 mg clopidogrel daily, mimicking ongoing treatment. Clopidogrel response was assessed with ADP-induced aggregation, since the drug acts as an inhibitor of ADP-induced platelet activation. When patients received only a loading dose of clopidogrel prior to PCI, 40% did not gain measurable inhibition of their ADP-induced platelet activity (inhibition of 10% or less). Prolonging treatment so that all patients reached a plateau of clopidogrel-induced inhibition decreased the incidence of non-responders to 20%. Polymorphisms of COX-1 and GP VI, as well as diabetes and female gender, were associated with decreased in vitro aspirin efficacy. Diabetes also impaired the in vitro efficacy of short-term clopidogrel. Decreased response to clopidogrel was associated with limited inhibition by ARMX, a P2Y12-receptor antagonist, suggesting that clopidogrel resistance is receptor-dependent. Conclusions: Considerable numbers of CAD patients were non-responders to aspirin, clopidogrel, or both. In the future, platelet function tests may help to individually select effective and safe antiplatelet medication for these patients.
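The responder classification used above reduces to a simple threshold rule on measured aggregation. A minimal sketch, with hypothetical helper names (only the "inhibition of 10% or less" non-responder criterion comes from the abstract):

```python
def inhibition_pct(baseline_aggregation, on_drug_aggregation):
    """Percent inhibition of ADP-induced aggregation relative to the off-drug baseline."""
    return 100.0 * (baseline_aggregation - on_drug_aggregation) / baseline_aggregation

def is_non_responder(baseline, on_drug, threshold=10.0):
    """Non-responder: inhibition of 10% or less, the criterion stated in the abstract."""
    return inhibition_pct(baseline, on_drug) <= threshold

# Aggregation falling from 60% to 57% is only 5% inhibition -> non-responder
print(is_non_responder(60.0, 57.0))   # True
# Aggregation falling from 60% to 30% is 50% inhibition -> responder
print(is_non_responder(60.0, 30.0))   # False
```

The function names and the baseline/on-drug comparison are illustrative; the thesis itself reports inhibition percentages directly from the aggregation measurements.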
Abstract:
A better understanding of the limiting step in a first-order phase transition, the nucleation process, is of major importance to a variety of scientific fields ranging from atmospheric sciences to nanotechnology and even cosmology. This is because in most phase transitions the new phase is separated from the mother phase by a free energy barrier, which is crossed in a process called nucleation. Nowadays a significant fraction of all atmospheric particles is considered to be produced by vapor-to-liquid nucleation. In atmospheric sciences, as well as in other scientific fields, the theoretical treatment of nucleation is mostly based on the Classical Nucleation Theory. However, the Classical Nucleation Theory is known to have only limited success in predicting the rate at which vapor-to-liquid nucleation takes place under given conditions. This thesis studies unary homogeneous vapor-to-liquid nucleation from a statistical mechanics viewpoint. We apply Monte Carlo simulations of molecular clusters to calculate the free energy barrier separating the vapor and liquid phases and compare our results against laboratory measurements and Classical Nucleation Theory predictions. According to our results, the work of adding a monomer to a cluster in equilibrium vapor is accurately described by the liquid drop model applied by the Classical Nucleation Theory once the clusters are larger than some threshold size. The threshold cluster sizes contain only a few to some tens of molecules, depending on the interaction potential and temperature. However, the error made in modeling the smallest clusters as liquid drops results in an erroneous absolute value for the cluster work of formation throughout the size range, as predicted by the McGraw-Laaksonen scaling law.
By calculating correction factors to Classical Nucleation Theory predictions for the nucleation barriers of argon and water, we show that the corrected predictions produce nucleation rates in good agreement with experiments. For the smallest clusters, the deviation between the simulation results and the liquid drop values is accurately modelled by the low-order virial coefficients at modest temperatures and vapor densities, in other words, in the validity range of the non-interacting cluster theory of Frenkel, Band, and Bijl. Our results do not indicate a need for a size-dependent replacement free energy correction. The results also indicate that the Classical Nucleation Theory predicts the size of the critical cluster correctly. We also present a new method for calculating the equilibrium vapor density, the size dependence of the surface tension, and the planar surface tension directly from cluster simulations. Finally, we show that the size dependence of the cluster surface tension at the equimolar surface is a function of the virial coefficients, a result confirmed by our cluster simulations.
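For context, the Classical Nucleation Theory quantities discussed above take a standard textbook form; the expressions and symbols below (planar surface tension \sigma, liquid molecular volume v_l, supersaturation S) are not given in the abstract itself:

```latex
% Critical cluster radius from the liquid drop model
r^{*} = \frac{2 \sigma v_{l}}{k_{B} T \ln S}

% Height of the free energy barrier between vapor and liquid
\Delta G^{*} = \frac{16 \pi \sigma^{3} v_{l}^{2}}{3\,(k_{B} T \ln S)^{2}}

% Nucleation rate: a kinetic prefactor times the Boltzmann factor of the barrier
J = J_{0} \exp\!\left( -\frac{\Delta G^{*}}{k_{B} T} \right)
```

Because the barrier enters the rate exponentially, even a modest correction factor to \Delta G^{*}, such as the ones computed in the thesis from Monte Carlo simulations, shifts the predicted rate J by orders of magnitude.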
Abstract:
In this thesis we examine multi-field inflationary models of the early Universe. Since non-Gaussianities may allow for the possibility to discriminate between models of inflation, we compute deviations from a Gaussian spectrum of primordial perturbations by extending the delta-N formalism. We use N-flation as a concrete model; our findings show that these models are generically indistinguishable as long as the slow-roll approximation is valid. Besides computing non-Gaussianities, we also investigate preheating after multi-field inflation. Within the framework of N-flation, we find that preheating via parametric resonance is suppressed, an indication that the old theory of preheating is applicable. In addition to studying non-Gaussianities and preheating in multi-field inflationary models, we study magnetogenesis in the early Universe. To this aim, we propose a mechanism to generate primordial magnetic fields via rotating cosmic string loops. Magnetic fields in the micro-Gauss range have been observed in galaxies and clusters, but their origin has remained elusive. We consider a network of strings and find that rotating cosmic string loops, which are continuously produced in such networks, are viable candidates for magnetogenesis with relevant strengths and length scales, provided we assume a high string tension and an efficient dynamo.
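The delta-N formalism mentioned above has a standard form, not spelled out in the abstract: the curvature perturbation \zeta is expanded in the horizon-crossing field fluctuations, with N_{a} = \partial N / \partial \varphi^{a} the derivatives of the number of e-folds, and the local non-Gaussianity parameter follows from the second-order term:

```latex
% Curvature perturbation as an expansion in field fluctuations (delta-N)
\zeta = \sum_{a} N_{a}\, \delta\varphi^{a}
      + \frac{1}{2} \sum_{a,b} N_{ab}\, \delta\varphi^{a} \delta\varphi^{b} + \dots

% Local non-Gaussianity parameter in terms of derivatives of N
\frac{6}{5} f_{NL} = \frac{\sum_{a,b} N_{a} N_{b} N_{ab}}{\left( \sum_{c} N_{c} N_{c} \right)^{2}}
```

Computing f_{NL} for a concrete model such as N-flation then amounts to evaluating these derivatives of N along the slow-roll trajectory.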
Abstract:
A repetitive sequence collection is one where portions of a base sequence of length n are repeated many times with small variations, forming a collection of total length N. Examples of such collections are version control data and genome sequences of individuals, where the differences can be expressed by lists of basic edit operations. Flexible and efficient data analysis on such a typically huge collection is feasible using suffix trees. However, a suffix tree occupies O(N log N) bits, which very soon inhibits in-memory analyses. Recent advances in full-text self-indexing reduce the space of the suffix tree to O(N log σ) bits, where σ is the alphabet size. In practice, the space reduction is more than 10-fold, for example on the suffix tree of the Human Genome. However, this reduction factor remains constant when more sequences are added to the collection. We develop a new family of self-indexes suited for the repetitive sequence collection setting. Their expected space requirement depends only on the length n of the base sequence and the number s of variations in its repeated copies. That is, the space reduction factor is no longer constant, but depends on N / n. We believe the structures developed in this work will provide a fundamental basis for the storage and retrieval of individual genomes as they become available due to rapid progress in sequencing technologies.
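The setting can be made concrete with a toy sketch. The base-plus-edits representation below is an illustration of why space can depend on n and s rather than N; it is not the actual self-index developed in the thesis:

```python
# Each repeated copy is stored as the shared base sequence plus a short list
# of point substitutions, so storage grows with the base length n and the
# number of variations s, not with the total collection length N.

def apply_edits(base, edits):
    """Rebuild one copy from the base sequence and (position, new_char) substitutions."""
    seq = list(base)
    for pos, ch in edits:
        seq[pos] = ch
    return "".join(seq)

base = "ACGTACGTAC"            # base sequence of length n = 10
copies = [
    [],                        # an identical copy: zero edits
    [(2, "T")],                # one variation
    [(0, "G"), (9, "T")],      # two variations
]

collection = [apply_edits(base, e) for e in copies]
print(collection)  # ['ACGTACGTAC', 'ACTTACGTAC', 'GCGTACGTAT']
```

Supporting suffix-tree-style queries directly on such a compressed representation, without expanding the copies, is precisely what the self-indexes of the thesis are designed for.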
Abstract:
Thin films are the basis of much recent technological advance, ranging from coatings with mechanical or optical benefits to platforms for nanoscale electronics. In the latter, semiconductors have been the norm ever since silicon became the main construction material for a multitude of electronic components. The array of characteristics of silicon-based systems can be widened by manipulating the structure of the thin films at the nanoscale, for instance by making them porous. The different characteristics of different films can then to some extent be combined by simple superposition. Thin films can be manufactured using many different methods. One emerging field is cluster beam deposition, where aggregates of hundreds or thousands of atoms are deposited one by one to form a layer, the characteristics of which depend on the parameters of deposition. One critical parameter is the deposition energy, which dictates how porous, if at all, the layer becomes. Other parameters, such as the sputtering rate and aggregation conditions, affect the size and consistency of the individual clusters. Understanding nanoscale processes, which cannot be observed experimentally, is fundamental to optimizing experimental techniques and inventing new possibilities for advances at this scale. Atomistic computer simulations offer a window to the world of nanometers and nanoseconds in a way unparalleled by the most accurate of microscopes. Transmission electron microscope image simulations can then bridge this gap by providing a tangible link between the simulated and the experimental. In this thesis, the entire process of cluster beam deposition is explored using molecular dynamics and image simulations. The process begins with the formation of the clusters, which is investigated for Si/Ge in an Ar atmosphere. The structure of the clusters is optimized to bring it as close to the experimental ideal as possible.
Then, clusters are deposited, one by one, onto a substrate until a sufficiently thick layer has been produced. Finally, the concept is expanded by further deposition with different parameters, resulting in multiple superimposed layers of different porosities. This work demonstrates that the aggregation of clusters is not entirely understood within the scope of the approximations used in the simulations; yet it is also shown how the continued deposition of clusters with varying deposition energy can lead to a novel kind of nanostructured thin film: a multielemental porous multilayer. According to theory, these new structures have characteristics that can be tailored for a variety of applications, with precision heretofore unseen in conventional multilayer manufacture.
Abstract:
Aerosol particles deteriorate air quality, atmospheric visibility, and our health. They affect the Earth's climate by absorbing and scattering sunlight, forming clouds, and also via several feedback mechanisms. The net effect on the radiative balance is negative, i.e. cooling, which means that particles counteract the effect of greenhouse gases. However, particles are one of the poorly known pieces in the climate puzzle. Some airborne particles are natural, some anthropogenic; some enter the atmosphere in particle form, while others form by gas-to-particle conversion. Unless the sources and the dynamical processes shaping the particle population are quantified, they cannot be incorporated into climate models. The molecular-level understanding of new particle formation is still inadequate, mainly due to the lack of suitable measurement techniques for detecting the smallest particles and their precursors. This thesis has contributed to our ability to measure newly formed particles. Three new condensation particle counter applications for measuring the concentration of nano-particles were developed. The suitability of the methods for detecting both charged and electrically neutral particles and molecular clusters as small as 1 nm in diameter was thoroughly tested in both laboratory and field conditions. It was shown that condensation particle counting has reached the size scale of individual molecules, and that besides measuring concentration, the counters can also be used to obtain size information. In addition to atmospheric research, the particle counters could have various applications in other fields, especially in nanotechnology. Using the new instruments, the first continuous time series of neutral sub-3 nm particle concentrations were measured at two field sites representing two different kinds of environments: the boreal forest and the Atlantic coastline, both of which are known to be hot-spots for new particle formation.
The contribution of ions to the total concentrations in this size range was estimated, and it could be concluded that the fraction of ions was usually minor, especially in boreal forest conditions. Since the ionization rate is connected to the amount of cosmic rays entering the atmosphere, the relative contribution of neutral and charged nucleation mechanisms extends beyond academic interest and links the research directly to the current climate debate.
Abstract:
Previous research on Human Resource Management (HRM) has focused extensively on the potential relationships between the use of HRM practices and organizational performance. Extant research in HRM has been based on the underlying assumption that HRM practices can enhance organizational performance through their impact on positive employee attitudes and performance, that is, employee reactions to HRM. At the current state of research, however, it remains unclear how employees come to perceive and react to HRM practices and to what extent employees in organizations, units, and teams react to such practices in similar or widely different ways. In fact, recent HRM studies indicate that employee reactions to HRM may be far less homogeneous than assumed. This raises the question of whether the linkage between HRM and organizational outcomes can be explained by employee reactions in terms of attitudes and performance if these reactions are largely idiosyncratic. Accordingly, this thesis aims to shed light on the processes that shape individuals' reactions to HRM practices and on how these processes may influence the variance, or sharedness, in such reactions among employees in organizations, units, and teams. By theoretically developing and empirically examining the effects of employee perceptions of HRM practices from the perspective of 'HRM as signaling' and psychological contract theory, the main contributions of this thesis address the following research questions: i) How do employee perceptions of HRM practices relate to individual and collective employee attitudes and performance? ii) How do employee perceptions of HRM practices relate to variance in employee attitudes and performance? iii) How does collective employee performance mediate the relationship between employee perceptions of HRM practices and organizational performance?
Regarding the first research question, the findings indicate that individuals do respond positively to HRM practices by adjusting their felt obligations towards the employer. This finding is in line with the idea of HRM as a signaling device in which each HRM practice, implicitly or explicitly, sends signals to employees about promised rewards (inducements) and the behaviors (obligations) expected in return. The relationship was also confirmed at the group level of analysis. What is more, variance was found to play an important role, in that employee groups with more similar perceptions of the HRM system displayed a stronger relationship between HRM and employee obligations. Concerning the second question, the findings were somewhat contradictory, in that a strong HRM system was found to be negatively related to variance in employee performance but not to variance in employee obligations. Regarding the third question, the findings confirmed linkages between the HRM system and organizational performance at the group level and between the HRM system and employee performance at the individual level. Also, the entire chain of links from the HRM system through variance in employee performance, and further through the level of employee performance to organizational performance, was significant.