859 results for Collision theory model
Abstract:
In condensed matter systems, the interfacial tension plays a central role in a multitude of phenomena. It is the driving force for nucleation processes, determines the shape and structure of crystals, and is important for industrial applications. Despite its importance, the interfacial tension is hard to determine both in experiments and in computer simulations. While sophisticated simulation methods exist to compute liquid-vapor interfacial tensions, current methods for solid-liquid interfaces produce unsatisfactory results.

As a first approach to this topic, the influence of the interfacial tension on nuclei is studied within the three-dimensional Ising model. This model is well suited because, despite its simplicity, one can learn much about the nucleation of crystalline nuclei. Below the so-called roughening temperature, nuclei in the Ising model are no longer spherical but become cubic because of the anisotropy of the interfacial tension. This is similar to crystalline nuclei, which are in general not spherical but resemble convex polyhedra with flat facets on the surface. In this context, the problem of distinguishing between the two bulk phases in the vicinity of the diffuse droplet surface is addressed. A new definition is found which correctly determines the volume of a droplet in a given configuration when compared to the volume predicted by simple macroscopic assumptions.

To compute the interfacial tension of solid-liquid interfaces, a new Monte Carlo method called the "ensemble switch method" is presented, which makes it possible to compute the interfacial tension of liquid-vapor as well as solid-liquid interfaces with great accuracy. In the past, the dependence of the interfacial tension on the finite size and shape of the simulation box has often been neglected, although there is a nontrivial dependence on the box dimensions.
As a consequence, one needs to systematically increase the box size and extrapolate to infinite volume in order to accurately predict the interfacial tension. Therefore, a thorough finite-size scaling analysis is established in this thesis. Logarithmic corrections to the finite-size scaling are motivated and identified, which are of leading order and therefore must not be neglected. The astounding feature of these logarithmic corrections is that they do not depend at all on the model under consideration. Using the ensemble switch method, the validity of a finite-size scaling ansatz containing the aforementioned logarithmic corrections is carefully tested and confirmed. Combining the finite-size scaling theory with the ensemble switch method, the interfacial tension of several model systems, ranging from the Ising model to colloidal systems, is computed with great accuracy.
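The extrapolation to infinite volume described above can be sketched numerically. The specific fitting ansatz below, γ(L) = γ∞ + a·ln(L)/L² + b/L² for a three-dimensional box of linear size L, as well as the function name and synthetic coefficients, are illustrative assumptions consistent with the abstract's claim of leading-order logarithmic corrections, not the thesis's actual fit:

```python
import numpy as np

def extrapolate_gamma(L, gamma_L):
    """Fit gamma(L) = gamma_inf + a*ln(L)/L**2 + b/L**2 (illustrative ansatz)
    by linear least squares and return the L -> infinity estimate gamma_inf."""
    L = np.asarray(L, dtype=float)
    # Design matrix: one column each for gamma_inf, ln(L)/L^2, and 1/L^2.
    A = np.column_stack([np.ones_like(L), np.log(L) / L**2, 1.0 / L**2])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(gamma_L, dtype=float), rcond=None)
    return coeffs[0]  # the constant term is the infinite-volume value

# Synthetic data generated with a known infinite-volume value of 1.25:
L = np.array([8, 12, 16, 24, 32, 48, 64], dtype=float)
gamma_L = 1.25 + 0.8 * np.log(L) / L**2 - 0.3 / L**2
print(extrapolate_gamma(L, gamma_L))  # recovers 1.25
```

Because the logarithmic term is of leading order, omitting the ln(L)/L² column from the design matrix would bias the extrapolated value, which is the practical content of the finite-size scaling analysis.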
Abstract:
At Airbus GmbH (Hamburg), a new design of the Rear Pressure Bulkhead (RPB) has been developed for the A320 family. The new model is formed with vacuum forming technology, during which the wrinkling phenomenon occurs. This thesis describes an analytical model for the prediction of wrinkling based on the energy method of Timoshenko. Large-deflection theory is used to analyze two case studies: a simply supported circular thin plate stamped by a spherical punch, and a simply supported circular thin plate formed with the vacuum forming technique. If the edges are free to displace radially, thin plates develop radial wrinkles near the edge at a central deflection of approximately four plate thicknesses (w0/h ≈ 4) when stamped by a spherical punch and approximately three plate thicknesses (w0/h ≈ 3) when formed with the vacuum forming technique. Initially there are four symmetrical wrinkles, but their number increases as the central deflection is increased. Using experimental results, the snap-through phenomenon is described.
Abstract:
The purpose of this thesis is to present some fundamental results about model categories, and to give some examples of categories that can be equipped with a model structure.
Abstract:
The theory of ecological speciation suggests that assortative mating evolves most easily when mating preferences are directly linked to ecological traits that are subject to divergent selection. Sensory adaptation can play a major role in this process, because selective mating is often mediated by sexual signals: bright colours, complex song, pheromone blends and so on. When divergent sensory adaptation affects the perception of such signals, mating patterns may change as an immediate consequence. Alternatively, mating preferences can diverge as a result of indirect effects: assortative mating may be promoted by selection against intermediate phenotypes that are maladapted to their (sensory) environment. For Lake Victoria cichlids, the visual environment constitutes an important selective force that is heterogeneous across geographical and water depth gradients. We investigate the direct and indirect effects of this heterogeneity on the evolution of female preferences for alternative male nuptial colours (red and blue) in the genus Pundamilia. Here, we review the current evidence for divergent sensory drive in this system, extract general principles, and discuss future perspectives.
Abstract:
The SVWN, BVWN, BP86, BLYP, BPW91, B3P86, B3LYP, B3PW91, B1LYP, mPW1PW, and PBE1PBE density functionals, as implemented in Gaussian 98 and Gaussian 03, were used to calculate ΔG° and ΔH° values for 17 deprotonation reactions where the experimental values are accurately known. The PBE1PBE and B3P86 functionals are shown to compute results with accuracy comparable to more computationally intensive compound model chemistries. A rationale for the relative performance of various functionals is explored.
Abstract:
New geochronologic, geochemical, sedimentologic, and compositional data from the central Wrangell volcanic belt (WVB) document basin development and volcanism linked to subduction of overthickened oceanic crust beneath the northern Pacific plate margin. The Frederika Formation and overlying Wrangell Lavas comprise >3 km of sedimentary and volcanic strata exposed in the Wrangell Mountains of south-central Alaska (United States). Measured stratigraphic sections and lithofacies analyses document lithofacies associations that reflect deposition in alluvial-fluvial-lacustrine environments routinely influenced by volcanic eruptions. Expansion of intrabasinal volcanic centers prompted progradation of vent-proximal volcanic aprons across basinal environments. Coal deposits, lacustrine strata, and vertical juxtaposition of basinal to proximal lithofacies indicate active basin subsidence that is attributable to heat flow associated with intrabasinal volcanic centers and extension along intrabasinal normal faults. The orientation of intrabasinal normal faults is consistent with transtensional deformation along the Totschunda-Fairweather fault system. Paleocurrents, compositional provenance, and detrital geochronologic ages link sediment accumulation to erosion of active intrabasinal volcanoes and, to a lesser extent, Mesozoic igneous sources. Geochemical compositions of interbedded lavas are dominantly calc-alkaline, range from basaltic andesite to rhyolite in composition, and share geochemical characteristics with Pliocene-Quaternary phases of the western WVB linked to subduction-related magmatism. The U/Pb ages of tuffs and 40Ar/39Ar ages of lavas indicate that basin development and volcanism commenced by 12.5-11.0 Ma and persisted until at least ca. 5.3 Ma. Eastern sections yield older ages (12.5-9.3 Ma) than western sections (9.6-8.3 Ma). Samples from two western sections yield even younger ages of 5.3 Ma.
Integration of new and published stratigraphic, geochronologic, and geochemical data from the entire WVB permits a comprehensive interpretation of basin development and volcanism within a regional tectonic context. We propose a model in which diachronous volcanism and transtensional basin development reflect progressive insertion of a thickened oceanic crustal slab of the Yakutat microplate into the arcuate continental margin of southern Alaska coeval with reported changes in plate motions. Oblique northwestward subduction of a thickened oceanic crustal slab during Oligocene to Middle Miocene time produced transtensional basins and volcanism along the eastern edge of the slab along the Duke River fault in Canada and subduction-related volcanism along the northern edge of the slab near the Yukon-Alaska border. Volcanism and basin development migrated progressively northwestward into eastern Alaska during Middle Miocene through Holocene time, concomitant with a northwestward shift in plate convergence direction and subduction collision of progressively thicker crust against the syntaxial plate margin.
Abstract:
Large-scale simulations and analytical theory have been combined to obtain the nonequilibrium velocity distribution, f(v), of randomly accelerated particles in suspension. The simulations are based on an event-driven algorithm, generalized to include friction. They reveal strongly anomalous but largely universal distributions, which are independent of volume fraction and collision process; this suggests that a one-particle model should capture all the essential features. We have formulated this one-particle model and solved it analytically in the limit of strong damping, where we find that f(v) decays as 1/v for multiple decades, eventually crossing over to a Gaussian decay for the largest velocities. Many-particle simulations and numerical solution of the one-particle model agree for all values of the damping.
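The origin of a 1/v regime under strong damping can be illustrated with a deliberately stripped-down caricature (an illustrative assumption, not the authors' actual model): the speed is reset at Poisson-distributed collision times and decays exponentially in between, so sampling at a random time since the last kick spreads the speed nearly uniformly in log v over many decades, i.e. p(v) ≈ 1/v:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_velocities(n_samples, gamma=10.0, rate=1.0, kick=1.0):
    """Caricature of the strongly damped one-particle model: the speed is
    reset to `kick` at Poisson collision times (rate `rate`) and decays as
    exp(-gamma*t) in between.  Sampling the speed at the (exponentially
    distributed) time since the last kick gives p(v) ~ v**(rate/gamma - 1),
    which is close to 1/v when gamma >> rate (strong damping)."""
    t = rng.exponential(1.0 / rate, size=n_samples)
    return kick * np.exp(-gamma * t)

v = sample_velocities(200_000)
# For p(v) ~ 1/v, logarithmically spaced bins collect roughly equal counts.
bins = np.logspace(-3, -1, 5)   # probe the regime well below the kick scale
counts, _ = np.histogram(v, bins=bins)
print(counts)
```

The parameters (γ = 10, rate = 1, kick = 1) are arbitrary; the caricature says nothing about the Gaussian tail at the largest velocities, which in the abstract's model comes from the kick statistics themselves.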
Search for a standard model Higgs boson in the H→ZZ→ℓ(+)ℓ(-)νν decay channel with the ATLAS detector
Abstract:
A search for a heavy standard model Higgs boson decaying via H→ZZ→ℓ(+)ℓ(-)νν, where ℓ=e, μ, is presented. It is based on proton-proton collision data at √s=7 TeV, collected by the ATLAS experiment at the LHC in the first half of 2011 and corresponding to an integrated luminosity of 1.04 fb(-1). The data are compared to the expected standard model backgrounds. The data and the background expectations are found to be in agreement and upper limits are placed on the Higgs boson production cross section over the entire mass window considered; in particular, the production of a standard model Higgs boson is excluded in the region 340
Abstract:
Introduction: Advances in biotechnology have shed light on many biological processes. In biological networks, nodes are used to represent the function of individual entities within a system and have historically been studied in isolation. Network structure adds edges that enable communication between nodes. An emerging field combines node function and network structure to yield network function. One of the most complex networks known in biology is the neural network within the brain. Modeling neural function will require an understanding of networks, dynamics, and neurophysiology. In this work, modeling techniques are developed to work at this complex intersection. Methods: Spatial game theory was developed by Nowak in the context of modeling evolutionary dynamics, or the way in which species evolve over time. Spatial game theory offers a two-dimensional view of analyzing the state of neighbors and updating based on the surroundings. Our work builds upon this foundation by studying evolutionary game theory networks with respect to neural networks. The novel concept is that neurons may adopt a particular strategy that allows propagation of information. The strategy may therefore act as the mechanism for gating. Furthermore, the strategy of a neuron, as in a real brain, is impacted by the strategy of its neighbors. The techniques of spatial game theory already established by Nowak are repeated to explain two basic cases and validate the implementation of the code. Two novel modifications are introduced in Chapters 3 and 4 that build on this network and may reflect neural networks. Results: The introduction of two novel modifications, mutation and rewiring, in large parametric studies resulted in dynamics that had an intermediate number of nodes firing at any given time. Further, even small mutation rates result in different dynamics more representative of the hypothesized ideal state.
Conclusions: In both modifications to Nowak's model, the results demonstrate that the network does not become locked into a particular global state of passing all information or blocking all information. It is hypothesized that normal brain function occurs within this intermediate range and that a number of diseases are the result of moving outside of this range.
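A Nowak-style spatial game with the mutation modification can be sketched as follows. The payoff values, neighbourhood, synchronous best-neighbour imitation rule, and mutation rate are hypothetical stand-ins for illustration, not the thesis's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def step(grid, b=1.8, mu=0.01):
    """One synchronous update of a Nowak-style spatial game with mutation.
    grid: 2-D array of strategies (1 = cooperate, 0 = defect).
    A cooperator earns 1 per cooperating neighbour; a defector earns b per
    cooperating neighbour.  Each site imitates the highest-scoring site in
    its von Neumann neighbourhood, then flips with probability mu."""
    n = grid.shape[0]
    neigh = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    # Count cooperating neighbours on a periodic lattice.
    coop_neighbours = sum(np.roll(grid, s, axis=(0, 1)) for s in neigh)
    payoff = np.where(grid == 1, coop_neighbours, b * coop_neighbours)
    new = grid.copy()
    for i in range(n):
        for j in range(n):
            best_s, best_p = grid[i, j], payoff[i, j]
            for di, dj in neigh:
                ii, jj = (i + di) % n, (j + dj) % n
                if payoff[ii, jj] > best_p:
                    best_s, best_p = grid[ii, jj], payoff[ii, jj]
            new[i, j] = best_s
    # Mutation keeps the lattice from locking into one global state.
    flip = rng.random(grid.shape) < mu
    return np.where(flip, 1 - new, new)

grid = rng.integers(0, 2, size=(30, 30))
for _ in range(50):
    grid = step(grid)
print(grid.mean())  # fraction of cooperators
```

The mutation step plays the role described in the Results: even a small flip probability continually reseeds both strategies, so the lattice hovers in an intermediate state rather than converging to all-pass or all-block.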
Abstract:
In business literature, the conflicts among workers, shareholders and the management have been studied mostly in the frame of stakeholder theory. The stakeholder theory recognizes this issue as an agency problem, and tries to solve the problem by establishing a contractual relationship between the agent and principals. However, as Marcoux pointed out, the appropriateness of the contract as a medium to reduce the agency problem should be questioned. As an alternative, the cooperative model minimizes agency costs by integrating the roles of workers, owners and management. Mondragon Corporation is a successful example of the cooperative model which grew into the sixth largest corporation in Spain. However, the cooperative model has long been ignored in discussions of corporate governance, mainly because the success of the cooperative model is extremely difficult to duplicate in reality. This thesis hopes to revitalize the scholarly examination of cooperatives by developing a new model that overcomes the fundamental problem in the cooperative model: the limited access to capital markets. By dividing ownership interest into a financial interest and a control interest, the dual ownership structure allows cooperatives to issue stock in the capital market by turning the financial interest into a tradable financial product.
Abstract:
Collision-induced dissociation (CID) of peptides using tandem mass spectrometry (MS) has been used to determine the identity of peptides and other large biological molecules. Mass spectrometry (MS) is a useful tool for determining the identity of molecules based on their interaction with electromagnetic fields. If coupled with another method like infrared (IR) vibrational spectroscopy, MS can provide structural information, but in its own right, MS can only provide the mass-to-charge (m/z) ratio of the fragments produced, which may not be enough information to determine the mechanism of the collision-induced dissociation (CID) of the molecule. In this case, theoretical calculations provide a useful companion for MS data and yield clues about the energetics of the dissociation. In this study, negative ion electrospray tandem MS was used to study the CID of the deprotonated dipeptide glycine-serine (Gly-Ser). Though negative ion MS is not as popular a choice as positive ion MS, studies by Bowie et al. show that it yields unique clues about molecular structure which complement positive ion spectroscopy, such as characteristic fragmentations like the loss of formaldehyde from the serine residue [2]. The increase in the collision energy in the mass spectrometer alters the flexibility of the dipeptide backbone, enabling isomerizations (reactions not resulting in a fragment loss) and dissociations to take place. The mechanism of the CID of Gly-Ser was studied using two computational methods, B3LYP/6-311+G* and M06-2X/6-311++G**. The main pathway for molecular dissociation was analyzed in 5 conformers in an attempt to verify the initial mechanism proposed by Dr. James Swan after examination of the MS data. The results suggest that the loss of formaldehyde from serine, which Bowie et al.
indicate is characteristic of the presence of serine in a protein residue, is an endothermic reaction that is made possible by the conversion of the translational energy of the ion into internal energy as the ion collides with the inert collision gas. It has also been determined that the M06-2X functional's improved description of medium and long-range correlation makes it more effective than the B3LYP functional at finding elusive transition states. M06-2X also more accurately predicts the energy of those transition states than does B3LYP. A second CID mechanism, which passes through intermediates with the same m/z ratio as the main pathway for molecular dissociation, but different structures, including a diketopiperazine intermediate, was also studied. This pathway for molecular dissociation was analyzed with 3 conformers and the M06-2X functional, due to its previously determined effectiveness. The results suggest that the latter pathway, which meets the same intermediate masses as the first mechanism, is lower in overall energy and therefore a more likely pathway of dissociation than the first mechanism.
Abstract:
Mr. Pechersky set out to examine a specific feature of the employer-employee relationship in Russian business organisations. He wanted to study to what extent the so-called "moral hazard" is being solved (if it is being solved at all), whether there is a relationship between pay and performance, and whether there is a correlation between economic theory and Russian reality. Finally, he set out to construct a model of the Russian economy that better reflects the way it actually functions than do certain other well-known models (for example models of incentive compensation, the Shapiro-Stiglitz model etc.). His report was presented to the RSS in the form of a series of manuscripts in English and Russian, and on disc, with many tables and graphs. He begins by pointing out the different examples of randomness that exist in the relationship between employee and employer. Firstly, results are frequently affected by circumstances outside the employee's control that have nothing to do with how intelligently, honestly, and diligently the employee has worked. When rewards are based on results, uncontrollable randomness in the employee's output induces randomness in their incomes. A second source of randomness involves the outside events that are beyond the control of the employee that may affect his or her ability to perform as contracted. A third source of randomness arises when the performance itself (rather than the result) is measured, and the performance evaluation procedures include random or subjective elements. Mr. Pechersky's study shows that in Russia the third source of randomness plays an important role. Moreover, he points out that employer-employee relationships in Russia are sometimes opposite to those in the West. Drawing on game theory, he characterises the Western system as follows. The two players are the principal and the agent, who are usually representative individuals. 
The principal hires an agent to perform a task, and the agent acquires an information advantage concerning his actions or the outside world at some point in the game, i.e. it is assumed that the employee is better informed. In Russia, on the other hand, incentive contracts are typically negotiated in situations in which the employer has the information advantage concerning outcome. Mr. Pechersky schematises it thus. Compensation (the wage) is W and consists of a base amount, plus a portion that varies with the outcome, x. So W = a + bx, where b is used to measure the intensity of the incentives provided to the employee. This means that one contract will be said to provide stronger incentives than another if it specifies a higher value for b. This is the incentive contract as it operates in the West. The key feature distinguishing the Russian example is that x is observed by the employer but is not observed by the employee. So the employer promises to pay in accordance with an incentive scheme, but since the outcome is not observable by the employee the contract cannot be enforced, and the question arises: is there any incentive for the employer to fulfil his or her promises? Mr. Pechersky considers two simple models of employer-employee relationships displaying the above type of information asymmetry. In a static framework the obtained result is somewhat surprising: at the Nash equilibrium the employer pays nothing, even though his objective function contains a quadratic term reflecting negative consequences for the employer if the actual level of compensation deviates from the expectations of the employee. This can lead, for example, to labour turnover, or the expenses resulting from a bad reputation. In a dynamic framework, the conclusion can be formulated as follows: the higher the discount factor, the higher the incentive for the employer to be honest in his/her relationships with the employee.
If the discount factor is taken to be a parameter reflecting the degree of (un)certainty (the higher the degree of uncertainty is, the lower is the discount factor), we can conclude that the answer to the formulated question depends on the stability of the political, social and economic situation in a country. Mr. Pechersky believes that the strength of a market system with private property lies not just in its providing the information needed to compute an efficient allocation of resources in an efficient manner. At least equally important is the manner in which it accepts individually self-interested behaviour, but then channels this behaviour in desired directions. People do not have to be cajoled, artificially induced, or forced to do their parts in a well-functioning market system. Instead, they are simply left to pursue their own objectives as they see fit. Under the right circumstances, people are led by Adam Smith's "invisible hand" of impersonal market forces to take the actions needed to achieve an efficient, co-ordinated pattern of choices. The problem is that, as Mr. Pechersky sees it, there is no reason to believe that the circumstances in Russia are right, and the invisible hand is doing its work properly. Political instability, social tension and other circumstances prevent it from doing so. Mr. Pechersky believes that the discount factor plays a crucial role in employer-employee relationships. Such relationships can be considered satisfactory from a normative point of view, only in those cases where the discount factor is sufficiently large. Unfortunately, in modern Russia the evidence points to the typical discount factor being relatively small. This fact can be explained as a manifestation of aversion to risk of economic agents. Mr. Pechersky hopes that when political stabilisation occurs, the discount factors of economic agents will increase, and the agent's behaviour will be explicable in terms of more traditional models.
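The dynamic-framework conclusion, that honesty becomes sustainable only for a sufficiently large discount factor, can be made concrete with a standard repeated-game (grim-trigger style) calculation; the payoff numbers below are hypothetical illustrations, not figures from Mr. Pechersky's models:

```python
def honesty_sustainable(one_shot_gain, per_period_loss, delta):
    """Reneging on the promised wage yields `one_shot_gain` today but costs
    `per_period_loss` (reputation, labour turnover) in every later period,
    discounted by `delta`.  Honesty pays iff the discounted stream of future
    losses, delta/(1-delta) * per_period_loss, outweighs the immediate gain."""
    future_losses = delta / (1.0 - delta) * per_period_loss
    return future_losses >= one_shot_gain

# The critical discount factor solves delta/(1-delta)*L = G, i.e. delta* = G/(G+L).
G, L = 100.0, 25.0   # hypothetical gain from reneging / per-period reputation loss
print(honesty_sustainable(G, L, 0.9))   # 0.9 > 0.8 = G/(G+L): honesty pays
print(honesty_sustainable(G, L, 0.5))   # 0.5 < 0.8: the employer reneges
```

Reading the discount factor as a proxy for political and economic stability, as the abstract does, a small delta (unstable environment) puts the employer below the critical threshold, matching the static Nash result that the employer pays nothing.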
Abstract:
Is there a psychological basis for teaching and learning in the context of a liberal education, and if so, what might such a psychological basis look like? Traditional teaching and assessment often emphasize remembering facts and, to some extent, analyzing ideas. Such skills are important, but they leave out the aspects of thinking that are most important not only in liberal education, but in life in general. In this article, I propose a theory called WICS, which is an acronym for wisdom, intelligence, and creativity, synthesized. The basic idea underlying this theory is that, through liberal education, students need to acquire creative skills and attitudes to generate new ideas about how to adapt flexibly to a rapidly changing world, analytical skills and attitudes to ascertain whether these new ideas are good ones, practical skills and attitudes to implement the new ideas and convince others of their value, and wisdom-based skills and attitudes in order to ensure that the new ideas help to achieve a common good through the infusion of positive ethical values.