996 results for Structural Ledge Theories
Abstract:
The edge-to-edge matching crystallographic model has been used to predict all the orientation relationships (ORs) between crystals that have simple hexagonal close-packed (HCP) and body-centered cubic (BCC) structures. Using the critical values for the interatomic spacing misfit along the matching directions and the d-value mismatch between matching planes, the model predicted all four common ORs, namely the Burgers OR, the Potter OR, the Pitsch-Schrader OR and the Rong-Dunlop OR, together with the corresponding habit planes. Taking the c(H)/a(H) and a(H)/a(B) ratios as variables, where H and B denote the HCP and BCC structures respectively, the model also predicted the relationship between these variables and the four ORs. These predictions are fully consistent with the published experimental results. As was the case for the FCC/BCC system, the edge-to-edge matching model has been shown to be a powerful tool for predicting the crystallographic features of diffusion-controlled phase transformations. (C) 2004 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
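For reference, the two matching conditions in the edge-to-edge model are relative misfits between atom rows and between planes. The critical values below are the ones commonly quoted in the edge-to-edge matching literature; they are added here for orientation and are not spelled out in the abstract:

$$f_r = \frac{|r_{\mathrm{HCP}} - r_{\mathrm{BCC}}|}{r_{\mathrm{BCC}}} \lesssim 10\%, \qquad f_d = \frac{|d_{\mathrm{HCP}} - d_{\mathrm{BCC}}|}{d_{\mathrm{BCC}}} \lesssim 6\%,$$

where r is the interatomic spacing along the matching (relatively) close-packed rows and d is the interplanar spacing of the matching planes.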
Abstract:
The basis of the present authors' edge-to-edge matching model for understanding the crystallography of partially coherent precipitates is the minimization of the energy of the interface between the two phases. For relatively simple crystal structures, this energy minimization occurs when close-packed, or relatively close-packed, rows of atoms match across the interface. Hence, the fundamental principle behind edge-to-edge matching is that the directions in each phase that correspond to the edges of the planes that meet in the interface should be close-packed, or relatively close-packed, rows of atoms. A few of the recently reported examples of what is termed edge-to-edge matching appear to ignore this fundamental principle. By comparing theoretical predictions with available experimental data, this article explores the validity of this critical atom-row coincidence condition, both in situations where the two phases have simple crystal structures and in those where the precipitate has a more complex structure.
Abstract:
Using a synthesis of the functional integral and operator approaches, we discuss the fermion-boson mapping and the role played by the Bose field algebra in the Hilbert space of two-dimensional gauge and anomalous gauge field theories with massive fermions. In QED with quartic self-interaction among massive fermions, the use of an auxiliary vector field introduces a redundant Bose field algebra that should not be considered as an element of the intrinsic algebraic structure defining the model. In anomalous chiral QED with massive fermions, the effect of the chiral anomaly leads to the appearance of a spurious Bose field combination in the mass operator. This phase factor carries no fermion selection rule, and the expected absence of the theta-vacuum in the anomalous model is displayed from the operator solution. Even in the anomalous model with massive Fermi fields, the introduction of the Wess-Zumino field replicates the theory, changing neither its algebraic content nor its physical content. (C) 2002 Elsevier B.V. (USA).
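For orientation, the fermion-boson mapping in question is two-dimensional bosonization. In the abelian case the standard dictionary reads as follows (conventions vary by author; this summary is an addition, not part of the abstract):

$$\bar\psi\, i\gamma^\mu \partial_\mu \psi \;\leftrightarrow\; \tfrac{1}{2}\,(\partial_\mu\phi)^2, \qquad \bar\psi \gamma^\mu \psi \;\leftrightarrow\; \tfrac{1}{\sqrt{\pi}}\,\epsilon^{\mu\nu}\partial_\nu\phi, \qquad m\,\bar\psi\psi \;\leftrightarrow\; -\,c\,m\,\cos\!\big(2\sqrt{\pi}\,\phi\big),$$

with c a scheme-dependent normal-ordering constant. The cosine mass term is what makes the massive models discussed above nontrivial compared with their exactly soluble massless counterparts.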
Abstract:
This paper shows that many structural remedies in a sample of European merger cases result in market structures which would probably not be cleared by the Competition Authority (CA) if they were the result of a merger (rather than a remedy). This is explained by the fact that the CA's objective in imposing a remedy is to restore pre-merger competition, but markets are often highly concentrated even before the merger. If so, the CA must often choose between clearing an ‘uncompetitive’ merger and applying an unsatisfactory remedy. Here, the CA appears reluctant to intervene against coordinated effects if doing so enhances a leader's dominance.
Abstract:
Previous empirical assessments of the effectiveness of structural merger remedies have focused mainly on the subsequent viability of the divested assets. Here, we take a different approach by examining how competitive the market structures which result from the divestments are. We employ a tightly specified sample of markets in which the European Commission (EC) has imposed structural merger remedies. It has two key features: (i) it includes all mergers in which the EC appears to have seriously considered, simultaneously, the possibility of collective dominance as well as single dominance; (ii) in a previous paper, for the same sample, we estimated a model which proved very successful in predicting the Commission's merger decisions in terms of the market shares of the leading firms. The former allows us to explore the choices between alternative theories of harm, and the latter provides a yardstick for evaluating whether markets are competitive or not – at least in the eyes of the Commission. Running the hypothetical post-remedy market shares through the model, we can predict whether the EC would have judged the markets concerned to be competitive, had they been the result of a merger rather than a remedy. We find that a significant proportion were not competitive in this sense. One explanation is that the EC has simply been inconsistent – using different criteria for assessing remedies from those for assessing the mergers in the first place. However, a more sympathetic – and in our opinion, more likely – explanation is that the Commission is severely constrained by the pre-merger market structures in many markets. We show that, typically, divestment remedies return the market to the same structure as existed before the proposed merger. Indeed, one can argue that any competition authority should never do more than this. Crucially, however, we find that this pre-merger structure is often itself not competitive. We also observe an analogous picture in a number of markets where the Commission chose not to intervene: while the post-merger structure was not competitive, nor was the pre-merger structure. In those cases, however, the Commission preferred the former to the latter. In effect, in both scenarios, the EC was faced with a no-win decision. This immediately raises a follow-up question: why did the EC intervene for some, but not for others – given that in all these cases, some sort of anticompetitive structure would prevail? We show that, in this sample at least, the answer is often tied to the prospective rank of the merged firm post-merger. In particular, in those markets where the merged firm would not be the largest post-merger, we find a reluctance to intervene even where the resulting market structure is likely to be conducive to collective dominance. We explain this by a willingness to tolerate an outcome which may be conducive to tacit collusion if the alternative is the possibility of an enhanced position of single dominance by the market leader. Finally, because the sample is confined to cases brought under the ‘old’ EC Merger Regulation, we go on to consider how, if at all, these conclusions require qualification following the 2004 revisions, which, amongst other things, made interventions for non-coordinated behaviour possible without requiring that the merged firm be a dominant market leader.
Our main conclusions here are that the Commission appears to have been less inclined to intervene in general, but particularly for collective dominance (or ‘coordinated effects’, as it is now known in Europe as well as the US). Moreover, perhaps contrary to expectation, where the merged firm is #2, the Commission has to date rarely made a unilateral effects decision and has never made a coordinated effects decision.
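A minimal sketch of the exercise described above, assuming a fitted binary-choice model of the Commission's decisions in terms of the leading firms' market shares. The functional form, coefficients and shares below are illustrative placeholders, not the estimates from the paper:

```python
import math

# Hypothetical coefficients of a logit model predicting an adverse
# competitive assessment from the two largest market shares (fractions).
BETA_0, BETA_1, BETA_2 = -8.0, 12.0, 6.0  # placeholders, not the paper's estimates

def p_anticompetitive(s1: float, s2: float) -> float:
    """Probability that a market with leading shares s1 >= s2 would be
    judged anticompetitive, under the assumed logit specification."""
    z = BETA_0 + BETA_1 * s1 + BETA_2 * s2
    return 1.0 / (1.0 + math.exp(-z))

# Run a hypothetical post-remedy structure through the model: a high
# probability means the remedy has produced a market that would itself
# not have been cleared had it been the result of a merger.
post_remedy = p_anticompetitive(s1=0.40, s2=0.30)
print(f"P(anticompetitive | post-remedy shares) = {post_remedy:.2f}")
```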
Abstract:
The recent wave of upheavals and revolts in Northern Africa and the Middle East goes back to an old question often raised by theories of collective action: does repression act as a negative or positive incentive for further mobilization? Through a review of the vast literature devoted to this question, this article aims to move beyond theoretical and methodological dead-ends. The article turns to non-Western settings in order to better understand, via a macro-sociological and dynamic approach, the causal links between mobilization and repression. It argues for a meso- and micro-level approach to this issue: one that puts analytical emphasis both on protest organizations and on individual activists' careers.
Abstract:
Intuitively, we think of perception as providing us with direct cognitive access to physical objects and their properties. But this common sense picture of perception becomes problematic when we notice that perception is not always veridical. In fact, reflection on illusions and hallucinations seems to indicate that perception cannot be what it intuitively appears to be. This clash between intuition and reflection is what generates the puzzle of perception. The task and enterprise of unravelling this puzzle took, and still takes, centre stage in the philosophy of perception. The goal of my dissertation is to make a contribution to this enterprise by formulating and defending a new structural approach to perception and perceptual consciousness. The argument for my structural approach is developed in several steps. Firstly, I develop an empirically inspired causal argument against naïve and direct realist conceptions of perceptual consciousness. Basically, the argument says that perception and hallucination can have the same proximal causes and must thus belong to the same mental kind. I emphasise that this insight gives us good reasons to abandon what we are instinctively driven to believe - namely that perception is directly about the outside physical world. The causal argument essentially highlights that the information that the subject acquires in perceiving a worldly object is always indirect. To put it another way, the argument shows that what we, as perceivers, are immediately aware of, is not an aspect of the world but an aspect of our sensory response to it. A view like this is traditionally known as a Representative Theory of Perception. As a second step, emphasis is put on the task of defending and promoting a new structural version of the Representative Theory of Perception; one that is immune to some major objections that have been standardly levelled at other Representative Theories of Perception. As part of this defence and promotion, I argue that it is only the structural features of perceptual experiences that are fit to represent the empirical world. This line of thought is backed up by a detailed study of the intriguing phenomenon of synaesthesia. More precisely, I concentrate on empirical cases of synaesthetic experiences and argue that some of them provide support for a structural approach to perception. The general picture that emerges in this dissertation is a new perspective on perceptual consciousness that is structural through and through.
Studies on the structural, electrical and magnetic properties of composites based on spinel ferrites
Abstract:
This thesis deals mainly with the preparation and study of magnetic composites based on spinel ferrites, prepared both chemically and mechanically. Rubber ferrite composites (RFCs) are chosen because of their mouldability and flexibility and the ease with which their dielectric and magnetic properties can be manipulated to make them into useful devices. Natural rubber (NR) is chosen as the matrix because of its local availability and possible value addition; moreover, NR represents a typical unsaturated nonpolar matrix. The work can be thought of as two parts. Part 1 concentrates on the preparation and characterization of nanocomposites based on γ-Fe₂O₃. Part 2 deals with the preparation and characterization of RFCs containing nickel zinc ferrite. In the present study, magnetic nanocomposites have been prepared by an ion-exchange method and the preparation conditions have been optimized. The in situ incorporation of the magnetic component is carried out chemically; this method is selected as it is the easiest and simplest route to a nanocomposite. The nanocomposite samples thus prepared were studied using VSM, Mössbauer spectroscopy, iron content estimation and ESR spectroscopy. For the preparation of the RFCs, the filler material, namely nickel zinc ferrite with the general formula Ni₁₋ₓZnₓFe₂O₄, where x varies from 0 to 1 in steps of 0.2, was prepared by conventional ceramic techniques. The Ni₁₋ₓZnₓFe₂O₄ system is chosen because of its excellent high-frequency characteristics. After characterization, the ferrites are incorporated into the natural rubber matrix by a mechanical method, according to a specific recipe, for various loadings of magnetic filler and for all compositions. The cure characteristics, magnetic properties and dielectric properties of these composites are evaluated. The ac electrical conductivity of both the ceramic nickel zinc ferrites and the rubber ferrite composites is also calculated using a simple relation, and the results are correlated.
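The "simple relation" for the ac conductivity is presumably the standard conversion from measured dielectric data; it is quoted here for orientation (an assumption, since the abstract does not spell it out):

$$\sigma_{ac} = 2\pi f\,\varepsilon_0\,\varepsilon_r \tan\delta,$$

where f is the measurement frequency, ε₀ the permittivity of free space, ε_r the relative permittivity and tan δ the measured loss tangent.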
Abstract:
Aqua complex ions of metals must have existed since the appearance of water on the earth, and the subsequent appearance of life depended on, and may even have resulted from, the interaction of metal ions with organic molecules. Studies on the coordinating ability of metal ions with other molecules and anions culminated in the theories of Alfred Werner. Thereafter, progress in the study of metal complex chemistry was rapid. Many factors, like the utility and economic importance of metal chemistry, the intrinsic interest in many of the compounds and the intellectual challenge of the structural problems to be solved, have contributed to this rapid progress. X-ray diffraction studies further accelerated the progress. The work cited in this thesis was carried out by the author in the Department of Applied Chemistry during 2001-2004. The primary aim of these investigations was to synthesise and characterize some transition metal complexes of 2-benzoylpyridine N(4)-substituted thiosemicarbazones and to study the antimicrobial activities of the ligands and their metal complexes. The work is divided into eight chapters.
Abstract:
Two studies investigated the degree to which the relationship between rapid automatized naming (RAN) performance and reading development is driven by shared phonological processes. Study 1 assessed RAN, phonological awareness, and reading performance in 1010 children aged 7 to 10 years. Results showed that RAN deficits occurred in the absence of phonological awareness deficits. These were accompanied by modest reading delays. In structural equation modeling, solutions where RAN was subsumed within a phonological processing factor did not provide a good fit to the data, suggesting that processes outside phonology may drive RAN performance and its association with reading. Study 2 investigated Kail's (1991) proposal that speed of processing underlies this relationship. Children with single RAN deficits showed slower speed of processing than did closely matched controls performing normally on RAN. However, regression analysis revealed that RAN made a unique contribution to reading even after accounting for processing speed. Theoretical implications are discussed.
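A minimal sketch of the Study 2 regression logic, i.e. testing whether RAN adds predictive power for reading over and above processing speed. The data file and column names are hypothetical; this is not the authors' code:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset with one row per child and standardized scores.
df = pd.read_csv("ran_study.csv")  # assumed columns: reading, speed, ran

# Step 1: reading predicted from processing speed alone.
base = smf.ols("reading ~ speed", data=df).fit()
# Step 2: add RAN; a unique RAN contribution appears as a significant
# coefficient on 'ran' and an increment in R-squared.
full = smf.ols("reading ~ speed + ran", data=df).fit()

print(f"R2: {base.rsquared:.3f} -> {full.rsquared:.3f}")
print(full.params["ran"], full.pvalues["ran"])
```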
Abstract:
The construction of a Gothic vault implied the solution of several technical challenges. The literature on Gothic vault construction is quite large and its growth continues steadily. The main challenge of any structure is that, during and after construction, it must be "safe", that is, it must not collapse. Indeed, it must be amply safe, able to support different loads for long periods of time. Masonry architecture has shown its structural safety for centuries or millennia. The Pantheon of Rome stands today after almost 2,000 years without having needed any structural reinforcement (of course, the survival of any building implies continuous maintenance). Hagia Sophia in Istanbul, finished in the 6th century AD, has withstood not only the dead loads but also many severe earthquakes. Finally, the Gothic cathedrals, with their appearance of weakness, are more than half a millennium old. The question arises of what the source of this amazing strength is and how the illiterate master masons were able to design such daring and safe structures. This question is usually evaded in manuals of Gothic architecture. This is quite surprising, the structure being a fundamental part of Gothic buildings. The present article aims to give such an explanation, which has been studied in detail elsewhere. In the first part, the Gothic design methods will be discussed. In the second part, the validity of these methods will be verified within the frame of the modern theory of masonry structures. References have been reduced to a minimum to make the text simpler and more direct.
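For context, the modern theory of masonry structures invoked here rests on Heyman's three assumptions (no tensile strength, effectively unlimited compressive strength, no sliding) and on the safe, or lower-bound, theorem: if any line of thrust in equilibrium with the loads can be drawn lying wholly within the masonry, the structure will not collapse. Safety is then measured geometrically, e.g.

$$\text{geometrical factor of safety} = \frac{t}{t_{\min}},$$

where t is the actual thickness of the arch or vault and t_min the minimum thickness that can still contain a thrust line. This summary is an addition for orientation; it is the framework against which the article verifies the Gothic design rules.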
Abstract:
The inner oval dome of the Basílica de la Virgen de los Desamparados, built in 1701, is one of the most slender masonry vaults ever built. It is a tile dome with a total thickness of 80 mm and a main span of 18.50 m. It was built without centering, with great ingenuity and economy of means, thirty-three years after the completion of the building in 1667. The inner dome is in contact with the external dome only in the lower part, through the projecting ribs of the intrados and the lunettes of the windows, and, in the upper part, through 126 inclined iron bars. This unique construction was revealed in the 1990s in the studies prior to the restoration of the Basílica, and has given rise to different theories about the mode of construction and the structural behaviour and safety of the dome. The present contribution aims to provide a plausible hypothesis about the mode of construction and to explain the safety of the inner dome, which has stood, without need of repairs or reinforcement, for 300 years.
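To put the claim of slenderness in numbers (arithmetic added here, using only the figures given above):

$$\frac{t}{\text{span}} = \frac{80\ \text{mm}}{18\,500\ \text{mm}} \approx \frac{1}{230},$$

a remarkably small thickness-to-span ratio for an unreinforced masonry shell.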
Abstract:
Plate-bandes are straight masonry arches (they are also called flat arches or lintel arches). Ideally, the surfaces of their extrados and intrados are plane and horizontal. The stones or bricks have radial joints converging, usually, in one centre. The voussoirs have the form of wedges, and in French they are called «claveaux». A plate-bande is, in fact, a lintel made of several stones, and the proportions of lintels and plate-bandes are similar. The proportions of plate-bandes, that is the relationship between the thickness t and the span s (t/s), vary typically between 1/4 and 1/3 in thick plate-bandes, and fall below 1/20 in the most slender ones. A ratio of circa 1/8 was usual in the 18th century and follows a simple geometrical rule: the centre forms an equilateral triangle with the intrados, and the plate-bande should contain an arc of a circle. The joints are usually plane, but in some cases present a «rebated» or «stepped» form. Plate-bandes exert an inclined thrust, as does any masonry arch. This thrust is usually very high, and it requires either massive buttresses or building the plate-bande in the middle of a thick wall. Master builders and architects have tried since antiquity to calculate the abutment necessary for any arch. A modern architect or engineer will measure the arch thrust in units of force, kN or tons. Traditionally, the thrust has been measured as the size of the buttresses needed to resist it safely. Old structural rules, then, addressed the design problem by establishing a relationship between the span and the depth of the buttress. These were empirical rules, particular to every type of arch or structure in every epoch. Thus, the typical Gothic buttress is 1/4 of the vault span, but a Renaissance or baroque barrel vault will need more than 1/3 of the span. A plate-bande would require more than one half of the span; this is precisely the rule cited by the French engineer Gautier, who tried unsuccessfully to justify it on static grounds. Plate-bandes were used, typically, to form the lintels of windows or doors (spans of 1-2 m); in antiquity they were also used, though rarely, at the gates of city walls or in niches (ca. 2 m, reaching 5.2 m). Plate-bandes may show particular problems: it is not unusual to observe some sliding of the voussoirs, particularly in thick plate-bandes. The stepped joints in Fig. 1, left, were used to avoid this problem. There are other «hidden» methods, like iron cramps or the use of stone wedges, etc.; in seismic zones these devices were usual. Another problem relates to deformation: a slight yielding of the abutments, or even the compression of the mortar joints, may lead to some cracking and to the descent of the central keystone. Even a tiny descent will convert the originally straight line of the intrados into a broken line with a visible «kink» or angle in the middle. Of course, both problems should be avoided. Finally, the wedge form of the voussoirs leads to acute angles in the stones, and this can produce partial fractures; this occurs usually at the inferior border of the springers at the abutments. It follows that to build a successful plate-bande is not an easy matter. Also, the structural study of plate-bandes is far from simple: mechanics and geometry are related in a particular way. In the present paper we concentrate on the structural aspects and their constructive consequences, with a historical approach, outlining the development of the structural analysis of plate-bandes from ca. 1700 until today.
This brief history has more than purely academic interest. Different approaches and theories pointed to particular problems and, though the solutions given may have been incorrect, the questions posed were often pertinent. The paper ends with the application of the modern limit analysis of masonry structures, developed mainly by Professor Heyman over the last fifty years. The work also aims to give some clues for the present-day architect and engineer involved in the analysis or restoration of masonry buildings.
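As a rough illustration of why the thrust of a plate-bande is so high (a back-of-envelope estimate added here, not taken from the paper): model the thrust line inside the plate-bande as a parabola of rise a, with a at most the thickness t, carrying a uniform load w per unit length over the span s. Equilibrium of the parabolic thrust line gives

$$H = \frac{w s^2}{8a}.$$

For a slender plate-bande with t = s/8, the horizontal thrust is at least H = ws, i.e. the whole supported weight pushed sideways at each abutment, which is consistent with the traditional demand for buttresses of more than half the span.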