Abstract:
This research primarily represents a contribution to the lobbying-regulation research arena. It introduces an index that, for the first time, attempts to measure the direct compliance costs of lobbying regulation. The Cost Indicator Index (CII) offers a new platform for the qualitative and quantitative assessment of adopted lobbying laws and legislative proposals, in both the comparative and the sui generis dimension. The CII is not just the only new tool introduced in the last decade; it is also the only tool available for comparative assessment of the costs of lobbying regulations. Besides this quantitative contribution, the research introduces an additional theoretical framework for complementary qualitative analysis of lobbying laws. The Ninefold theory allows a more structured assessment and classification of lobbying regulations, by indication of both benefits and costs. Lastly, this research introduces the Cost-Benefit Labels (CBL), which may improve ex-ante impact assessment of lobbying regulation, primarily in the sui generis perspective. In its final part, the research focuses on four South East European countries (Slovenia, Serbia, Montenegro and Macedonia), brings them into the discussion for the first time, and calculates their CPI and CII scores. Special attention is given to Serbia, whose proposed Law on Lobbying is analysed extensively in qualitative and quantitative terms, taking into consideration the country's specific political and economic circumstances. Although the obtained results are of an indicative nature, the CII will probably find its place within the academic and policymaking arena, and will hopefully contribute to a better understanding of lobbying regulations worldwide.
Abstract:
This research was designed to answer the question of which direction the restructuring of financial regulators should take: consolidation or fragmentation. It began by examining the need for financial regulation and its related costs. It then described the types of regulatory structures that exist in the world, surveying the regulatory structures in 15 jurisdictions, comparing them, and discussing their strengths and weaknesses. The possible regulatory structures were analyzed using three methodological tools: game theory, institutional design, and network effects. The incentives for regulatory action were examined in Chapter Four using game-theory concepts. This chapter predicted how two regulators with overlapping supervisory mandates will behave in two different states of the world (one where they stand to benefit from regulating and one where they stand to lose). The insights derived from the games described in this chapter were then used to analyze the different supervisory models that exist in the world. The problem of information flow was discussed in Chapter Five using tools from institutional design. The idea is based on the need for the right kind of information to reach the decision maker in the shortest time possible in order to predict, mitigate, or stop a financial crisis. Network effects and congestion in the context of financial regulation were discussed in Chapter Six, which applied the general literature on network effects in an attempt to conclude whether consolidating financial regulatory standards on a global level might also yield other positive network effects. Returning to the main research question, this research concluded that the fragmented model should in general be preferred to the consolidated model, as it allows for greater diversity and information flow. However, in cases in which close cooperation between two authorities is essential, the consolidated model should be used.
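To illustrate the flavor of the Chapter Four analysis, here is a minimal sketch, with invented payoff numbers rather than the model actually used in the research, of a one-shot game between two regulators with overlapping mandates in the two states of the world described above:

```python
# Illustrative sketch only: a 2x2 one-shot game between two regulators
# with overlapping mandates. Payoff values are invented for illustration.
import itertools

def pure_nash(payoffs):
    """Return pure-strategy Nash equilibria of a 2-player game.
    payoffs[(a1, a2)] = (payoff to regulator 1, payoff to regulator 2)."""
    actions = ["act", "abstain"]
    equilibria = []
    for a1, a2 in itertools.product(actions, actions):
        u1, u2 = payoffs[(a1, a2)]
        # No unilateral deviation may improve either regulator's payoff.
        if all(payoffs[(d, a2)][0] <= u1 for d in actions) and \
           all(payoffs[(a1, d)][1] <= u2 for d in actions):
            equilibria.append((a1, a2))
    return equilibria

# State 1: both stand to benefit from regulating (credit-claiming).
gain_state = {("act", "act"): (2, 2), ("act", "abstain"): (3, 0),
              ("abstain", "act"): (0, 3), ("abstain", "abstain"): (0, 0)}
# State 2: both stand to lose from regulating (blame avoidance).
loss_state = {("act", "act"): (-2, -2), ("act", "abstain"): (-3, 0),
              ("abstain", "act"): (0, -3), ("abstain", "abstain"): (0, 0)}

print(pure_nash(gain_state))   # [('act', 'act')]
print(pure_nash(loss_state))   # [('abstain', 'abstain')]
```

In the gain state the unique pure-strategy equilibrium has both regulators acting (overlapping intervention); in the loss state both abstain (mutual blame avoidance), which is the kind of behavioral prediction the chapter uses to compare supervisory models.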
Abstract:
This thesis investigates a new approach to document analysis based on the idea of structural patterns in XML vocabularies. My work is founded on the belief that authors naturally converge to a reasonable use of markup languages and that extreme, yet valid, instances are rare and limited. Actual documents, therefore, may be used to derive classes of elements (patterns) that persist across documents and distill the conceptualization of the documents and their components, and may provide a basis for automatic tools and services that rely on no background information (such as schemas) at all. The central part of my work consists of introducing from the ground up a formal theory of eight structural patterns (with three sub-patterns) that are able to express the logical organization of any XML document, and of verifying their identifiability in a number of different vocabularies. This model is characterized by and validated against three main dimensions: terseness (i.e., the ability to represent the structure of a document with a small number of objects and composition rules), coverage (i.e., the ability to capture any possible situation in any document) and expressiveness (i.e., the ability to make explicit the semantics of structures, relations and dependencies). An algorithm for the automatic recognition of structural patterns is then presented, together with an evaluation of the results of a test performed on a set of more than 1100 documents from eight very different vocabularies. This language-independent analysis confirms the ability of patterns to capture and summarize the guidelines used by the authors in their everyday practice. Finally, I present some systems that work directly on the pattern-based representation of documents. The ability of these tools to cover very different situations and contexts confirms the effectiveness of the model.
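The recognition algorithm itself is not given in the abstract; the following sketch illustrates the general idea of deriving element classes from document instances alone, using a deliberately simplified four-way classification (invented here for illustration, not the thesis's eight-pattern set):

```python
# Illustrative sketch: classifying XML elements by the shape of their
# content, with no schema. The four categories are a simplification
# invented for illustration; the thesis defines eight richer patterns.
import xml.etree.ElementTree as ET
from collections import Counter

def classify(elem):
    has_children = len(elem) > 0
    has_text = bool((elem.text or "").strip()) or any(
        (c.tail or "").strip() for c in elem)
    if has_children and has_text:
        return "mixed"        # text interleaved with elements
    if has_children:
        return "container"    # elements only
    if has_text:
        return "atomic"       # text only
    return "marker"           # empty element

doc = ET.fromstring(
    "<article><title>On Patterns</title>"
    "<p>Markup <em>converges</em> in practice.</p><hr/></article>")
print(Counter(classify(e) for e in doc.iter()))
# -> container: 1 (article), atomic: 2 (title, em), mixed: 1 (p), marker: 1 (hr)
```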
Abstract:
In the early 20th century, Gouy, Chapman, and Stern developed a theory to describe the capacitance and the spatial ion distribution of dilute electrolytes near an electrode. After a century of research, considerable progress has been made in understanding the electrolyte/electrode interface. However, its molecular-scale structure and its variation with applied potential are still under debate. In particular for room-temperature ionic liquids, a new class of solventless electrolytes, the classical theories of the electrical double layer are not applicable. Recently, molecular dynamics simulations and phenomenological theories have attempted to explain the capacitance of the ionic liquid/electrode interface in terms of the molecular-scale structure and dynamics of the ionic liquid near the electrode. However, experimental evidence is very limited.

In the presented study, the ion distribution of an ionic liquid near an electrode and its response to applied potentials were examined with sub-molecular resolution. For this purpose, a new sample chamber was constructed, allowing in situ high-energy X-ray reflectivity experiments under potential control, as well as impedance spectroscopy measurements. The combination of structural information and electrochemical data provided a comprehensive picture of the electric double layer in ionic liquids. Oscillatory charge density profiles were found, consisting of alternating anion- and cation-enriched layers at both cathodic and anodic potentials. This structure was shown to arise from the same ion-ion correlations that dominate the liquid bulk structure and that were observed as a distinct X-ray diffraction peak. Existing physically motivated models were therefore refined and verified by comparison with independent measurements.

The relaxation dynamics of the interfacial structure upon potential variation were studied by time-resolved X-ray reflectivity experiments with sub-millisecond resolution. The observed relaxation times during charging/discharging are consistent with the impedance spectroscopy data, revealing three processes with vastly different characteristic time scales. Initially, ion transport normal to the interface happens on a millisecond scale. Another, 100-millisecond-scale process is associated with the molecular reorientation of electrode-adsorbed cations. Further, a minute-scale relaxation was observed, which is tentatively assigned to lateral ordering within the first layer.
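Oscillatory interfacial profiles of this kind are commonly summarized in the ionic-liquid literature by an exponentially damped oscillation; a generic phenomenological form (not necessarily the exact parametrization refined in this work) is

$$\rho(z) \;=\; \rho_{\mathrm{bulk}} + A\, e^{-z/\xi} \sin\!\left(\frac{2\pi z}{d} + \varphi\right),$$

where the layering period $d$ is set by the ion-ion correlations (matching the bulk diffraction peak at $q \approx 2\pi/d$), $\xi$ is the decay length of the interfacial ordering, and $\varphi$ a phase offset.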
Abstract:
Molecular dynamics simulations of silicate and borate glasses and melts: structure, diffusion dynamics and vibrational properties. In this work, computer simulations of the model glass formers SiO2 and B2O3 are presented, using the techniques of classical molecular dynamics (MD) simulations and quantum mechanical calculations based on density functional theory (DFT). The latter limits the system size to about 100-200 atoms. SiO2 and B2O3 are the two most important network formers for industrial applications of oxide glasses. Glass samples are generated by means of a quench from the melt with classical MD simulations and a subsequent structural relaxation with DFT forces. In addition, full ab initio quenches are carried out with a significantly faster cooling rate. In all cases, the structural properties are in good agreement with experimental results from neutron and X-ray scattering. A special focus is on the study of vibrational properties, as they give access to low-temperature thermodynamic properties. The vibrational spectra are calculated by the so-called "frozen phonon" method. In all cases, the DFT curves show acceptable agreement with experimental results from inelastic neutron scattering. For the model glass former B2O3, a new classical interaction potential is parametrized, based on the liquid trajectory of an ab initio MD simulation at 2300 K; for this purpose, a structural fitting routine is used. The inclusion of 3-body angular interactions leads to significantly improved agreement between the liquid properties of the classical MD and ab initio MD simulations. However, the generated glass structures in all cases show a significantly lower fraction of 3-membered planar boroxol rings than predicted by experimental results (f = 60%-80%). The largest boroxol ring fraction, f = 15±5%, is observed in the full ab initio quenches from 2300 K. For SiO2, the glass structures after the quantum mechanical relaxation are the basis for calculations of the linear thermal expansion coefficient αL(T), employing the quasi-harmonic approximation. The striking observation is a change of sign of αL(T), with a temperature range of negative αL(T) at low temperatures, in good agreement with experimental results.
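In the quasi-harmonic approximation, the thermal expansion follows from the volume dependence of the phonon frequencies; a standard Grüneisen-type expression (the abstract does not state the exact form used in the thesis) is

$$\alpha_L(T) \;=\; \frac{1}{3 B_0 V} \sum_i \gamma_i\, c_{V,i}(T), \qquad \gamma_i \;=\; -\frac{\partial \ln \omega_i}{\partial \ln V},$$

where $B_0$ is the bulk modulus, $c_{V,i}(T)$ is the heat-capacity contribution of mode $i$, and $\gamma_i$ is its Grüneisen parameter; low-frequency modes with negative $\gamma_i$ are what produce a negative $\alpha_L(T)$ at low temperatures.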
Abstract:
Coarse graining is a popular technique used in physics to speed up the computer simulation of molecular fluids. An essential part of this technique is a method that solves the inverse problem of determining the interaction potential, or its parameters, from given structural data. Due to discrepancies between model and reality, the potential is not unique, so the stability of such a method and its convergence to a meaningful solution are issues.

In this work, we investigate empirically whether coarse graining can be improved by applying the theory of inverse problems from applied mathematics. In particular, we use singular value analysis to reveal the weak interaction parameters, which have a negligible influence on the structure of the fluid and which cause the non-uniqueness of the solution. Further, we apply a regularizing Levenberg-Marquardt method, which is stable against the mentioned discrepancies. We then compare it to the existing physical methods, the Iterative Boltzmann Inversion and the Inverse Monte Carlo method, which are fast and well adapted to the problem but sometimes have convergence problems.

From an analysis of the Iterative Boltzmann Inversion, we elaborate a meaningful approximation of the structure and use it to derive a modification of the Levenberg-Marquardt method. We employ the latter to reconstruct the interaction parameters from experimental data for liquid argon and nitrogen. We show that the modified method is stable, convergent and fast. Furthermore, the singular value analysis of the structure and its approximation makes it possible to determine the crucial interaction parameters, that is, to simplify the modeling of interactions. Our results therefore build a rigorous bridge between the inverse problem from physics and the powerful solution tools from mathematics.
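As an illustration of the two mathematical ingredients named above, the following sketch applies a singular value analysis to flag weak parameter directions and then takes one damped Levenberg-Marquardt step; the Jacobian and residual are synthetic stand-ins, not the structure-parameter sensitivities of a real fluid:

```python
# Illustrative sketch: SVD-based detection of weak parameters and one
# regularized Levenberg-Marquardt step. J and residual are synthetic
# stand-ins for the structure/parameter Jacobian of the real problem.
import numpy as np

rng = np.random.default_rng(0)
J = rng.normal(size=(50, 4))
J[:, 3] *= 1e-6              # parameter 4 barely affects the structure
residual = rng.normal(size=50)

# Singular value analysis: small singular values flag directions in
# parameter space with negligible influence (sources of non-uniqueness).
U, s, Vt = np.linalg.svd(J, full_matrices=False)
weak = s < 1e-3 * s.max()
print("singular values:", s)
print("weak parameter directions:", Vt[weak])

# One Levenberg-Marquardt update: the damping term lam stabilizes the
# step against the ill-conditioned (weak) directions.
lam = 1e-2
step = np.linalg.solve(J.T @ J + lam * np.eye(4), -J.T @ residual)
print("LM step:", step)
```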
Abstract:
While empirical evidence continues to show that low socio-economic position is associated with poorer chances of being in good health, our understanding of why this is so remains less than clear. In this paper we examine the theoretical foundations for a structure-agency approach to the reduction of social inequalities in health. We use Max Weber's work on lifestyles to explain the dualism between life chances (structure) and choice-based life conduct (agency). To explain how the unequal distribution of material and non-material resources leads to the reproduction of unequal life chances and limitations of choice in contemporary societies, we apply Pierre Bourdieu's theory of capital interaction and habitus. We find, however, that Bourdieu's habitus concept is insufficient with regard to the role of agency in structural change and therefore does not readily provide for a theoretically supported move from sociological explanation to public health action. We therefore suggest Amartya Sen's capability approach as a useful link between capital interaction theory and action to reduce social inequalities in health. This link allows for the consideration of structural conditions as well as an active role for individuals as agents in reducing these inequalities. We suggest that people's capabilities to be active for their health be considered a key concept in public health practice to reduce health inequalities. Examples from an ongoing health promotion project in Germany link our theoretical perspective to practical experience.
Abstract:
A series of dicyanobiphenyl-cyclophanes 1-6 with various π-backbone conformations and characteristic n-type semiconductor properties is presented. Their synthesis, optical, structural, electrochemical, spectroelectrochemical, and packing properties are investigated. The X-ray crystal structures of all n-type rods allow the systematic correlation of structural features with physical properties. In addition, the results are supported by quantum mechanical calculations based on density functional theory. A two-step reduction process is observed for all n-type rods, in which the first step is reversible. The potential gap between the reduction processes depends linearly on cos²φ, where φ is the torsion angle between the π-systems. Similarly, optical absorption spectroscopy shows that the vertical excitation energy of the conjugation band correlates with cos²φ. These correlations demonstrate that the fixed intramolecular torsion angle φ is the dominant factor determining the extent of electron delocalization in these model compounds, and that the angle φ measured in the solid-state structure is a good proxy for the molecular conformation in solution. Spectroelectrochemical investigations demonstrate that conformational rigidity is maintained even in the radical anion form. In particular, the absorption bands corresponding to the SOMO-LUMO+i transitions are shifted bathochromically, whereas the absorption bands corresponding to the HOMO-SOMO transition are shifted hypsochromically with increasing torsion angle φ.
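The two linear correlations noted above can be written compactly; in generic form (with empirical fit coefficients $a$, $b$, $a'$, $b'$ that are not quoted in this abstract):

$$\Delta E_{\mathrm{red}} \;=\; a + b\,\cos^2\varphi, \qquad E_{\mathrm{exc}} \;=\; a' + b'\,\cos^2\varphi,$$

consistent with the picture that conjugation across the biphenyl linkage scales with the overlap of the two π-systems, which varies as $\cos^2\varphi$.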
Abstract:
Introduction: Advances in biotechnology have shed light on many biological processes. In biological networks, nodes are used to represent the function of individual entities within a system and have historically been studied in isolation. Network structure adds edges that enable communication between nodes. An emerging field combines node function and network structure to yield network function. One of the most complex networks known in biology is the neural network within the brain. Modeling neural function will require an understanding of networks, dynamics, and neurophysiology. In this work, modeling techniques are developed to work at this complex intersection. Methods: Spatial game theory was developed by Nowak in the context of modeling evolutionary dynamics, the way in which species evolve over time. Spatial game theory offers a two-dimensional view of analyzing the state of neighbors and updating based on the surroundings. Our work builds upon this foundation by applying evolutionary game theory networks to neural networks. The novel concept is that neurons may adopt a particular strategy that allows the propagation of information; the strategy may therefore act as the mechanism for gating. Furthermore, the strategy of a neuron, as in a real brain, is impacted by the strategy of its neighbors. The techniques of spatial game theory already established by Nowak are repeated to explain two basic cases and validate the implementation of the code. Two novel modifications that build on this network and may reflect neural networks are introduced in Chapters 3 and 4. Results: The introduction of the two novel modifications, mutation and rewiring, in large parametric studies resulted in dynamics with an intermediate number of nodes firing at any given time. Further, even small mutation rates result in different dynamics, more representative of the hypothesized ideal state. Conclusions: In both modifications to Nowak's model, the results demonstrate that the network does not become locked into a particular global state of passing all information or blocking all information. It is hypothesized that normal brain function occurs within this intermediate range and that a number of diseases are the result of moving outside of this range.
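A compact sketch of a Nowak-style spatial game with the first of the two modifications (mutation) is given below; the lattice size, temptation payoff, and mutation rate are illustrative choices, not the parameter values studied in the thesis:

```python
# Illustrative sketch of a Nowak-style spatial prisoner's dilemma with
# random strategy mutation. Strategy 1 = cooperate/"pass information",
# 0 = defect/"block". Parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
N, b, mu, steps = 50, 1.85, 0.01, 200   # lattice size, temptation, mutation rate
S = rng.integers(0, 2, size=(N, N))     # random initial strategies

def payoffs(S):
    """Each site plays PD with its 4 nearest neighbours (periodic lattice):
    cooperators earn 1 per cooperating neighbour, defectors earn b."""
    coop_nbrs = sum(np.roll(S, sh, ax) for sh, ax in
                    [(1, 0), (-1, 0), (1, 1), (-1, 1)])
    return np.where(S == 1, coop_nbrs, b * coop_nbrs)

for _ in range(steps):
    P = payoffs(S)
    # Imitation update: each site adopts the strategy of its best-scoring
    # neighbour if that neighbour outperforms the site itself.
    bestS, bestP = S.copy(), P.copy()
    for sh, ax in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
        nP, nS = np.roll(P, sh, ax), np.roll(S, sh, ax)
        better = nP > bestP
        bestS = np.where(better, nS, bestS)
        bestP = np.where(better, nP, bestP)
    S = bestS
    # Mutation: each site flips strategy with probability mu, which keeps
    # the lattice from locking into an all-pass or all-block state.
    flip = rng.random((N, N)) < mu
    S = np.where(flip, 1 - S, S)

print("fraction of 'pass' nodes:", S.mean())
```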
Abstract:
In the business literature, conflicts among workers, shareholders, and management have been studied mostly within the frame of stakeholder theory. Stakeholder theory recognizes this issue as an agency problem and tries to solve it by establishing a contractual relationship between the agent and the principals. However, as Marcoux pointed out, the appropriateness of the contract as a medium for reducing the agency problem should be questioned. As an alternative, the cooperative model minimizes agency costs by integrating the roles of workers, owners, and management. Mondragon Corporation is a successful example of the cooperative model, having grown into the sixth-largest corporation in Spain. However, the cooperative model has long been ignored in discussions of corporate governance, mainly because its success is extremely difficult to duplicate in practice. This thesis hopes to revitalize the scholarly examination of cooperatives by developing a new model that overcomes the fundamental problem of the cooperative model: limited access to capital markets. By dividing the ownership interest into a financial interest and a control interest, the dual ownership structure allows cooperatives to issue stock in the capital market by making a financial product out of the financial interest.
Abstract:
In many applications the observed data can be viewed as a censored version of a high-dimensional full-data random variable X. By the curse of dimensionality it is typically not possible to construct estimators that are asymptotically efficient at every probability distribution in a semiparametric censored-data model of such a high-dimensional censored-data structure. We provide a general method for the construction of one-step estimators that are efficient at a chosen submodel of the full-data model, are still well behaved off this submodel, and can be chosen to always improve on a given initial estimator. These one-step estimators rely on good estimators of the censoring mechanism and thus require a parametric or semiparametric model for the censoring mechanism. We present a general theorem that provides a template for proving the desired asymptotic results. We illustrate the general one-step estimation methods by constructing locally efficient one-step estimators of marginal distributions and regression parameters with right-censored data, current status data, and bivariate right-censored data, in all models allowing the presence of time-dependent covariates. The conditions of the asymptotics theorem are rigorously verified in one of the examples, and the key condition of the general theorem is verified for all examples.
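Schematically, the one-step construction updates an initial estimator by the empirical mean of an estimated efficient influence curve (the notation here is generic, not copied from the paper):

$$\hat\mu_n^{1} \;=\; \hat\mu_n^{0} \;+\; \frac{1}{n}\sum_{i=1}^{n} \widehat{IC}\!\left(Y_i \mid \hat G_n,\, \hat\mu_n^{0}\right),$$

where the $Y_i$ are the observed (censored) data, $\hat G_n$ is the estimator of the censoring mechanism, and $\widehat{IC}$ is the influence curve that is efficient at the chosen submodel; good behavior of $\hat G_n$ under its parametric or semiparametric model is what drives both the local efficiency and the robustness off the submodel.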
Abstract:
Graphene is one of the most important materials. In this research, the structures and properties of graphene nano disks (GND) with a concentric shape were investigated by density functional theory (DFT) calculations, employing the B3LYP and PW91PW91 functionals. It was found that there are two types of edges, zigzag and armchair, in concentric GNDs. The bond length between armchair-edge carbons is much shorter than that between zigzag-edge carbons. For the C24 GND, which consists of 24 carbon atoms, only an armchair edge with 12 atoms is formed. For a GND larger than the C24 GND, armchair and zigzag edges co-exist. Furthermore, while the number of armchair-edge carbon atoms is always 12, the number of zigzag-edge atoms increases with the size of the GND. In addition, the stability of a GND is enhanced with increasing size, because the ratio of edge atoms to non-edge atoms decreases. The size effect of a GND on its HOMO-LUMO energy gap was also evaluated. C6 and C24 GNDs possess HOMO-LUMO gaps of 1.7 and 2.1 eV, respectively, indicating that they are semiconductors. In contrast, C54 and C96 GNDs are organic metals, because their HOMO-LUMO gaps are as low as 0.3 eV. The effect of doping foreign atoms onto the edges of GNDs on their structures, stabilities, and HOMO-LUMO energy gaps was also examined. When foreign atoms are attached to the edge of a GND, the originally unsaturated carbon atoms become saturated. As a result, both the C-C bond lengths and the stability of the GND increase. Furthermore, the doping effect on the HOMO-LUMO energy gap depends on the type of doped atom: doping H, F, or OH onto the edge of a GND increases its HOMO-LUMO energy gap, whereas a Li-doped GND has a lower HOMO-LUMO energy gap than the undoped one. Therefore, Li doping can increase the electrical conductance of a GND, whereas H, F, or OH doping decreases it.
Abstract:
Reducing the uncertainties related to blade dynamics by improving the quality of numerical simulations of the fluid-structure interaction process is key to a breakthrough in wind-turbine technology. A fundamental step in that direction is the implementation of aeroelastic models capable of capturing the complex features of innovative prototype blades, so that they can be tested at realistic full-scale conditions with a reasonable computational cost. We make use of a code based on a combination of two advanced numerical models implemented on a parallel HPC supercomputer platform. The first is a model of the structural response of heterogeneous composite blades, based on a variation of the dimensional reduction technique proposed by Hodges and Yu. This technique reduces the geometrical complexity of the blade section into a stiffness matrix for an equivalent beam. The reduced 1-D strain energy is equivalent to the actual 3-D strain energy in an asymptotic sense, allowing accurate modeling of the blade structure as a 1-D finite-element problem and substantially reducing the computational effort required to model the structural dynamics at each time step. The second is a novel aerodynamic model based on an advanced implementation of BEM (Blade Element Momentum) theory, in which all velocities and forces are re-projected through orthogonal matrices into the instantaneous deformed configuration, to fully include the effects of large displacements and rotation of the airfoil sections in the computation of aerodynamic forces. This allows the aerodynamic model to take into account the complex flexo-torsional deformation captured by the more sophisticated structural model mentioned above. In this thesis we have successfully developed a powerful computational tool for the aeroelastic analysis of wind-turbine blades. Because it fully represents the combined modes of deformation of the blade as a complex structural part, and their effects on the aerodynamic loads, it constitutes a substantial advancement over the state-of-the-art aeroelastic models currently available, such as the FAST-Aerodyn suite. We also include the results of several experiments on the NREL-5MW blade, which is widely accepted today as a benchmark blade, together with some modifications intended to explore the capacities of the new code in capturing features of blade-dynamic behavior that are normally overlooked by existing aeroelastic models.
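A toy sketch of the reprojection idea described above follows: the inflow velocity at a blade section is rotated by an orthogonal matrix into the instantaneous deformed section frame before the sectional aerodynamics are evaluated, and the resulting force is rotated back. A single torsion angle stands in here for the full 3-D kinematics of the actual code, and the thin-airfoil lift slope is an illustrative placeholder:

```python
# Toy sketch of velocity/force reprojection: rotate the inflow into the
# deformed airfoil frame, evaluate sectional aerodynamics there, rotate
# the force back. One torsion angle stands in for the full 3-D case.
import numpy as np

def rotation(theta):
    """Orthogonal matrix for the section's instantaneous torsional twist."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def sectional_force(v_global, twist, rho=1.225, chord=3.0, cl_alpha=2*np.pi):
    R = rotation(twist)
    v_local = R.T @ v_global                    # inflow seen by the deformed section
    alpha = np.arctan2(v_local[1], v_local[0])  # local angle of attack
    q = 0.5 * rho * (v_local @ v_local)         # dynamic pressure
    lift_local = np.array([0.0, q * chord * cl_alpha * alpha])  # thin-airfoil lift
    return R @ lift_local                       # back to the global frame

print(sectional_force(np.array([60.0, 5.0]), twist=np.deg2rad(4.0)))
```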
Abstract:
This technical report discusses the application of the Lattice Boltzmann Method (LBM) to the simulation of fluid flow through the porous filter wall of a disordered medium. The diesel particulate filter (DPF) is an example of such a disordered medium. The DPF was developed as a cutting-edge technology to reduce harmful particulate matter in engine exhaust; the porous filter wall of the DPF traps soot particles during the after-treatment of the exhaust gas. To examine the phenomena inside the DPF, researchers are turning to the Lattice Boltzmann Method as a promising alternative simulation tool. The LBM is a comparatively new numerical scheme that can be used to simulate single-component single-phase and single-component multi-phase fluid flow, and it is also an excellent method for modelling flow through disordered media. The current work focuses on single-phase fluid flow simulation inside a porous micro-structure using LBM. First, the theory behind the development of LBM is discussed: the evolution of LBM is usually related to lattice gas cellular automata (LGCA), but it is also shown that the method is a special discretized form of the continuous Boltzmann equation. Since all the simulations are conducted in two dimensions, the equations are developed for the D2Q9 (two-dimensional, 9-velocity) model. An artificially created porous micro-structure is used in this study, and the flow simulations are conducted with air and CO2 as the fluids. The numerical model is explained with a flowchart and the coding steps; the numerical code is written in MATLAB. The different types of boundary conditions and their importance are discussed separately, and the equations specific to the boundary conditions are derived. The pressure and velocity contours over the porous domain are studied and recorded, and the results are compared with published work. The permeability values obtained in this study fit the relation proposed by Nabovati [8], and the results are in excellent agreement within the porosity range of 0.4 to 0.8.
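A minimal sketch of one D2Q9 stream-and-collide step with a BGK collision operator, of the generic textbook kind the report describes, is given below in Python; this is not the report's MATLAB code, and it uses periodic boundaries with no solid obstacles for brevity:

```python
# Minimal D2Q9 lattice Boltzmann step (BGK collision), generic textbook
# form. Populations f have shape (9, ny, nx); boundaries are periodic.
import numpy as np

# D2Q9 velocity set and weights
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def step(f, tau=0.8):
    # Streaming: advect each population along its lattice velocity.
    for i in range(9):
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=1), c[i, 1], axis=0)
    # Macroscopic moments.
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    # BGK collision: relax toward local equilibrium with relaxation time tau.
    return f - (f - equilibrium(rho, ux, uy)) / tau

ny, nx = 32, 64
f = equilibrium(np.ones((ny, nx)), np.full((ny, nx), 0.05), np.zeros((ny, nx)))
for _ in range(100):
    f = step(f)
print("mean density:", f.sum(axis=0).mean())  # conserved by construction
```

In a porous-media simulation of the kind reported here, the solid walls of the micro-structure would additionally be handled by a bounce-back rule applied to the populations at solid nodes, which is one of the boundary conditions the report derives.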