934 results for string breaking
Abstract:
Despite the central role of the media in contemporary society, studies examining the rhetorical practices of journalists are rare in organization and management research. We know little of the textual micro-strategies and techniques through which journalists convey specific messages to their readers. To partially fill this gap, this paper outlines a methodological framework that combines three perspectives on text analysis and interpretation: critical discourse analysis, systemic functional grammar and rhetorical structure theory. Using this framework, we engage in a close reading of a single media text (a press article) on a recent case of industrial restructuring in the financial services sector. In our empirical analysis, we focus on the key arguments put forward in the journalists’ rhetorical constructions. We maintain that these arguments—which are not frame-breaking but rather tend to confirm existing presuppositions held by the audience—are an essential part of the legitimization and naturalization of specific management ideas and ideologies.
Abstract:
Modeling and forecasting implied volatility (IV) is important to both practitioners and academics, especially in trading, pricing, hedging, and risk management activities, all of which require an accurate volatility estimate. This has become challenging since the 1987 stock market crash, as implied volatilities (IVs) recovered from stock index options exhibit two patterns: the volatility smirk (skew) and the volatility term structure, which, when examined together, form a rich implied volatility surface (IVS). This implies that the assumptions behind the Black-Scholes (1973) model do not hold empirically, as asset prices are influenced by many underlying risk factors. This thesis, consisting of four essays, models and forecasts implied volatility in the presence of these empirical regularities of options markets. The first essay models the dynamics of the IVS; it extends the Dumas, Fleming and Whaley (DFW) (1998) framework, for instance by using moneyness in the implied forward price and OTM put-call options on the FTSE100 index, and nonlinear optimization is used to estimate different models and thereby produce rich, smooth IVSs. Here, the constant-volatility model fails to explain the variation in the rich IVS. Next, it is found that three factors can explain about 69-88% of the variance in the IVS. Of this, on average, 56% is explained by the level factor, 15% by the term-structure factor, and a further 7% by the jump-fear factor. The second essay proposes a quantile regression model of the contemporaneous asymmetric return-volatility relationship, generalizing the Hibbert et al. (2008) model. The results show a strong negative asymmetric return-volatility relationship at various quantiles of the IV distribution; the relationship increases monotonically when moving from the median quantile to the uppermost quantile (i.e., 95%), so OLS underestimates it at upper quantiles. Additionally, the asymmetric relationship is more pronounced for the smirk (skew)-adjusted volatility index measure than for the old volatility index measure. Overall, the volatility indices are ranked in terms of asymmetric volatility as follows: VIX, VSTOXX, VDAX, and VXN. The third essay examines the information content of the new-VDAX volatility index for forecasting daily Value-at-Risk (VaR) estimates and compares its VaR forecasts with those of Filtered Historical Simulation and RiskMetrics. All daily VaR models are then backtested from 1992 to 2009 using unconditional coverage, independence, conditional coverage, and quadratic-score tests. It is found that the VDAX subsumes almost all information required for forecasting the daily VaR of a portfolio on the DAX30 index; implied-VaR models outperform all other VaR models. The fourth essay models the risk factors driving swaption IVs. It is found that three factors can explain 94-97% of the variation in each of the EUR, USD, and GBP swaption IVs. There are significant linkages across factors, and bi-directional causality is at work between the factors implied by EUR and USD swaption IVs. Furthermore, the factors implied by EUR and USD IVs respond to each other’s shocks; surprisingly, however, GBP does not affect them. Finally, calibration results for the string market model show that it can efficiently reproduce (or forecast) the volatility surface of each swaption market.
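For readers wanting a concrete sense of the second essay's approach, the following is a minimal sketch of an asymmetric return-volatility quantile regression in the spirit of Hibbert et al. (2008); the data file, column names and quantile choices are hypothetical placeholders, not the thesis's actual specification.

```python
# Sketch: contemporaneous asymmetric return-volatility relation via quantile regression.
# Assumes a DataFrame with daily index returns ('ret') and daily changes in an
# implied-volatility index ('d_iv'); the column names and the CSV file are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("iv_and_returns.csv")            # hypothetical input file
df["ret_pos"] = df["ret"].clip(lower=0.0)         # positive returns only
df["ret_neg"] = df["ret"].clip(upper=0.0)         # negative returns only

X = sm.add_constant(df[["ret_pos", "ret_neg"]])
y = df["d_iv"]

for q in (0.05, 0.50, 0.95):                      # lower, median and upper quantiles
    res = sm.QuantReg(y, X).fit(q=q)
    print(f"q={q:.2f}  beta_neg={res.params['ret_neg']:.3f}  "
          f"beta_pos={res.params['ret_pos']:.3f}")

# OLS benchmark: a single conditional-mean estimate that, per the thesis,
# understates the asymmetry visible at the upper quantiles.
print(sm.OLS(y, X).fit().params)
```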
Abstract:
Knowledge Flow, my dear friend! I would like to introduce you to a close relative of yours: Organizational Communication. You might want to take a moment to hear what your newfound kin has to say. As bright as you are, dear Flow, you're missing a piece of the puzzle - for one cannot study any aspect of an organization relating to communication without acknowledging the message. Without a message, communication does not exist. Organizational Communication has always appreciated this. Perhaps the time has come for you to join ranks and do so too? The main point of this work is to prove that the form of a message considerably affects communication, interpretation - and knowledge flow. As stories are at the heart of this thesis, and entertaining, reader-friendly communication its main argument, the entire manuscript is written in story form and intentionally breaks with academic writing tradition as far as writing style goes. Each chapter reads as a story of sorts, and put together they create a grand narrative of my journey as a PhD student, the research I have conducted and the outcomes of this work. Thus, if a reader hopes to make any sense of this thesis, she must read it in the same way one would read a novel, from beginning to end. This is a thesis with three aspirations. First, it sets out to prove that knowledge flow cannot be studied without a message. Second, it moves on to give the reader a once-over of a much-used message form: storytelling. After these two goals are tackled, the path is clear to investigate whether message form is indeed as essential as claimed. I do so through both a qualitative and a quantitative study. The former acted both as a stepping stone into the research area and as an inspirational pilot, from which the research design for the larger quantitative study was drawn. Together, these two studies answered my research question - and allowed me to fulfill the third, final and foremost aspiration of this study: bridging the gap between two separate fields of knowledge management, knowledge flow and storytelling.
Abstract:
Encoding protein 3D structures into 1D strings using short structural prototypes, or structural alphabets, opens a new front for structure comparison and analysis. Using the well-documented 16 motifs of Protein Blocks (PBs) as a structural alphabet, we have developed a methodology to compare protein structures that are encoded as sequences of PBs by aligning them with dynamic programming using a substitution matrix for PBs. This methodology is implemented in the applications available on the Protein Block Expert (PBE) server. PBE addresses common issues in the field of protein structure analysis, such as the comparison of protein structures and the identification of protein structures in structural databanks that resemble a given structure. PBE-T provides the facility to transform any PDB file into sequences of PBs. PBE-ALIGNc performs comparison of two protein structures based on the alignment of their corresponding PB sequences. PBE-ALIGNm is a facility for mining the SCOP database for similar structures based on the alignment of PBs. In addition, PBE provides an interface to a database (PBE-SAdb) of preprocessed PB sequences from SCOP culled at 95% and of all-against-all pairwise PB alignments at the family and superfamily levels. The PBE server is freely available at http://bioinformatics.univ-reunion.fr/PBE/.
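As an illustration of the kind of alignment PBE-ALIGNc performs, here is a minimal Needleman-Wunsch dynamic-programming sketch over the 16-letter PB alphabet; the substitution scores and gap penalty are crude placeholders rather than PBE's actual PB substitution matrix.

```python
# Sketch: aligning two Protein Block (PB) sequences by dynamic programming.
# The substitution matrix below is a crude match/mismatch placeholder, not the
# real PB substitution matrix used by PBE.
PB_ALPHABET = "abcdefghijklmnop"          # the 16 Protein Blocks
SUB = {(x, y): (2 if x == y else -1) for x in PB_ALPHABET for y in PB_ALPHABET}
GAP = -3                                  # hypothetical linear gap penalty

def align_pb(s1: str, s2: str) -> int:
    """Global (Needleman-Wunsch) alignment score of two PB sequences."""
    n, m = len(s1), len(s2)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * GAP
    for j in range(1, m + 1):
        dp[0][j] = j * GAP
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = max(
                dp[i - 1][j - 1] + SUB[(s1[i - 1], s2[j - 1])],  # (mis)match
                dp[i - 1][j] + GAP,                              # gap in s2
                dp[i][j - 1] + GAP,                              # gap in s1
            )
    return dp[n][m]

# Toy usage with made-up PB strings:
print(align_pb("mmmnopacd", "mmmnopccd"))
```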
Abstract:
Investigations of different superconducting (S)/ferromagnetic (F) heterostructures grown by pulsed laser deposition reveal that the activation energy (U) for vortex motion in a high-Tc superconductor is reduced remarkably by the presence of F layers. The U exhibits a logarithmic dependence on the applied magnetic field in the S/F bilayers, suggesting the existence of decoupled two-dimensional (2D) pancake vortices. This result is discussed in terms of the reduction in the effective S layer thickness and the weakening of the S coherence length due to the presence of F layers. In addition, the U and the superconducting Tc in YBa2Cu3O7-delta/La0.5Sr0.5CoO3 bilayers are observed to be much lower than in the YBa2Cu3O7-delta/La0.7Sr0.3MnO3 ones. This in turn suggests that the degree of spin polarization of the F layer might not play a crucial role in the suppression of superconductivity due to a spin-polarization-induced pair-breaking effect in S/F bilayers.
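To make the reported field dependence concrete, the following is a minimal sketch of fitting U(H) = U0 ln(H0/H), the logarithmic form associated with decoupled 2D pancake vortices; the data points and parameter values are placeholders, not the measured results.

```python
# Sketch: fitting an activation energy U(H) = U0 * ln(H0 / H), the logarithmic
# field dependence expected for decoupled 2D pancake vortices.
# The data points below are placeholders, not the measured values.
import numpy as np
from scipy.optimize import curve_fit

H = np.array([0.1, 0.3, 1.0, 3.0, 7.0])                # applied field (T), hypothetical
U = np.array([2400.0, 1900.0, 1400.0, 900.0, 550.0])   # activation energy (K), hypothetical

def u_log(h, u0, h0):
    return u0 * np.log(h0 / h)

(u0, h0), _ = curve_fit(u_log, H, U, p0=(500.0, 20.0))
print(f"U0 = {u0:.0f} K, H0 = {h0:.1f} T")
```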
Abstract:
A few simple three-atom thermoneutral radical exchange reactions (i.e. A + BC --> AB + C) are examined by ab initio SCF methods. Emphasis is laid on the detailed analysis of density matrices rather than on energetics. Results reveal that the sum of the bond orders of the breaking and forming bonds is not conserved to unity, due to the development of free valence on the migrating atom 'B' in the transition state. Bond orders, free valence and spin densities on the atoms are calculated. The present analysis shows that the bond-cleavage process is always more advanced than the bond-formation process in the transition state. Further analysis shows a development of negative spin density on the migrating atom 'B' in the transition state. The depletion of the alpha-spin density on the radical site 'A' in the reactant during the reaction lags behind the growth of the alpha-spin density on the terminal atom 'C' of the reactant bond 'B-C' in the transition state. But all these processes are completed simultaneously at the end of the reaction. Hence, the reactions are asynchronous but kinetically concerted in most cases.
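As an illustration of how bond orders can be read off a density matrix, here is a minimal sketch of a Mayer-type bond-order calculation; the matrices, the atom-to-basis-function map and the numbers are hypothetical, and this generic formula stands in for, rather than reproduces, the paper's analysis.

```python
# Sketch: Mayer-type bond order B_AB = sum_{mu in A, nu in B} (PS)_{mu,nu} (PS)_{nu,mu},
# computed from a density matrix P and overlap matrix S. The tiny matrices and the
# atom -> basis-function map below are placeholders, not data from the paper.
import numpy as np

P = np.array([[1.2, 0.4, 0.1],
              [0.4, 1.0, 0.3],
              [0.1, 0.3, 0.8]])            # hypothetical density matrix
S = np.eye(3)                              # hypothetical overlap matrix (orthonormal basis)
atoms = {"A": [0], "B": [1], "C": [2]}     # which basis functions belong to which atom

PS = P @ S

def bond_order(a, b):
    return sum(PS[mu, nu] * PS[nu, mu] for mu in atoms[a] for nu in atoms[b])

for pair in (("A", "B"), ("B", "C")):
    print(pair, round(bond_order(*pair), 3))

# The paper's observation is that bond_order('A','B') + bond_order('B','C') need not
# stay at unity along the reaction path, because free valence develops on the migrating atom B.
```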
Abstract:
Alternating differential scanning calorimetry (ADSC) studies were undertaken to investigate the effect of Tl addition on the thermal properties of As30Te70-xTlx (6 <= x <= 22 at.%) glasses. These include parameters such as the glass-transition temperature (Tg), the change in specific heat capacity (Delta Cp) and the relaxation enthalpy (Delta H-NR) at the glass transition. It was found that Tg of the glasses decreased with the addition of Tl, which is in contrast to the dependence of Tg in As-Te glasses on the addition of Al and In. The change in heat capacity Delta Cp through the glass transition was also found to decrease with increasing Tl content. The addition of Tl to the As-Te matrix may lead to a breaking of As-Te chains and the formation of Tl+Te- AsTe2/2 dipoles. There was no significant dependence of the change of relaxation enthalpy through the glass transition on composition.
Abstract:
The alloy Ti-6Al-4V is an alpha + beta Ti alloy that has a large prior beta grain size (about 2 mm) in the as-cast state. Minor addition of B (about 0.1 wt.%) refines the grain size significantly and produces in-situ TiB needles. The role played by these microstructural modifications in the high-temperature deformation processing maps of B-modified Ti64 alloys is examined in this paper. Power dissipation efficiency and instability maps have been generated within the temperature range of 750-1000 degrees C and the strain rate range of 10^-3 to 10^+1 s^-1. Various deformation mechanisms, which operate in different temperature-strain rate regimes, were identified with the aid of the maps and complementary microstructural analysis of the deformed specimens. Results indicate four distinct deformation domains within the range of experimental conditions examined, with the combination of 900-1000 degrees C and 10^-3 to 10^-2 s^-1 being the optimum for hot working. In that zone, dynamic globularization of alpha laths is the principal deformation mechanism. The marked reduction in the prior beta grain size achieved with the addition of B does not appear to alter this domain markedly. The other domains, with negative values of the instability parameter, show undesirable microstructural features such as extensive kinking/bending of alpha laths and breaking of beta laths for Ti64-0.0B, as well as generation of voids and cracks in the matrix and in the TiB needles of the B-modified alloys.
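For context, processing maps of this kind are usually built from the strain-rate sensitivity m of the flow stress, with efficiency eta = 2m/(m+1) and instability parameter xi = d ln(m/(m+1))/d ln(strain rate) + m (negative xi flags flow instability); the sketch below applies these standard dynamic-materials-model formulas to a placeholder flow-stress grid, not to the paper's Ti64 data.

```python
# Sketch: power-dissipation efficiency and instability parameter of a processing map,
# following the dynamic materials model: m = dln(sigma)/dln(strain rate),
# eta = 2m/(m+1), xi = dln(m/(m+1))/dln(strain rate) + m  (xi < 0 => flow instability).
# The flow-stress grid below is a synthetic placeholder, not measured Ti64 data.
import numpy as np

log_rate = np.linspace(-3, 1, 9)                      # log10(strain rate), 1e-3..1e1 s^-1
temps = np.array([750, 800, 850, 900, 950, 1000])     # deg C
rng = np.random.default_rng(0)
log_sigma = (2.0 - 0.02 * (temps[:, None] - 750) / 50
             + 0.2 * log_rate[None, :]
             + 0.01 * rng.standard_normal((temps.size, log_rate.size)))

m = np.gradient(log_sigma, log_rate, axis=1)          # strain-rate sensitivity
eta = 2 * m / (m + 1)                                 # efficiency of power dissipation
xi = np.gradient(np.log10(m / (m + 1)), log_rate, axis=1) + m   # instability parameter

print("max efficiency:", float(eta.max()))
print("fraction of grid flagged unstable:", float((xi < 0).mean()))
```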
Abstract:
The problem of controlling the vibration pattern of a driven string is considered. The basic question dealt with here is to find the control forces which reduce the energy of vibration of a driven string over a prescribed portion of its length while maintaining the energy outside that length above a desired value. The criterion of keeping the response outside the region of energy reduction as close to the original response as possible is introduced as an additional constraint. The slack unconstrained minimization technique (SLUMT) has been successfully applied to solve the above problem. The effect of varying the phase of the control forces (which results in a six-variable control problem) is then studied. The nonlinear programming techniques which have been effectively used to handle problems involving many variables and constraints therefore offer a powerful tool for the solution of vibration control problems.
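A minimal numerical sketch of the same idea, using scipy's generic SLSQP optimizer rather than the SLUMT method of the paper: the steady-state modal response of a pinned string is computed, and control forces are chosen to reduce the response over one region subject to a constraint on the energy elsewhere; all mode counts, locations, frequencies and thresholds are hypothetical.

```python
# Sketch: reducing the vibration of a driven string over one region while keeping the
# energy elsewhere above a prescribed level. Uses scipy's SLSQP, not the paper's SLUMT;
# all parameters below are hypothetical placeholders.
import numpy as np
from scipy.optimize import minimize

N_MODES, L, OMEGA = 20, 1.0, 12.0                 # modes kept, string length, driving frequency
x = np.linspace(0.0, L, 201)
n = np.arange(1, N_MODES + 1)
omega_n = n * np.pi                               # natural frequencies of a pinned unit string

def response(locs, amps):
    """Steady-state displacement amplitude w(x) for point forces at the given locations."""
    w = np.zeros_like(x)
    for xf, f in zip(locs, amps):
        q = f * np.sin(n * np.pi * xf / L) / (omega_n**2 - OMEGA**2)  # modal amplitudes
        w += q @ np.sin(np.outer(n, np.pi * x / L))
    return w

drive_loc, drive_amp = 0.23, 1.0                  # the given (uncontrollable) driving force
ctrl_locs = [0.55, 0.70, 0.85]                    # where the control forces act
quiet = (x > 0.5) & (x < 0.9)                     # region whose vibration energy is reduced

w0 = response([drive_loc], [drive_amp])           # uncontrolled response
E_out0 = np.mean(w0[~quiet] ** 2)                 # "energy" outside the quiet region

def controlled(u):
    return response([drive_loc] + ctrl_locs, [drive_amp] + list(u))

def energy_in_quiet(u):
    return np.mean(controlled(u)[quiet] ** 2)

# Constraint: keep the energy outside the quiet region above 80% of its original value.
keep_outside = {"type": "ineq",
                "fun": lambda u: np.mean(controlled(u)[~quiet] ** 2) - 0.8 * E_out0}

res = minimize(energy_in_quiet, x0=np.zeros(len(ctrl_locs)),
               method="SLSQP", constraints=[keep_outside])
print("control force amplitudes:", np.round(res.x, 3))
```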
Abstract:
The nonminimal coupling of a massive self-interacting scalar field with a gravitational field is studied. Spontaneous symmetry breaking occurs in the open universe even when the sign of the mass term is positive. In contrast to grand unified theories, symmetry breakdown is more important for the early universe, and symmetry is restored only in the limit of an infinite expansion. Symmetry breakdown is shown to occur in flat and closed universes when the mass term carries the wrong sign. The model has a naturally defined effective gravitational coupling coefficient which is rendered time-dependent by the novel symmetry breakdown. It changes sign below a critical value of the cosmic scale factor, indicating the onset of a repulsive field. The presence of the mass term severely alters the behaviour of ordinary matter and radiation in the early universe. The total energy density becomes negative in a certain domain. These features make possible a nonsingular cosmology.
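For concreteness, one standard form of a nonminimally coupled, massive self-interacting scalar field and the resulting effective gravitational coupling is shown below; sign conventions vary, and this is an assumed generic form rather than the paper's exact Lagrangian. In this form G_eff changes sign once xi*phi^2 exceeds 1/(8 pi G), consistent with the sign change described above.

```latex
S=\int d^{4}x\,\sqrt{-g}\left[\frac{R}{16\pi G}
   -\frac{\xi}{2}R\phi^{2}
   +\frac{1}{2}g^{\mu\nu}\partial_{\mu}\phi\,\partial_{\nu}\phi
   -\frac{1}{2}m^{2}\phi^{2}-\frac{\lambda}{4}\phi^{4}\right],
\qquad
G_{\mathrm{eff}}=\frac{G}{1-8\pi G\,\xi\phi^{2}} .
```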
Abstract:
Thiourea (CS(NH2)2) is one of the few examples of molecular crystals exhibiting ferroelectric properties. The dielectric constant along the ferroelectric axis [100] shows maxima at 169, 177 and 202 K. An inflection point occurs at 170.5 K. Following Goldsmith and White, the phases are named as I (F.E. below 169 K), II (A.F.E. 169 K
Abstract:
The numerical values of gA are evaluated using quantum-chromodynamic sum rules. The nuclear medium effects are taken into account by modifying the chiral symmetry breaking correlation. Our results indicate a quenching of gA in a nuclear medium. The physical reasons for this fundamental quenching are noted to be the same as those for the effective mass of the nucleon bound in a nucleus being less than its free-space value.
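One commonly quoted, model-independent leading-order expression for the in-medium reduction of the chiral condensate (the order parameter of chiral symmetry breaking) is shown below as an illustration of the kind of modification the abstract refers to; it is not necessarily the correlator actually used in the paper.

```latex
\frac{\langle\bar{q}q\rangle_{\rho}}{\langle\bar{q}q\rangle_{0}}
  \;\simeq\; 1-\frac{\sigma_{N}\,\rho}{f_{\pi}^{2}m_{\pi}^{2}} .
```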
Abstract:
Floating in the air that surrounds us are numerous small particles, invisible to the human eye. The mixture of air and particles, liquid or solid, is called an aerosol. Aerosols have significant effects on air quality, visibility and health, and on the Earth's climate. Their effect on the Earth's climate is the least understood of their climatically relevant effects. They can scatter the incoming radiation from the Sun, or they can act as seeds onto which cloud droplets form. Aerosol particles are created directly, by human activity or by natural processes such as breaking ocean waves or sandstorms. They can also be created indirectly, as vapors or very small particles are emitted into the atmosphere and combine to form small particles that later grow to reach climatically or health-relevant sizes. The mechanisms through which those particles are formed are still under scientific discussion, even though this knowledge is crucial for making air quality or climate predictions, and for understanding how aerosols will influence and be influenced by the climate's feedback loops. One of the proposed mechanisms responsible for new particle formation is ion-induced nucleation. This mechanism is based on the idea that newly formed particles are ultimately formed around an electric charge. The amount of available charge in the atmosphere varies depending on radon concentrations in the soil and in the air, as well as on incoming ionizing radiation from outer space. In this thesis, ion-induced nucleation is investigated through long-term measurements in two different environments: at the background site of Hyytiälä and at the urban site of Helsinki. The main conclusion of this thesis is that ion-induced nucleation generally plays a minor role in new particle formation. The fraction of particles formed varies from day to day and from place to place. The relative importance of ion-induced nucleation, i.e. the fraction of particles formed through ion-induced nucleation, is bigger in cleaner areas where the absolute number of particles formed is smaller. Moreover, ion-induced nucleation contributes a bigger fraction of particles on warmer days, when the sulfuric acid and water vapor saturation ratios are lower. This analysis will help to understand the feedbacks associated with climate change.
Abstract:
A distributed system is a collection of networked autonomous processing units which must work in a cooperative manner. Currently, large-scale distributed systems, such as various telecommunication and computer networks, are abundant and used in a multitude of tasks. The field of distributed computing studies what can be computed efficiently in such systems. Distributed systems are usually modelled as graphs where nodes represent the processors and edges denote communication links between processors. This thesis concentrates on the computational complexity of the distributed graph colouring problem. The objective of the graph colouring problem is to assign a colour to each node in such a way that no two nodes connected by an edge share the same colour. In particular, it is often desirable to use only a small number of colours. This task is a fundamental symmetry-breaking primitive in various distributed algorithms. A graph that has been coloured in this manner using at most k different colours is said to be k-coloured. This work examines the synchronous message-passing model of distributed computation: every node runs the same algorithm, and the system operates in discrete synchronous communication rounds. During each round, a node can communicate with its neighbours and perform local computation. In this model, the time complexity of a problem is the number of synchronous communication rounds required to solve the problem. It is known that 3-colouring any k-coloured directed cycle requires at least ½(log* k - 3) communication rounds and is possible in ½(log* k + 7) communication rounds for all k ≥ 3. This work shows that for any k ≥ 3, colouring a k-coloured directed cycle with at most three colours is possible in ½(log* k + 3) rounds. In contrast, it is also shown that for some values of k, colouring a directed cycle with at most three colours requires at least ½(log* k + 1) communication rounds. Furthermore, in the case of directed rooted trees, reducing a k-colouring into a 3-colouring requires at least log* k + 1 rounds for some k and is possible in log* k + 3 rounds for all k ≥ 3. The new positive and negative results are derived using computational methods, as the existence of distributed colouring algorithms corresponds to the colourability of so-called neighbourhood graphs. The colourability of these graphs is analysed using Boolean satisfiability (SAT) solvers. Finally, this thesis shows that similar methods are applicable in capturing the existence of distributed algorithms for other graph problems, such as the maximal matching problem.
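As a concrete illustration of the standard primitive behind these log* k bounds, here is a minimal sketch of one Cole-Vishkin style colour-reduction round on a directed cycle; it is the textbook reduction down to six colours, not the SAT-derived algorithms of the thesis.

```python
# Sketch: one Cole-Vishkin colour-reduction round on a k-coloured directed cycle.
# Each node looks only at its predecessor's colour (one synchronous round) and computes
# a new colour from the index of the lowest bit where the two colours differ, with its
# own bit value appended. Repeating this drives k colours down to O(log k) per round.

def reduce_round(colours):
    """colours[i] is the colour of node i; node i's predecessor is node i-1 (cyclically)."""
    n = len(colours)
    new = []
    for i in range(n):
        own, pred = colours[i], colours[(i - 1) % n]
        diff = own ^ pred
        j = (diff & -diff).bit_length() - 1       # index of the lowest differing bit
        new.append(2 * j + ((own >> j) & 1))      # new colour: bit position plus own bit value
    return new

cycle = [5, 12, 3, 9, 0, 7]                       # a proper 16-colouring of a 6-cycle
while max(cycle) > 5:                             # iterate down to a 6-colouring
    cycle = reduce_round(cycle)
print(cycle)
```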
Abstract:
Physics at the Large Hadron Collider (LHC) and the International e+e- Linear Collider (ILC) will be complementary in many respects, as has been demonstrated at previous generations of hadron and lepton colliders. This report addresses the possible interplay between the LHC and ILC in testing the Standard Model and in discovering and determining the origin of new physics. Mutual benefits for the physics programme at both machines can occur both at the level of a combined interpretation of Hadron Collider and Linear Collider data and at the level of combined analyses of the data, where results obtained at one machine can directly influence the way analyses are carried out at the other machine. Topics under study comprise the physics of weak and strong electroweak symmetry breaking, supersymmetric models, new gauge theories, models with extra dimensions, and electroweak and QCD precision physics. The status of the work that has been carried out within the LHC/ILC Study Group so far is summarized in this report. Possible topics for future studies are outlined.