998 results for FORMALISM


Relevance: 10.00%

Abstract:

The investigation of dissipative quantum systems makes it possible to observe quantum phenomena even on macroscopic length scales. The microscopic model chosen in this dissertation allows the effect of quantum dissipation, previously accessible only phenomenologically, to be derived and analysed mathematically and physically. The microscopic model under consideration is a one-dimensional chain of harmonic degrees of freedom that are coupled both to each other and to r anharmonic degrees of freedom. The cases of one and of two anharmonic bonds are treated explicitly in this work. To this end, an analytic separation of the harmonic from the anharmonic degrees of freedom is carried out in two different ways. The anharmonic potential is chosen to be a symmetric double-well potential, which, by means of the Wick rotation, allows the transitions between the two minima to be calculated. The harmonic degrees of freedom are eliminated using the well-known Feynman-Vernon path-integral formalism [21]. The thesis first investigates how the tunnelling behaviour depends on the position of a single anharmonic bond. For an anharmonic bond located far from the ends of the chain, Ohmic dissipative tunnelling is found, which at temperature T = 0 leads to a phase transition governed by a critical coupling constant C_crit. This phase transition had already been explained in purely phenomenological models with Ohmic dissipation by mapping the system onto the Ising model [26]. If, however, the anharmonic bond lies at one of the ends of the macroscopically large chain, a transition from Ohmic to super-Ohmic dissipation occurs after a time t_D that depends on the distance between the two anharmonic bonds, and this transition is clearly visible in the kernel K_M(τ). For two anharmonic bonds, their indirect interaction plays a decisive role. It is shown that the distance D between the two bonds and the choice of initial and final states determine the dissipation. Under the assumption that both anharmonic bonds tunnel simultaneously, a tunnelling probability p(t) is calculated analogously to [14], but for two anharmonic bonds. The result is either Ohmic dissipation, if both anharmonic bonds change their total length through the tunnelling, or super-Ohmic dissipation, if the tunnelling leaves their total length unchanged.
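
For orientation, the generic shapes involved can be written down explicitly (a standard parametrisation for illustration only, not taken from the thesis; the well depth $V_0$, the minima positions $\pm q_0$ and the spectral exponent $s$ are assumed symbols):

$$ V(q) = V_0 \left( \frac{q^2}{q_0^2} - 1 \right)^{2}, \qquad J(\omega) \propto \omega^{s}, \quad s = 1 \ \text{(Ohmic)}, \quad s > 1 \ \text{(super-Ohmic)}. $$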

Relevance: 10.00%

Abstract:

In the present thesis, we study the quantization of classical systems with non-trivial phase spaces using the group-theoretical quantization technique proposed by Isham. Our main goal is a better understanding of global and topological aspects of quantum theory. In practice, the group-theoretical approach enables the direct quantization of systems subject to constraints and boundary conditions in a natural and physically transparent manner -- cases for which Dirac's canonical quantization method fails. First, we provide a clarification of the quantization formalism. In contrast to prior treatments, we introduce a sharp distinction between the two group structures involved and explain their physical meaning. The benefit is a consistent and conceptually much clearer construction of the Canonical Group. In particular, we shed light upon the 'pathological' case in which the Canonical Group must be defined via a central Lie algebra extension, and emphasise the role of the central extension in general. In addition, we study the direct quantization of a particle restricted to a half-line with a 'hard wall' boundary condition. Despite the apparent simplicity of this example, we show that a naive quantization attempt based on the cotangent bundle over the half-line as the classical phase space leads to an incomplete quantum theory: the reflection that is a characteristic aspect of the 'hard wall' is not reproduced. Instead, we propose a different phase space that realises the necessary boundary condition as a topological feature, and demonstrate that quantization yields a suitable quantum theory for the half-line model. The insights gained in this special case improve our understanding of the relation between classical and quantum theory and illustrate how contact interactions may be incorporated.
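
For orientation, the standard group-theoretical treatment of the cotangent bundle over the half-line (Isham's textbook example, as opposed to the alternative phase space proposed in the thesis) trades the momentum, which fails to be self-adjoint on the half-line, for the dilation generator $d = qp$ and quantizes the resulting affine algebra:

$$ \{ q, d \} = q \quad \longrightarrow \quad [\hat{q}, \hat{d}] = i\hbar\, \hat{q}, \qquad \hat{q} > 0. $$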

Relevance: 10.00%

Abstract:

In this work I report recent results in equilibrium statistical mechanics, in particular on spin glass models and monomer-dimer models. We start by giving the mathematical background and the general formalism for disordered spin models, with some of their applications to physical and mathematical problems. Next we move on to general aspects of the theory of spin glasses, in particular the Sherrington-Kirkpatrick model, which is of fundamental interest for this work. In Chapter 3 we introduce the multi-species Sherrington-Kirkpatrick model (MSK), prove the existence of the thermodynamic limit and Guerra's bound for the quenched pressure, and give a detailed analysis of the annealed and replica symmetric regimes. The result is a multidimensional generalization of Parisi's theory. Finally, we briefly illustrate the strategy of Panchenko's proof of the lower bound. In Chapter 4 we discuss the Aizenman-Contucci and Ghirlanda-Guerra identities for a wide class of spin glass models; as an example of application, we discuss the role of these identities in the proof of the lower bound. In Chapter 5 we introduce the basic mathematical formalism of monomer-dimer models, together with a Gaussian representation of the partition function that is fundamental for the rest of the work. In Chapter 6 we introduce an interacting monomer-dimer model, derive its exact solution, and perform a detailed study of its analytical properties and related physical quantities. In Chapter 7 we introduce quenched randomness in the monomer-dimer model and show that, under suitable conditions, the pressure is a self-averaging quantity. The main result is that if the randomness sits only in the monomer activity, the model is exactly solvable.
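
For reference, the Sherrington-Kirkpatrick model referred to above is defined by the Hamiltonian below, and the quenched pressure is the object controlled by Guerra's bound (standard definitions; $\beta$ denotes the inverse temperature and the couplings $J_{ij}$ are i.i.d. standard Gaussian variables):

$$ H_N(\sigma) = -\frac{1}{\sqrt{N}} \sum_{1 \le i < j \le N} J_{ij}\, \sigma_i \sigma_j, \qquad p_N(\beta) = \frac{1}{N}\, \mathbb{E} \log \sum_{\sigma \in \{-1,+1\}^N} e^{-\beta H_N(\sigma)}. $$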

Relevance: 10.00%

Abstract:

Among the different approaches to constructing a fundamental quantum theory of gravity, the Asymptotic Safety scenario conjectures that quantum gravity can be defined within the framework of conventional quantum field theory, but only non-perturbatively. In this case its high-energy behavior is controlled by a non-Gaussian fixed point of the renormalization group flow, such that its infinite cutoff limit can be taken in a well-defined way. A theory of this kind is referred to as non-perturbatively renormalizable. In the last decade a considerable amount of evidence has been collected that in four-dimensional metric gravity such a fixed point, suitable for the Asymptotic Safety construction, indeed exists. This thesis extends the Asymptotic Safety program of quantum gravity by three independent studies that differ in the fundamental field variables the investigated quantum theory is based on, but all exhibit a gauge group of equivalent semi-direct product structure. This allows, for the first time, a direct comparison of three asymptotically safe theories of gravity constructed from different field variables. The first study investigates metric gravity coupled to SU(N) Yang-Mills theory. In particular, the gravitational corrections to the running of the gauge coupling are analyzed, and their implications for QED and the Standard Model are discussed. The second analysis is the first investigation of an asymptotically safe theory of gravity in a pure tetrad formulation. Its renormalization group flow is compared to the corresponding approximation of the metric theory, and the influence of its enlarged gauge group on the UV behavior of the theory is analyzed. The third study explores Asymptotic Safety of gravity in the Einstein-Cartan setting. Here, besides the tetrad, the spin connection is considered a second fundamental field. The larger number of independent field components and the enlarged gauge group render any RG analysis of this system much more difficult than the analogous metric analysis. In order to reduce the complexity of this task, a novel functional renormalization group equation is proposed that allows the flow to be evaluated in a purely algebraic manner. As a first example of its suitability, it is applied to a three-dimensional truncation given by the Holst action, with the Newton constant, the cosmological constant and the Immirzi parameter as its running couplings. A detailed comparison of the resulting renormalization group flow with a previous study of the same system demonstrates the reliability of the new equation and suggests its use for future studies of extended truncations in this framework.
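
For background, the conventional functional renormalization group equation on which Asymptotic Safety studies of this kind are built is the Wetterich equation (quoted here as standard context; the modified equation proposed in the thesis is not reproduced):

$$ \partial_t \Gamma_k = \frac{1}{2}\, \mathrm{STr}\!\left[ \left( \Gamma_k^{(2)} + \mathcal{R}_k \right)^{-1} \partial_t \mathcal{R}_k \right], \qquad t = \ln k, $$

where $\Gamma_k^{(2)}$ is the second functional derivative of the effective average action and $\mathcal{R}_k$ is the infrared regulator.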

Relevance: 10.00%

Abstract:

This work considers the reconstruction of strong gravitational lenses from their observed effects on the light distribution of background sources. After reviewing the formalism of gravitational lensing and the most common and relevant lens models, new analytical results on the elliptical power-law lens are presented, including new expressions for the deflection, potential, shear and magnification, which naturally lead to a fast numerical scheme for practical calculation. The main part of the thesis investigates lens reconstruction with extended sources by means of the forward reconstruction method, in which the lenses and sources are given by parametric models. The numerical realities of the problem make it necessary to find targeted optimisations for the forward method, in order to make it feasible for general application to modern, high-resolution images. The result of these optimisations is presented in the Lensed algorithm. Subsequently, a number of tests for general forward reconstruction methods are created to decouple the influence of source from lens reconstructions, in order to objectively demonstrate the constraining power of the reconstruction. The final chapters on lens reconstruction contain two sample applications of the forward method. One is the analysis of images from a strong lensing survey. Such surveys today contain $\sim 100$ strong lenses, and much larger sample sizes are expected in the future, making it necessary to quickly and reliably analyse catalogues of lenses with a fixed model. The second application deals with the opposite situation of a single observation that is to be confronted with different lens models, where the forward method allows for natural model-building. This is demonstrated using an example reconstruction of the "Cosmic Horseshoe". An appendix presents an independent work on the use of weak gravitational lensing to investigate theories of modified gravity which exhibit screening in the non-linear regime of structure formation.
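
For reference, the lensing formalism reviewed in the thesis rests on the thin-lens equation and the lensing potential $\psi$ (standard notation assumed):

$$ \beta = \theta - \nabla \psi(\theta), \qquad \kappa(\theta) = \tfrac{1}{2} \nabla^2 \psi(\theta), \qquad \mu^{-1} = \det\!\left( \frac{\partial \beta}{\partial \theta} \right), $$

with $\beta$ the source position, $\theta$ the image position, $\kappa$ the convergence and $\mu$ the magnification.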

Relevance: 10.00%

Abstract:

The causa finalis of the present work is an understanding of the phase diagram of hydrogen at ultra-high pressures, which range from insulating H2 all the way to metallic H. Since the conditions of ultra-high pressure are difficult to create in the laboratory, computer simulations are an important alternative tool of investigation. Such calculations, however, are a major challenge. One of the biggest problems is the accurate evaluation of the Born-Oppenheimer potential, which must be suitable for both the insulating and the metallic phase. Moreover, it must account for the strong correlations induced by the covalent H2 bonds and by the possible phase transitions. Our efforts have been aimed at this problem. In the context of variational Monte Carlo (VMC), the shadow wave function (SWF) is a very promising option. Owing to its flexibility in describing both localized and delocalized systems, and its ability to capture high-order correlations, it is an ideal candidate for our purposes. Unfortunately, its formulation entails a sign problem, which limits its applicability. Nevertheless, it is possible to circumvent this difficulty by fixing the nodal structure a priori. With this formalism we were able to improve the description of the electronic structure of hydrogen significantly, which offers a very promising perspective. In the course of this research we have also investigated the nature of the sign problem affecting the SWF, gaining a deeper understanding of its origin. The present work is divided into four chapters. The first chapter introduces VMC and the SWF, with particular focus on fermionic systems. Chapter 2 surveys the literature on the phase diagram of hydrogen at ultra-high pressure. The third chapter presents the implementation of our VMC program and the results obtained. Finally, Chapter 4 summarizes our efforts towards solving the sign problem associated with the SWF.
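
For context, a generic (bosonic) shadow wave function has the form below, following Vitiello, Runge and Kalos; the fermionic variant with a priori nodal structure studied in the thesis is not reproduced here, and $C$ is a variational parameter:

$$ \Psi(R) = \phi_p(R) \int \mathrm{d}S\; e^{-C \sum_i |\mathbf{r}_i - \mathbf{s}_i|^2}\, \phi_s(S), $$

where $R$ denotes the particle coordinates and $S$ the auxiliary shadow variables, whose integration generates correlations of arbitrarily high order.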

Relevance: 10.00%

Abstract:

The present work concerns the development and improvement of linear-scaling algorithms for electronic-structure-based molecular dynamics. Molecular dynamics is a method for the computer simulation of the complex interplay of atoms and molecules at finite temperature. A decisive advantage of this method is its high accuracy and predictive power. However, the computational cost, which in general scales cubically with the number of atoms, prevents its application to large systems and long time scales. Starting from a new formalism based on the grand-canonical potential and a factorization of the density matrix, the diagonalization of the corresponding Hamiltonian matrix is avoided. The formalism exploits the fact that, due to localization, the Hamiltonian and density matrices are sparse, which reduces the computational cost so that it scales linearly with system size. To demonstrate its efficiency, the resulting algorithm is applied to a system of liquid methane subjected to extreme pressure (about 100 GPa) and extreme temperature (2000-8000 K). In the simulation, methane dissociates at temperatures above 4000 K and the formation of sp²-bonded polymeric carbon is observed. The simulations provide no indication of diamond formation and therefore have implications for existing planetary models of Neptune and Uranus. Since avoiding the diagonalization of the Hamiltonian matrix entails the inversion of matrices, the problem of computing an (inverse) p-th root of a given matrix is addressed in addition. This results in a new formula for symmetric positive definite matrices, which generalizes the Newton-Schulz iteration, Altman's formula for bounded non-singular operators, and Newton's method for computing roots of functions. It is proved that the order of convergence is always at least quadratic, and that adaptive tuning of a parameter q leads to better results in all cases.
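
As a minimal sketch of the kind of iteration being generalized, here is the classical Newton-Schulz iteration for the matrix inverse, i.e. the case p = 1, written in Python with NumPy (an illustration under stated assumptions, not the thesis's adaptive p-th-root formula):

import numpy as np

def newton_schulz_inverse(a, tol=1e-12, max_iter=100):
    """Approximate inv(a) for a symmetric positive definite matrix `a` via
    the Newton-Schulz iteration X_{k+1} = X_k (2 I - a X_k). The scaled
    starting guess keeps the spectral radius of (I - a X_0) below one,
    which yields the (at least) quadratic convergence mentioned above."""
    n = a.shape[0]
    eye = np.eye(n)
    x = eye / np.linalg.norm(a, 2)      # safe starting guess for SPD input
    for _ in range(max_iter):
        x = x @ (2.0 * eye - a @ x)
        if np.linalg.norm(eye - a @ x, "fro") < tol:
            break
    return x

# quick self-check on a random SPD matrix
rng = np.random.default_rng(0)
m = rng.standard_normal((4, 4))
spd = m @ m.T + 4.0 * np.eye(4)
print(np.allclose(newton_schulz_inverse(spd) @ spd, np.eye(4), atol=1e-8))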

Relevance: 10.00%

Abstract:

The primary goal of this work is the extension of an analytic electro-optical model. It is used to describe single-junction crystalline silicon solar cells and a silicon/perovskite tandem solar cell in the presence of light trapping, in order to calculate efficiency limits for such a device. In particular, our tandem system is composed of crystalline silicon and a perovskite-structure material, methylammonium lead triiodide (MALI). Perovskites are among the most attractive materials for photovoltaics thanks to their reduced cost and increasing efficiencies: solar cell efficiencies of devices using these materials increased from 3.8% in 2009 to a certified 20.1% in 2014, making this the fastest-advancing solar technology to date. Moreover, texturization increases the amount of light which can be absorbed in an active layer. Using Green's formalism it is possible to calculate the photogeneration rate of a single-layer structure with Lambertian light trapping analytically. In this work we go further: we study the optical coupling between the two cells in our tandem system in order to calculate the photogeneration rate of the whole structure. We also model the electronic part of the device by considering the perovskite top cell as an ideal diode and solving the drift-diffusion equation with appropriate boundary conditions for the silicon bottom cell. We have a four-terminal structure, so our tandem system is totally unconstrained. We then calculate the efficiency limits of our tandem including several recombination mechanisms, such as Auger, SRH and surface recombination. We also focus on the dependence of the results on the band gap of the perovskite, and we calculate the optimal band gap that maximizes the tandem efficiency. The whole work has been continuously supported by numerical validation of our analytic model against Silvaco ATLAS, which solves the drift-diffusion equations using a finite-element method. Our goal is to develop a simpler and cheaper, but accurate, model for studying such devices.
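
For reference, the electron half of the stationary drift-diffusion system solved for the silicon bottom cell reads as follows (standard semiconductor notation assumed; $G$ is the photogeneration rate and $R$ the net recombination rate):

$$ J_n = q \mu_n n E + q D_n \nabla n, \qquad \frac{1}{q} \nabla \cdot J_n = R - G, $$

with an analogous pair of equations for holes and with Poisson's equation closing the system.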

Relevance: 10.00%

Abstract:

The group analysed some syntactic and phonological phenomena that presuppose the existence of interrelated components within the lexicon, which motivate the assumption that there are sublexicons within the global lexicon of a speaker. This result is confirmed by experimental findings in neurolinguistics. Hungarian-speaking agrammatic aphasics were tested in several ways, the results showing that the sublexicon of closed-class lexical items provides a highly automated complex device for processing surface sentence structure. Analysing Hungarian ellipsis data from a semantic-syntactic perspective, the group established that the lexicon is best conceived of as split into at least two main sublexicons: the store of semantic-syntactic feature bundles and a separate store of sound forms. On this basis they proposed a format for representing open-class lexical items whose meanings are connected via certain semantic relations. They also proposed a new classification of verbs, to account for their contribution to the aspectual reading of the sentence depending on the referential type of the argument, and a new account of the syntactic and semantic behaviour of aspectual prefixes. The partitioned sets of lexical items are sublexicons on phonological grounds. These sublexicons differ in terms of phonotactic grammaticality. The degrees of phonotactic grammaticality are tied up with the problem of psychological reality: how many such degrees native speakers are actually sensitive to. The group developed a hierarchical construction network as an extension of the original General Inheritance Network formalism, and this framework was then used as a platform for the implementation of the grammar fragments.

Relevance: 10.00%

Abstract:

Mr. Kubon's project was inspired by the growing need for an automatic syntactic analyser (parser) of Czech that could be used in the syntactic processing of large amounts of text. Mr. Kubon notes that such a tool would be very useful, especially in the field of corpus linguistics, where creating a large-scale "tree bank" (a collection of syntactic representations of natural language sentences) is a very important step towards the investigation of the properties of a given language. The work involved in syntactically parsing a whole corpus in order to get a representative set of syntactic structures would be almost inconceivable without the help of some kind of robust (semi)automatic parser. The need for the automatic natural language parser to be robust increases with the size of the linguistic data in the corpus or in any other kind of text which is going to be parsed. Practical experience shows that apart from syntactically correct sentences, there are many sentences which contain a "real" grammatical error. These sentences may be corrected in small-scale texts, but not generally in a whole corpus. In order to complete the overall project, it was necessary to address a number of smaller problems. These were: 1. the adaptation of a suitable formalism able to describe the formal grammar of the system; 2. the definition of the structure of the system's dictionary containing all relevant lexico-syntactic information, and the development of a formal grammar able to robustly parse Czech sentences from the test suite; 3. filling the syntactic dictionary with sample data allowing the system to be tested and debugged during its development (about 1000 words); 4. the development of a set of sample sentences containing a reasonable amount of grammatical and ungrammatical phenomena, covering some of the most typical syntactic constructions used in Czech. Building the formal grammar (task 2) was the main task of the project. The grammar is of course far from complete (Mr. Kubon notes that it is debatable whether any formal grammar describing a natural language may ever be complete), but it covers the most frequent syntactic phenomena, allowing for the representation of the syntactic structure of simple clauses and also the structure of certain types of complex sentences. The stress was not so much on building a wide-coverage grammar as on the description and demonstration of a method. This method uses an approach similar to that of grammar-based grammar checking. The problem of reconstructing the "correct" form of the syntactic representation of a sentence is closely related to the problem of localisation and identification of syntactic errors: without precise knowledge of the nature and location of syntactic errors it is not possible to build a reliable estimation of a "correct" syntactic tree. The incremental way of building the grammar used in this project is also an important methodological issue. Experience from previous projects showed that building a grammar by creating one huge block of metarules is more complicated than the incremental method, which begins with the metarules covering the most common syntactic phenomena and adds less important ones later; this is an advantage especially from the point of view of testing and debugging the grammar. The sample syntactic dictionary containing lexico-syntactic information (task 3) now has slightly more than 1000 lexical items representing all classes of words.
During the creation of the dictionary it turned out that the task of assigning complete and correct lexico-syntactic information to verbs is a very complicated and time-consuming process which would itself be worth a separate project. The final task undertaken in this project was the development of a method allowing effective testing and debugging of the grammar during the process of its development. The consistency of new and modified rules of the formal grammar with the rules already existing is one of the crucial problems of every project aiming at the development of a large-scale formal grammar of a natural language. This method allows for the detection of any discrepancy or inconsistency of the grammar with respect to a test bed of sentences containing all syntactic phenomena covered by the grammar. This is not only the first robust parser of Czech, but also one of the first robust parsers of any Slavic language. Since Slavic languages share a wide range of common features, it is reasonable to claim that this system may serve as a pattern for similar systems in other languages. To transfer the system to another language it is only necessary to revise the grammar and to change the data contained in the dictionary (but not necessarily the structure of the primary lexico-syntactic information). The formalism and methods used in this project can be applied to other Slavic languages without substantial changes.

Relevance: 10.00%

Abstract:

Custom modes at a wavelength of 1064 nm were generated with a deformable mirror. The required surface deformations of the adaptive mirror were calculated with the Collins integral written in a matrix formalism. The appropriate size and shape of the actuators, as well as the required stroke, were determined to ensure that the surface of the controllable mirror matches the phase front of the custom modes. A semipassive bimorph adaptive mirror with five concentric ring-shaped actuators and one defocus actuator was manufactured and characterised. The surface deformation was modelled with the response functions of the adaptive mirror in terms of an expansion in Zernike polynomials. In the experiments the Nd:YAG laser crystal was quasi-CW pumped to avoid thermally induced distortions of the phase front. The adaptive mirror allows switching between a super-Gaussian mode, a doughnut mode, a Hermite-Gaussian fundamental beam, multi-mode operation and no oscillation in real time during laser operation.
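
For reference, the one-dimensional Collins integral that propagates a field $U_1$ through a paraxial system with ray-transfer matrix $(A\ B;\ C\ D)$ reads, in one common sign convention:

$$ U_2(x_2) = \frac{1}{\sqrt{i \lambda B}} \int U_1(x_1)\, \exp\!\left[ \frac{i \pi}{\lambda B} \left( A x_1^2 - 2 x_1 x_2 + D x_2^2 \right) \right] \mathrm{d}x_1. $$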

Relevance: 10.00%

Abstract:

The craze for faster and smaller electronic devices has never died down, and this has always kept researchers on their toes. Following Moore's law, which states that the number of transistors in a single chip doubles every 18 months, today "30 million transistors can fit into the head of a 1.5 mm diameter pin". But this miniaturization cannot continue indefinitely, due to the 'quantum leakage' limit on the thickness of the insulating layer between the gate electrode and the current-carrying channel. To bypass this limitation, scientists came up with the idea of using vastly available organic molecules as components in an electronic device. One of the primary challenges in this field was the ability to perform conductance measurements across single molecular junctions. Once that was achieved, the focus shifted to a deeper understanding of the underlying physics of electron transport across these molecular-scale devices. Our initial theoretical approach is based on the conventional Non-Equilibrium Green Function (NEGF) formulation, but the self-energy of the leads is modified to include a weighting factor that ensures negligible current in the absence of a molecular pathway, as observed in Mechanically Controlled Break Junction (MCBJ) experiments. The formulation is then made parameter-free by a more careful estimation of the self-energy of the leads. The calculated conductance turns out to be at least an order of magnitude larger than the experimental values, probably due to a strong chemical bond at the metal-molecule junction, unlike in the experiments. The focus then shifts to a comparative study of charge transport in molecular wires of different lengths within the same formalism. The molecular wires, composed of a series of organic molecules, are sandwiched between two gold electrodes to make a two-terminal device. The length of the wire is increased by sequentially increasing the number of molecules in the wire from 1 to 3. In the low-bias regime all the molecular devices are found to exhibit Ohmic behavior; however, the magnitude of the conductance decreases exponentially with increasing length of the wire. In the next study, the relative contributions of the 'in-phase' and 'out-of-phase' components of the total electronic current under an external bias are estimated for the wires of three different lengths. In the low-bias regime, the 'out-of-phase' contribution to the total current is minimal, and the 'in-phase' elastic tunneling of electrons is responsible for the net electronic current, irrespective of the length of the molecular spacer. In this regime the current-voltage characteristics follow Ohm's law, and the conductance of the wires is found to decrease exponentially with increasing length, in agreement with experimental results. However, beyond a certain 'offset' voltage the current increases non-linearly with bias, and the 'out-of-phase' tunneling of electrons reduces the net current substantially. Subsequently, the interaction of conduction electrons with vibrational modes as a function of external bias in the three different oligomers is studied, since such modes are one of the main sources of phase-breaking scattering. The number of vibrational modes that couple strongly with the frontier molecular orbitals is found to increase with the length of the spacer and with the external field. This is consistent with the longest wire under study having the lowest 'offset' voltage.
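
For context, the NEGF transport quantities referred to above are conventionally computed from the Landauer expression (standard formulas; $f_{L,R}$ are the Fermi functions of the leads, $G^r$ is the retarded Green function of the molecule, and $\Sigma^r_{L,R}$ are the lead self-energies whose weighting the text describes):

$$ I(V) = \frac{2e}{h} \int \mathrm{d}E\; T(E) \left[ f_L(E) - f_R(E) \right], \qquad T(E) = \mathrm{Tr}\!\left[ \Gamma_L G^r \Gamma_R (G^r)^{\dagger} \right], \qquad \Gamma_{L,R} = i \left( \Sigma^r_{L,R} - (\Sigma^r_{L,R})^{\dagger} \right). $$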

Relevance: 10.00%

Abstract:

Research and professional practices have the joint aim of restructuring preconceived notions of reality: both want to gain an understanding of social reality. Social workers use their professional competence in order to grasp the reality of their clients, while researchers' pursuit is to open up the secrets of the research material. Development and research are now so intertwined and inherent in almost all professional practices that making distinctions between practising, developing and researching has become difficult and in many respects irrelevant. Moving towards research-based practices is possible, and it is easily applied within the framework of the qualitative research approach (Dominelli 2005, 235; Humphries 2005, 280). Social work can be understood as acts and speech acts crisscrossing between social workers and clients. When trying to catch the verbal and non-verbal hints of each other's behaviour, the actors have to make a lot of interpretations in a more or less uncertain mental landscape. Our point of departure is the idea that the study of social work practices requires tools which effectively reveal the internal complexity of social work (see, for example, Adams & Dominelli & Payne 2005, 294-295). The boom of qualitative research methodologies in recent decades is associated with a profound rupture in the humanities known as the linguistic turn (Rorty 1967). The idea that language does not transparently mediate our perceptions and thoughts about reality, but on the contrary constitutes it, was new and even confusing to many social scientists. Nowadays we are used to reading research reports which apply different branches of discourse analysis or narratological or semiotic approaches. Although the differences between those orientations are sophisticated, they share the idea of the predominance of language. Despite the lively research work in today's social work and the research-minded atmosphere of social work practice, semiotics has rarely been applied in social work research. However, social work as a communicative practice concerns symbols, metaphors and all kinds of representative structures of language. Those items are at the core of semiotics, the science of signs: the science which examines how people use signs in their mutual interaction and in their endeavours to make sense of the world they live in, their semiosis. When thinking of the practice of social work and of doing research on it, a number of interpretational levels have to be passed before the research phase in social work is reached. First of all, social workers have to interpret their clients' situations, which will be recorded in the files. In some very rare cases those past situations will be reflected in discussions or perhaps interviews, or put under the scrutiny of some researcher in the future. Each and every new observation adds its own flavour to the mixture of meanings. Social workers combine their observations with previous experience and professional knowledge; furthermore, the situation at hand also influences their reactions. In addition, the interpretations made by social workers over the course of their daily working routines are never limited to being part of the personal process of the social worker, but are also always inherently cultural.
The work aiming at social change is defined by the presence of an initial situation, a specific goal, and the means and ways of achieving it, which are – or which should be – agreed upon by the social worker and the client in a situation which is unique and at the same time socially driven. Because of the inherently plot-based nature of social work, the practices related to it can be analysed as stories (see Dominelli 2005, 234), given, of course, that they are signifying and told by someone. The research of these practices concentrates on impressions, perceptions, judgements, accounts, documents, etc. All these multifarious elements can be scrutinized as textual corpora, though not as just any textual material: in semiotic analysis, the material studied is characterised as verbal or textual and loaded with meanings. We present a contribution to research methodology, semiotic analysis, which to our mind has at least implicit relevance to social work practices. Our examples of semiotic interpretation have been picked from our dissertations (Laine 2005; Saurama 2002). The data are official documents from the archives of a child welfare agency and transcriptions of interviews with shelter employees. These data can be defined as stories told by the social workers about what they have seen and felt. The official documents present only fragments, and they are often written in the passive voice. (Saurama 2002, 70.) The interviews carried out in the shelters can be described as stories whose narrators are more familiar and known. The material is characterised by the interaction between the interviewer and interviewee. The levels of the story and of the telling of the story become apparent when interviews or documents are examined with the use of semiotic tools. The roots of semiotic interpretation can be found in three different branches: American pragmatism, Saussurean linguistics in Paris, and the so-called formalism in Moscow and Tartu; in this paper, however, we engage with the so-called Parisian School of semiology, whose prominent figure was A. J. Greimas. The Finnish sociologists Pekka Sulkunen and Jukka Törrönen (1997a; 1997b) have further developed the ideas of Greimas in their studies on socio-semiotics, and we lean on their ideas. In semiotics, social reality is conceived as a relationship between subjects, observations and interpretations, and it is seen as mediated by natural language, which is the most common sign system among human beings (Mounin 1985; de Saussure 2006; Sebeok 1986). Signification is an act of associating an abstract content (signified) with some physical instrument (signifier). These two elements together form the basic concept, the "sign", which never constitutes any kind of meaning alone. The meaning is produced in a process of distinction in which signs are related to other signs, and in this chain of signs the meaning diverges from reality. (Greimas 1980, 28; Potter 1996, 70; de Saussure 2006, 46-48.) One interpretative tool is to think of speech as a surface under which deep structures – i.e. values and norms – exist (Greimas & Courtes 1982; Greimas 1987). To our mind, semiotics is very much about playing with two different levels of text: the syntagmatic surface, which is more or less faithful to the grammar, and the paradigmatic, semantic structure of values and norms hidden in the deeper meanings of interpretations.
Semiotic analysis deals precisely with the level of meaning which exists under the surface, but the only way to reach those meanings is through the textual level, the written or spoken text; that is why the tools are needed. In our studies we have used the semiotic square and actant analysis. The former is based on the distinction and categorisation of meanings, the latter on opening up the plotting of narratives in order to reach the value structures.

Relevance: 10.00%

Abstract:

Responses of many real-world problems can only be evaluated perturbed by noise. In order to make efficient optimization of these problems possible, intelligent optimization strategies that successfully cope with noisy evaluations are required. In this article, a comprehensive review of existing kriging-based methods for the optimization of noisy functions is provided. In total, ten methods for choosing the sequential samples are described using a unified formalism. They are compared on analytical benchmark problems on which the usual assumption of homoscedastic Gaussian noise made in the underlying models is met. Different problem configurations (noise level, maximum number of observations, initial number of observations) and setups (covariance functions, budget, initial sample size) are considered. It is found that the choices of the initial sample size and the covariance function are not critical. The choice of the method, however, can result in significant differences in performance. In particular, the three most intuitive criteria turn out to be poor alternatives. Although no criterion is found to be consistently more efficient than the others, two specialized methods appear more robust on average.
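
As an illustration of what one such sequential criterion looks like in practice, here is a minimal Python sketch (not the article's implementation): a Gaussian-process posterior under the homoscedastic Gaussian noise assumption mentioned above, combined with a plug-in expected-improvement criterion. The kernel, its hyperparameters and the toy objective are assumptions made for the example.

import numpy as np
from scipy.stats import norm

def rbf_kernel(a, b, length_scale=0.3, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D points."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise_var=0.09):
    """Posterior mean and variance of a zero-mean GP under homoscedastic
    Gaussian observation noise (the model assumption discussed above)."""
    k_tt = rbf_kernel(x_train, x_train) + noise_var * np.eye(len(x_train))
    k_ts = rbf_kernel(x_train, x_test)
    solve = np.linalg.solve(k_tt, np.column_stack([y_train, k_ts]))
    mean = k_ts.T @ solve[:, 0]
    cov = rbf_kernel(x_test, x_test) - k_ts.T @ solve[:, 1:]
    return mean, np.maximum(np.diag(cov), 1e-12)

def expected_improvement(mean, var, best):
    """Plug-in expected improvement for minimization, with the incumbent
    taken as the current minimum of the posterior mean."""
    sd = np.sqrt(var)
    z = (best - mean) / sd
    return (best - mean) * norm.cdf(z) + sd * norm.pdf(z)

def toy_objective(x):
    return np.sin(3.0 * x) + x ** 2

rng = np.random.default_rng(1)
x_obs = rng.uniform(-1.0, 1.0, 8)
y_obs = toy_objective(x_obs) + rng.normal(0.0, 0.3, 8)   # noisy evaluations
x_cand = np.linspace(-1.0, 1.0, 201)
mu, s2 = gp_posterior(x_obs, y_obs, x_cand)
ei = expected_improvement(mu, s2, best=mu.min())
print("next evaluation at x =", x_cand[np.argmax(ei)])

The plug-in incumbent (the minimum of the posterior mean rather than the best noisy observation) is one common way to adapt expected improvement to noisy evaluations; the article compares ten such criteria.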

Relevance: 10.00%

Abstract:

Using methods from effective field theory, we have recently developed a novel, systematic framework for the calculation of the cross sections for electroweak gauge-boson production at small and very small transverse momentum $q_T$, in which large logarithms of the scale ratio $m_V/q_T$ are resummed to all orders. This formalism is applied to the production of Higgs bosons in gluon fusion at the LHC. The production cross section receives logarithmically enhanced corrections from two sources: the running of the hard matching coefficient and the collinear factorization anomaly. The anomaly leads to the dynamical generation of a non-perturbative scale $q_* \sim m_H\, e^{-\mathrm{const}/\alpha_s(m_H)} \approx 8$ GeV, which protects the process from receiving large long-distance hadronic contributions. We present numerical predictions for the transverse-momentum spectrum of Higgs bosons produced at the LHC, finding that it is quite insensitive to hadronic effects.