971 results for Formalism
Abstract:
Research and professional practices share the aim of restructuring preconceived notions of reality: both seek to gain an understanding of social reality. Social workers use their professional competence to grasp the reality of their clients, while researchers seek to unlock the secrets of their research material. Development and research are now so intertwined and inherent in almost all professional practices that distinguishing between practising, developing and researching has become difficult and in many respects irrelevant. Moving towards research-based practices is possible and readily applied within the framework of the qualitative research approach (Dominelli 2005, 235; Humphries 2005, 280). Social work can be understood as acts and speech acts crisscrossing between social workers and clients. When trying to catch the verbal and non-verbal hints in each other's behaviour, the actors have to make many interpretations in a more or less uncertain mental landscape. Our point of departure is the idea that the study of social work practices requires tools which effectively reveal the internal complexity of social work (see, for example, Adams & Dominelli & Payne 2005, 294–295). The boom in qualitative research methodologies in recent decades is associated with a profound rupture in the humanities known as the linguistic turn (Rorty 1967). The idea that language does not transparently mediate our perceptions and thoughts about reality but, on the contrary, constitutes it was new and even confusing to many social scientists. Nowadays we are used to reading research reports that apply various branches of discourse analysis or narratological or semiotic approaches. Although the differences between these orientations are subtle, they share the idea of the predominance of language. Despite the lively research on today's social work and the research-minded atmosphere of social work practice, semiotics has rarely been applied in social work research. Yet social work as a communicative practice concerns symbols, metaphors and all kinds of representative structures of language. These items are at the core of semiotics: the science of signs, which examines how people use signs in their mutual interaction and in their endeavours to make sense of the world they live in, their semiosis. When thinking about the practice of social work and researching it, a number of interpretational levels must be passed before reaching the research phase. First of all, social workers have to interpret their clients' situations, and these interpretations are recorded in the files. In some very rare cases those past situations will be reflected upon in discussions or interviews, or put under the scrutiny of a researcher in the future. Each and every new observation adds its own flavour to the mixture of meanings. Social workers combine their observations with previous experience and professional knowledge, and the situation at hand also influences their reactions. In addition, the interpretations made by social workers over the course of their daily working routines are never limited to being part of the social worker's personal process, but are also always inherently cultural.
Work aiming at social change is defined by the presence of an initial situation, a specific goal, and the means and ways of achieving it, which are – or should be – agreed upon by the social worker and the client in a situation that is unique and at the same time socially driven. Because of the inherently plot-based nature of social work, the practices related to it can be analysed as stories (see Dominelli 2005, 234), given, of course, that they signify something and are told by someone. Research on these practices concentrates on impressions, perceptions, judgements, accounts, documents, etc. All these multifarious elements can be scrutinised as textual corpora, but not as just any textual material: in semiotic analysis, the material studied is characterised as verbal or textual and loaded with meanings. We present a contribution to research methodology, semiotic analysis, which to our mind has at least implicit relevance to social work practices. Our examples of semiotic interpretation are drawn from our dissertations (Laine 2005; Saurama 2002). The data are official documents from the archives of a child welfare agency and transcriptions of interviews with shelter employees. These data can be characterised as stories told by social workers about what they have seen and felt. The official documents present only fragments and are often written in the passive voice (Saurama 2002, 70). The interviews carried out in the shelters can be described as stories whose narrators are more familiar and better known; this material is characterised by the interaction between interviewer and interviewee. The levels of the story and of the telling of the story become apparent when interviews or documents are examined with semiotic tools. The roots of semiotic interpretation can be found in three different branches: American pragmatism, Saussurean linguistics in Paris, and the so-called formalism of Moscow and Tartu. In this paper, however, we engage with the Parisian school of semiology, whose prominent figure was A. J. Greimas. The Finnish sociologists Pekka Sulkunen and Jukka Törrönen (1997a; 1997b) have further developed Greimas's ideas in their studies on socio-semiotics, and we lean on their work. In semiotics, social reality is conceived as a relationship between subjects, observations and interpretations, mediated by natural language, the most common sign system among human beings (Mounin 1985; de Saussure 2006; Sebeok 1986). Signification is the act of associating an abstract concept (the signified) with some physical instrument (the signifier). These two elements together form the basic unit, the "sign", which never constitutes meaning on its own: meaning arises in a process of distinction in which signs are related to other signs. In this chain of signs, meaning diverges from reality (Greimas 1980, 28; Potter 1996, 70; de Saussure 2006, 46–48). One interpretative tool is to think of speech as a surface beneath which deep structures – i.e. values and norms – exist (Greimas & Courtes 1982; Greimas 1987). To our mind, semiotics is very much about playing with two different levels of a text: the syntagmatic surface, which is more or less faithful to the grammar, and the paradigmatic, semantic structure of values and norms hidden in the deeper layers of interpretation.
Semiotic analysis deals precisely with the level of meaning that exists beneath the surface, but the only way to reach those meanings is through the textual level, the written or spoken text; that is why tools are needed. In our studies, we have used the semiotic square and actant analysis. The former is based on distinctions and the categorisation of meanings, the latter on opening up the plot of narratives in order to reach the underlying value structures.
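Both tools have a fixed formal skeleton. The following Python fragment is a minimal sketch of that skeleton (ours, not the authors'; all names and example values are hypothetical): it encodes Greimas's semiotic square, a term, its contrary and their two contradictories, together with the six actant roles used in actant analysis.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SemioticSquare:
    """Greimas's semiotic square: a term, its contrary, and their negations."""
    s1: str        # a value found in the text, e.g. "care"
    s2: str        # its contrary
    not_s1: str    # contradictory of s1
    not_s2: str    # contradictory of s2

    def relations(self):
        return {
            "contrariety":   [(self.s1, self.s2)],
            "contradiction": [(self.s1, self.not_s1), (self.s2, self.not_s2)],
            "implication":   [(self.not_s2, self.s1), (self.not_s1, self.s2)],
        }

@dataclass
class ActantModel:
    """The six actant roles of Greimas's narrative grammar."""
    subject: str
    object: str
    sender: str
    receiver: str
    helper: str
    opponent: str

# Hypothetical reading of a child-welfare narrative, for illustration only:
square = SemioticSquare(s1="care", s2="neglect",
                        not_s1="non-care", not_s2="non-neglect")
actants = ActantModel(subject="social worker", object="child's wellbeing",
                      sender="child welfare law", receiver="the child",
                      helper="shelter staff", opponent="family crisis")
print(square.relations()["contrariety"])   # [('care', 'neglect')]
```

The square's relations (contrariety, contradiction, implication) are exactly the distinctions the analysis plays with, while the actant roles give the plot structure that actant analysis opens up.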
Abstract:
The responses of many real-world problems can only be evaluated subject to noise. To make an efficient optimization of such problems possible, intelligent optimization strategies that successfully cope with noisy evaluations are required. In this article, a comprehensive review of existing kriging-based methods for the optimization of noisy functions is provided. In total, ten methods for choosing the sequential samples are described using a unified formalism. They are compared on analytical benchmark problems in which the usual assumption of homoscedastic Gaussian noise made in the underlying models is met. Different problem configurations (noise level, maximum number of observations, initial number of observations) and setups (covariance functions, budget, initial sample size) are considered. It is found that the choices of the initial sample size and the covariance function are not critical. The choice of the method, however, can result in significant differences in performance. In particular, the three most intuitive criteria turn out to be poor alternatives. Although no criterion is found to be consistently more efficient than the others, two specialized methods appear more robust on average.
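As a concrete sketch of the common skeleton behind such methods, the following Python loop fits a kriging surrogate with homoscedastic Gaussian noise (scikit-learn's GaussianProcessRegressor with a WhiteKernel) and chooses sequential samples with a lower-confidence-bound rule. The objective and all parameter values are illustrative assumptions, and this simple criterion stands in for, rather than reproduces, the ten criteria the paper compares.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def f_noisy(x, sigma=0.1):
    """Toy objective observed through homoscedastic Gaussian noise."""
    return np.sin(3.0 * x) + x**2 - 0.7 * x + rng.normal(0.0, sigma, x.shape)

X = np.linspace(-1.0, 2.0, 5).reshape(-1, 1)          # initial design
y = f_noisy(X).ravel()
# WhiteKernel models the (homoscedastic) noise variance sigma^2 = 0.01
kernel = RBF(length_scale=0.5) + WhiteKernel(noise_level=0.01)
cand = np.linspace(-1.0, 2.0, 201).reshape(-1, 1)     # candidate grid

for _ in range(20):                                   # sequential sampling
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
    mu, sd = gp.predict(cand, return_std=True)
    x_next = cand[np.argmin(mu - 2.0 * sd)]           # lower confidence bound
    X = np.vstack([X, x_next.reshape(1, 1)])
    y = np.append(y, f_noisy(x_next.reshape(1, 1)).ravel())

print("predicted minimiser:", cand[np.argmin(gp.predict(cand))].item())
```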
Abstract:
Using methods from effective field theory, we have recently developed a novel, systematic framework for the calculation of the cross sections for electroweak gauge-boson production at small and very small transverse momentum $q_T$, in which large logarithms of the scale ratio $m_V/q_T$ are resummed to all orders. This formalism is applied to the production of Higgs bosons in gluon fusion at the LHC. The production cross section receives logarithmically enhanced corrections from two sources: the running of the hard matching coefficient and the collinear factorization anomaly. The anomaly leads to the dynamical generation of a non-perturbative scale $q_* \sim m_H\, e^{-\mathrm{const}/\alpha_s(m_H)} \approx 8$ GeV, which protects the process from receiving large long-distance hadronic contributions. We present numerical predictions for the transverse-momentum spectrum of Higgs bosons produced at the LHC, finding that it is quite insensitive to hadronic effects.
Abstract:
In $e^+e^-$ event-shape studies at LEP, two different measurements were sometimes performed: a "calorimetric" measurement using both charged and neutral particles, and a "track-based" measurement using just charged particles. Whereas calorimetric measurements are infrared and collinear safe, and therefore calculable in perturbative QCD, track-based measurements necessarily depend on nonperturbative hadronization effects. On the other hand, track-based measurements typically have smaller experimental uncertainties. In this paper, we present the first calculation of the event shape "track thrust" and compare with measurements performed at ALEPH and DELPHI. This calculation is made possible through the recently developed formalism of track functions, which are nonperturbative objects describing how energetic partons fragment into charged hadrons. By incorporating track functions into soft-collinear effective theory, we calculate the distribution for track thrust with next-to-leading logarithmic resummation. Due to a partial cancellation between nonperturbative parameters, the distributions for calorimeter thrust and track thrust are remarkably similar, a feature also seen in LEP data.
Abstract:
By using observables that depend only on charged particles (tracks), one can efficiently suppress pileup contamination at the LHC. Such measurements are not infrared safe in perturbation theory, so any calculation of track-based observables must account for hadronization effects. We develop a formalism to perform these calculations in QCD by matching partonic cross sections onto new nonperturbative objects called track functions, which absorb infrared divergences. The track function $T_i(x)$ describes the energy fraction $x$ of a hard parton $i$ that is converted into charged hadrons. We give a field-theoretic definition of the track function and derive its renormalization group evolution, which is in excellent agreement with the pythia parton shower. We then perform a next-to-leading-order calculation of the total energy fraction of charged particles in $e^+e^- \to$ hadrons. To demonstrate the implications of our framework for the LHC, we match the pythia parton shower onto a set of track functions to describe the track mass distribution in Higgs plus one jet events. We also show how to reduce smearing due to hadronization fluctuations by measuring dimensionless track-based ratios.
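To make the meaning of $T_i(x)$ concrete, here is a toy Monte Carlo (our illustration, not the paper's method): it splits a parton's energy recursively into "hadrons", tags each as charged with probability 2/3 (roughly the charged-pion fraction), and histograms the resulting charged energy fraction $x$. A realistic track function would instead be extracted from data or from a tuned shower such as pythia.

```python
import numpy as np

rng = np.random.default_rng(42)

def fragment(energy, cutoff=0.05):
    """Crudely split a parton's energy into 'hadron' energies (toy shower)."""
    if energy < cutoff:
        return [energy]
    z = rng.uniform(0.2, 0.8)          # assumed splitting fraction
    return fragment(z * energy) + fragment((1.0 - z) * energy)

def charged_energy_fraction():
    hadrons = np.array(fragment(1.0))
    charged = rng.random(hadrons.size) < 2.0 / 3.0   # ~2/3 of pions charged
    return hadrons[charged].sum()      # x, the track function's argument

x = np.array([charged_energy_fraction() for _ in range(10_000)])
hist, edges = np.histogram(x, bins=20, range=(0.0, 1.0), density=True)
print(f"toy <x> = {x.mean():.3f}; T(x) peaks near x = {edges[hist.argmax()]:.2f}")
```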
Abstract:
We revisit the SU(3)-invariant sector of N = 8 supergravity with dyonic SO(8) gaugings. Using the embedding tensor formalism, analytic expressions for the scalar potential, superpotential(s) and fermion mass terms are obtained as functions of the electromagnetic phase ω and the scalars in the theory. Equipped with these results, we explore non-supersymmetric AdS critical points at ω ≠ 0 whose perturbative stability could not be analysed before. The ω-dependent superpotential is then used to derive first-order flow equations and obtain new BPS domain-wall solutions at ω ≠ 0. We numerically study steepest-descent paths motivated by the (conjectured) RG flows.
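The steepest-descent construction itself is simple to sketch in one dimension. The toy below is ours (the actual N = 8 scalar potential is a far more involved, ω-dependent function of many fields): it integrates dφ/dt = -dV/dφ on an assumed double-well potential and follows the path into a critical point.

```python
# Toy one-field stand-in for a scalar potential; the actual N = 8 potential
# is a complicated, omega-dependent function of many scalars.
def V(phi):
    return (phi**2 - 1.0)**2

def dV(phi):
    return 4.0 * phi * (phi**2 - 1.0)

phi, dt = 0.3, 1e-3            # assumed starting point and step size
for _ in range(20_000):
    phi -= dt * dV(phi)        # Euler step along the steepest-descent path
print(f"path settles at phi = {phi:.4f}, V = {V(phi):.2e}")   # critical point
```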
Abstract:
The chemical equilibration of heavy quarks in a quark-gluon plasma proceeds via annihilation or pair creation. For temperatures T much below the heavy quark mass M, when kinetically equilibrated heavy quarks move very slowly, annihilation in the colour-singlet channel is enhanced because the quark and antiquark attract each other, which increases their probability of meeting, whereas the octet contribution is suppressed. This is the so-called Sommerfeld effect. It has not been taken into account in previous calculations of the chemical equilibration rate, which are therefore incomplete for $T \lesssim \alpha_s^2 M$. We compute the leading-order equilibration rate in this regime; there is a large enhancement in the singlet channel, but the rate is dominated by the octet channel, and therefore the total effect is small. In the course of the computation we demonstrate how operators that represent the annihilation of heavy quarks in non-relativistic QCD can be incorporated into the imaginary-time formalism.
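The competition described here can be illustrated with the standard Coulombic Sommerfeld factor $S(X) = X/(1 - e^{-X})$: enhancement for an attractive channel (X > 0), suppression for a repulsive one (X < 0). The colour coefficients below are the textbook Coulomb values (attraction $C_F = 4/3$ in the singlet, repulsion $1/(2N_c) = 1/6$ in the octet); the coupling and velocity are illustrative assumptions, not the paper's numbers.

```python
import numpy as np

def sommerfeld(X):
    """Coulombic Sommerfeld factor S(X) = X / (1 - exp(-X)):
    X > 0 for an attractive channel (enhancement), X < 0 for repulsion."""
    return X / (1.0 - np.exp(-X))

alpha_s = 0.3        # assumed coupling at the relevant scale
v = 0.1              # heavy-quark relative velocity, v ~ sqrt(T/M) << 1
X_singlet = np.pi * (4.0 / 3.0) * alpha_s / v    # attraction, C_F = 4/3
X_octet = -np.pi * (1.0 / 6.0) * alpha_s / v     # repulsion, 1/(2 N_c) = 1/6
print(f"singlet: S = {sommerfeld(X_singlet):.1f}")   # large enhancement
print(f"octet:   S = {sommerfeld(X_octet):.2f}")     # mild suppression
```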
Abstract:
Water-conducting faults and fractures were studied in the granite-hosted Äspö Hard Rock Laboratory (SE Sweden). On a scale of decametres and larger, steeply dipping faults dominate and contain a variety of different fault rocks (mylonites, cataclasites, fault gouges). On a smaller scale, somewhat less regular fracture patterns were found. Conceptual models of the fault and fracture geometries and of the properties of rock types adjacent to fractures were derived and used as input for the modelling of in situ dipole tracer tests that were conducted in the framework of the Tracer Retention Understanding Experiment (TRUE-1) on a scale of metres. After the identification of all relevant transport and retardation processes, blind predictions of the breakthroughs of conservative to moderately sorbing tracers were calculated and then compared with the experimental data. This paper provides the geological basis and model calibration, while the predictive and inverse modelling work is the topic of the companion paper [J. Contam. Hydrol. 61 (2003) 175]. The TRUE-1 experimental volume is highly fractured and contains the same types of fault rocks and alterations as on the decametric scale. The experimental flow field was modelled on the basis of a 2D-streamtube formalism with an underlying homogeneous and isotropic transmissivity field. Tracer transport was modelled using the dual-porosity medium approach, which is linked to the flow model by the flow porosity. Given the substantial pumping rates in the extraction borehole, the transport domain has a maximum width of only a few centimetres. It is concluded that neither the uncertainty regarding the length of individual fractures nor the detailed geometry of the network along the flowpath between injection and extraction boreholes is critical, because flow is largely one-dimensional, whether through a single fracture or a network. Process identification and model calibration were based on a single uranine breakthrough (test PDT3), which clearly showed that matrix diffusion had to be included in the model even over the short experimental time scales, as evidenced by the characteristic shape of the trailing edge of the breakthrough curve. Using the geological information and therefore considering limited matrix diffusion into a thin fault-gouge horizon resulted in a good fit to the experiment. On the other hand, fresh granite was found not to interact noticeably with the tracers over the time scales of the experiments. While fracture-filling gouge materials are very efficient in retarding tracers over short periods of time (hours to days), their volume is very small and, as time progresses, retardation will be dominated by altered wall rock and, finally, by fresh granite. In such rocks, both the porosity (and therefore the effective diffusion coefficient) and the sorption $K_d$ values are more than one order of magnitude smaller than in fault gouge, indicating that long-term retardation is expected to occur but to be less pronounced.
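The matrix-diffusion signature mentioned here can be sketched with the classic single-fracture solution with unlimited matrix diffusion, in which a step-input breakthrough follows $C/C_0 = \mathrm{erfc}(G/\sqrt{t - t_w})$; the lumped parameter G and the water travel time $t_w$ below are illustrative assumptions, not TRUE-1 values. The pulse response, its time derivative, then develops the well-known $t^{-3/2}$ trailing edge.

```python
import numpy as np
from scipy.special import erfc

def step_breakthrough(t, t_w=1.0, G=2.0):
    """Step-input breakthrough C/C0 = erfc(G / sqrt(t - t_w)) for t > t_w.
    Single fracture with unlimited matrix diffusion; G (h^0.5) lumps matrix
    porosity, effective diffusivity, sorption and aperture (assumed values)."""
    c = np.zeros_like(t)
    late = t > t_w
    c[late] = erfc(G / np.sqrt(t[late] - t_w))
    return c

t = np.logspace(0.0, 4.0, 400)      # hours
c = step_breakthrough(t)
pulse = np.gradient(c, t)           # pulse response = derivative of the step
# trailing edge: log-log slope tends to -3/2, the matrix-diffusion signature
tail = slice(-80, None)
slope = np.gradient(np.log(pulse[tail]), np.log(t[tail]))
print(f"late-time log-log slope ~ {slope.mean():.2f}")   # ~ -1.5
```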
Abstract:
Based on the results of detailed structural and petrological characterisation and on up-scaled laboratory values for sorption and diffusion, blind predictions were made for the STT1 dipole tracer test performed in the Swedish Äspö Hard Rock Laboratory. The tracers used were nonsorbing (uranine and tritiated water), weakly sorbing ($^{22}$Na$^+$, $^{85}$Sr$^{2+}$, $^{47}$Ca$^{2+}$) and more strongly sorbing ($^{86}$Rb$^+$, $^{133}$Ba$^{2+}$, $^{137}$Cs$^+$). Our model consists of two parts: (1) a flow part based on a 2D-streamtube formalism accounting for the natural background flow field, with an underlying homogeneous and isotropic transmissivity field, and (2) a transport part in terms of the dual-porosity medium approach, linked to the flow part by the flow porosity. The model was calibrated using the data from one single uranine breakthrough (PDT3). The study clearly showed that matrix diffusion into a highly porous material, fault gouge, had to be included in our model, as evidenced by the characteristic shape of the breakthrough curve and in line with geological observations. After the disclosure of the measurements, it turned out that, in spite of the simplicity of our model, the prediction for the nonsorbing and weakly sorbing tracers was fairly good. The blind prediction for the more strongly sorbing tracers was in general less accurate. The reason for the good predictions is deemed to be the choice of a model structure strongly based on geological observation. The breakthrough curves were inversely modelled to determine in situ values for the transport parameters and to draw conclusions about the model structure applied. For good fits, only one additional fracture family in contact with cataclasite had to be taken into account, but no new transport mechanisms had to be invoked. The in situ values of the effective diffusion coefficient for fault gouge are a factor of 2–15 larger than the laboratory data; for cataclasite, the in situ values are comparable to the laboratory data. The extracted $K_d$ values for the weakly sorbing tracers are larger than Swedish laboratory data by a factor of 25–60, but agree within a factor of 3–5 for the more strongly sorbing nuclides. The reason for the inconsistency concerning the $K_d$ values is the use of fresh granite in the laboratory studies, whereas tracers in the field experiments interact only with fracture fault gouge and, to a lesser extent, with cataclasite, both of which are mineralogically very different (e.g. clay-bearing) from the intact wall rock.
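Retardation in a porous medium is commonly summarised by the standard relation $R = 1 + (\rho_b/\phi)\,K_d$. As a rough, purely illustrative calculation (assumed bulk density and porosity, hypothetical laboratory $K_d$; none of these are the paper's numbers), the sketch below shows how the factor-of-25–60 field/lab $K_d$ discrepancy reported here would propagate into retardation factors.

```python
def retardation(K_d, rho_b=2000.0, phi=0.2):
    """R = 1 + (rho_b / phi) * K_d, with rho_b in kg/m^3 and K_d in m^3/kg.
    rho_b and phi are assumed, fault-gouge-like values, not site data."""
    return 1.0 + (rho_b / phi) * K_d

K_d_lab = 1.0e-4                    # hypothetical laboratory K_d [m^3/kg]
for factor in (1, 25, 60):          # field/lab K_d ratios reported above
    print(f"K_d x {factor:>2}: R = {retardation(factor * K_d_lab):.0f}")
```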
Abstract:
The large difference between the Planck scale and the electroweak scale, known as the hierarchy problem, is addressed in certain models through the postulate of extra spatial dimensions. A search for evidence of extra spatial dimensions in the diphoton channel has been performed using the full set of proton-proton collisions at $\sqrt{s} = 7$ TeV recorded in 2011 with the ATLAS detector at the CERN Large Hadron Collider. This dataset corresponds to an integrated luminosity of 4.9 fb$^{-1}$. The diphoton invariant mass spectrum is observed to be in good agreement with the Standard Model expectation. In the context of the model proposed by Arkani-Hamed, Dimopoulos and Dvali, 95% confidence level lower limits between 2.52 and 3.92 TeV are set on the ultraviolet cutoff scale $M_S$, depending on the number of extra dimensions and the theoretical formalism used. In the context of the Randall-Sundrum model, a lower limit of 2.06 (1.00) TeV at 95% confidence level is set on the mass of the lightest graviton for couplings of $k/\overline{M}_{\mathrm{Pl}} = 0.1$ (0.01). Combining with the ATLAS dilepton searches based on the 2011 data, the 95% confidence level lower limit on the Randall-Sundrum graviton mass is further tightened to 2.23 (1.03) TeV for $k/\overline{M}_{\mathrm{Pl}} = 0.1$ (0.01).
Abstract:
Within the context of exoplanetary atmospheres, we present a comprehensive linear analysis of forced, damped, magnetized shallow-water systems, exploring the effects of dimensionality, geometry (Cartesian, pseudo-spherical, and spherical), rotation, magnetic tension, and hydrodynamic and magnetic sources of friction. Across a broad range of conditions, we find that the key governing equation for atmospheres is identical to that of the quantum harmonic oscillator, even when forcing (stellar irradiation), sources of friction (molecular viscosity, Rayleigh drag, and magnetic drag), and magnetic tension are included. The global atmospheric structure is largely controlled by a single key parameter that involves the Rossby and Prandtl numbers. This near-universality breaks down when either molecular viscosity or magnetic drag acts non-uniformly across latitude or a poloidal magnetic field is present, suggesting that these effects will introduce qualitative changes to the familiar chevron-shaped feature seen in simulations of atmospheric circulation. We also find that hydrodynamic and magnetic sources of friction have dissimilar phase signatures and affect the flow in fundamentally different ways, implying that using Rayleigh drag to mimic magnetic drag is inaccurate. We exhaustively lay out the theoretical formalism (dispersion relations, governing equations, and time-dependent wave solutions) for a broad suite of models. In all situations, we derive the steady state of an atmosphere, which is relevant to interpreting infrared phase and eclipse maps of exoplanetary atmospheres. We elucidate a pinching effect that confines the atmospheric structure to the vicinity of the equator. Our suite of analytical models may be used to develop physical intuition and as a reference point for three-dimensional magnetohydrodynamic simulations of atmospheric circulation.
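The atmosphere/oscillator correspondence echoes the classic equatorial shallow-water result that meridional wave structures are quantum-harmonic-oscillator eigenfunctions, Hermite polynomials times a Gaussian (Matsuno 1966). The Python sketch below (our illustration; the confinement length L and its values are assumptions) evaluates the ground-state structure and shows the pinching qualitatively: a smaller L confines the structure closer to the equator.

```python
import numpy as np
from numpy.polynomial.hermite import hermval

def psi(n, y, L=1.0):
    """Oscillator eigenfunction H_n(y/L) exp(-(y/L)^2 / 2): the meridional
    structure of linear equatorial shallow-water waves. L is an assumed
    equatorial confinement length."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    return hermval(y / L, coeffs) * np.exp(-0.5 * (y / L)**2)

y = np.linspace(-5.0, 5.0, 2001)
for L in (1.0, 0.5):                 # smaller L: structure pinched equatorward
    p = psi(0, y, L)
    msq = np.sum(y**2 * p**2) / np.sum(p**2)   # mean-square extent (= L^2 / 2)
    print(f"L = {L}: <y^2> = {msq:.3f}")
```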