973 results for Galilean covariant formalism


Relevance:

10.00%

Publisher:

Abstract:

Social anxiety is a common psychological complaint that can have a significant and long-term negative impact on a child's social and cognitive development. In the current study, the relationship between sport participation and social anxiety symptoms was investigated. Swiss primary school children (N = 201), their parents, and teachers provided information about the children's social anxiety symptoms, classroom behavior, and sport involvement. Gender differences were observed in social anxiety scores, with girls tending to report more social anxiety symptoms, and in sport activity, with boys showing greater sport involvement. MANCOVAs with gender as a covariate showed no differences in social anxiety symptoms between children involved in an extracurricular sport and those not engaged in sport participation. Nevertheless, children engaged in team sports displayed fewer physical social anxiety symptoms than children involved in individual sports.

Relevance:

10.00%

Publisher:

Abstract:

We evaluated the near visual acuity of 40 dentists and its improvement with different magnification devices. Acuity was tested with miniaturized E-optotype tests on a negatoscope under the following conditions: 1. natural visual acuity, 300 mm; 2. single-lens loupe, 2×, 250 mm; 3. Galilean loupe, 2.5×, 380 mm; and 4. Keplerian loupe, 4.3×, 400 mm. In part 1, the influence of the magnification devices was investigated for all dentists. The Keplerian loupe yielded the highest visual acuity (4.64), followed by the Galilean loupe (2.43), the single-lens loupe (1.42), and natural visual acuity (1.19). For part 2, the dentists were classified according to their age (

Relevance:

10.00%

Publisher:

Abstract:

The group analysed some syntactic and phonological phenomena that presuppose the existence of interrelated components within the lexicon, which motivate the assumption that there are sublexicons within the global lexicon of a speaker. This result is confirmed by experimental findings in neurolinguistics. Hungarian-speaking agrammatic aphasics were tested in several ways, and the results show that the sublexicon of closed-class lexical items provides a highly automated complex device for processing surface sentence structure. Analysing Hungarian ellipsis data from a semantic-syntactic perspective, the group established that the lexicon is best conceived of as being split into at least two main sublexicons: the store of semantic-syntactic feature bundles and a separate store of sound forms. On this basis they proposed a format for representing open-class lexical items whose meanings are connected via certain semantic relations. They also proposed a new classification of verbs to account for their contribution to the aspectual reading of the sentence depending on the referential type of the argument, and a new account of the syntactic and semantic behaviour of aspectual prefixes. The partitioned sets of lexical items are sublexicons on phonological grounds: these sublexicons differ in terms of phonotactic grammaticality. The degrees of phonotactic grammaticality are tied up with the question of psychological reality, that is, how many such degrees native speakers are sensitive to. The group developed a hierarchical construction network as an extension of the original General Inheritance Network formalism, and this framework was then used as a platform for the implementation of the grammar fragments.

Relevance:

10.00%

Publisher:

Abstract:

Mr. Kubon's project was inspired by the growing need for an automatic syntactic analyser (parser) of Czech which could be used in the syntactic processing of large amounts of text. Mr. Kubon notes that such a tool would be very useful, especially in the field of corpus linguistics, where creating a large-scale "tree bank" (a collection of syntactic representations of natural language sentences) is a very important step towards the investigation of the properties of a given language. The work involved in syntactically parsing a whole corpus in order to get a representative set of syntactic structures would be almost inconceivable without the help of some kind of robust (semi)automatic parser. The need for the automatic natural language parser to be robust increases with the size of the linguistic data in the corpus or in any other kind of text which is going to be parsed. Practical experience shows that, apart from syntactically correct sentences, there are many sentences which contain a "real" grammatical error. These sentences may be corrected in small-scale texts, but generally not in a whole corpus. In order to complete the overall project, it was necessary to address a number of smaller problems. These were: 1. the adaptation of a suitable formalism able to describe the formal grammar of the system; 2. the definition of the structure of the system's dictionary containing all relevant lexico-syntactic information; 3. the development of a formal grammar able to robustly parse Czech sentences from the test suite; 4. filling the syntactic dictionary with sample data allowing the system to be tested and debugged during its development (about 1000 words); 5. the development of a set of sample sentences containing a reasonable amount of grammatical and ungrammatical phenomena covering some of the most typical syntactic constructions used in Czech. Number 3, building a formal grammar, was the main task of the project. The grammar is of course far from complete (Mr. Kubon notes that it is debatable whether any formal grammar describing a natural language can ever be complete), but it covers the most frequent syntactic phenomena, allowing for the representation of the syntactic structure of simple clauses and also the structure of certain types of complex sentences. The stress was not so much on building a wide-coverage grammar as on the description and demonstration of a method. This method uses an approach similar to that of grammar-based grammar checking. The problem of reconstructing the "correct" form of the syntactic representation of a sentence is closely related to the problem of localising and identifying syntactic errors: without precise knowledge of the nature and location of syntactic errors it is not possible to build a reliable estimate of the "correct" syntactic tree. The incremental way of building the grammar used in this project is also an important methodological issue. Experience from previous projects showed that building a grammar by creating a huge block of metarules is more complicated than the incremental method, which begins with metarules covering the most common syntactic phenomena and adds less important ones later; this is especially advantageous for testing and debugging the grammar. The sample of the syntactic dictionary containing lexico-syntactic information (task 4) now has slightly more than 1000 lexical items representing all classes of words.
During the creation of the dictionary it turned out that the task of assigning complete and correct lexico-syntactic information to verbs is a very complicated and time-consuming process which would itself be worth a separate project. The final task undertaken in this project was the development of a method allowing effective testing and debugging of the grammar during its development. The consistency of new and modified rules of the formal grammar with the existing rules is one of the crucial problems of every project aiming at the development of a large-scale formal grammar of a natural language. This method allows for the detection of any discrepancy or inconsistency of the grammar with respect to a test bed of sentences containing all syntactic phenomena covered by the grammar. This is not only the first robust parser of Czech, but also one of the first robust parsers of a Slavic language. Since Slavic languages display a wide range of common features, it is reasonable to claim that this system may serve as a pattern for similar systems in other languages. To transfer the system to another language it is only necessary to revise the grammar and to change the data contained in the dictionary (but not necessarily the structure of the primary lexico-syntactic information). The formalism and methods used in this project can be applied to other Slavic languages without substantial changes.
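A minimal sketch of the test-bed consistency check described above, under assumed interfaces: after each grammar change, every sentence in the test bed is re-parsed and any analysis that no longer matches the stored reference is reported. The parse() stand-in and the record format are illustrative assumptions, not the project's actual formalism or data structures.

```python
# Hypothetical sketch of a test-bed consistency check for an evolving grammar.
# parse() is a stand-in: a real implementation would return a syntactic
# representation (tree) produced by the robust parser, not a flat token list.

def parse(sentence):
    """Stand-in parser used only to make the sketch runnable."""
    return sentence.split()

def check_test_bed(test_bed):
    """Return the records whose current analysis differs from the stored one."""
    discrepancies = []
    for record in test_bed:
        got = parse(record["sentence"])
        if got != record["expected"]:
            discrepancies.append({**record, "got": got})
    return discrepancies

# Tiny illustrative test bed; real entries would cover every grammatical and
# ungrammatical phenomenon handled by the grammar.
test_bed = [
    {"sentence": "example sentence one", "expected": ["example", "sentence", "one"]},
    {"sentence": "example sentence two", "expected": ["example", "sentence"]},  # stale reference
]

for d in check_test_bed(test_bed):
    print("discrepancy:", d["sentence"], "->", d["got"], "expected", d["expected"])
```

An empty report after a grammar change would indicate that the new or modified rules are still consistent with every phenomenon covered by the test bed.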

Relevance:

10.00%

Publisher:

Abstract:

Custom modes at a wavelength of 1064 nm were generated with a deformable mirror. The required surface deformations of the adaptive mirror were calculated with the Collins integral written in a matrix formalism. The appropriate size and shape of the actuators, as well as the required stroke, were determined to ensure that the surface of the controllable mirror matches the phase front of the custom modes. A semipassive bimorph adaptive mirror with five concentric ring-shaped actuators and one defocus actuator was manufactured and characterised. The surface deformation was modelled with the response functions of the adaptive mirror in terms of an expansion in Zernike polynomials. In the experiments the Nd:YAG laser crystal was quasi-CW pumped to avoid thermally induced distortions of the phase front. The adaptive mirror allows switching, in real time during laser operation, between a super-Gaussian mode, a doughnut mode, the Hermite-Gaussian fundamental beam, multimode operation, or no oscillation.
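As a minimal sketch of the surface-fitting step mentioned above, assume the target phase front and each actuator's response function are represented by their Zernike coefficients; the actuator strokes then follow from a least-squares fit. The array sizes and random numbers below are illustrative placeholders, not the measured response functions of the mirror described in the abstract.

```python
# Illustrative least-squares fit of actuator strokes to a target surface,
# with both the target and the actuator response functions expressed as
# Zernike coefficient vectors. All numbers are placeholders.

import numpy as np

n_zernike = 15   # number of Zernike terms retained in the expansion
n_actuators = 6  # e.g. five ring-shaped actuators plus one defocus actuator

rng = np.random.default_rng(0)

# Columns: Zernike coefficients of each actuator's (measured) response function.
response = rng.normal(size=(n_zernike, n_actuators))

# Zernike coefficients of the desired surface deformation, e.g. the phase
# front of a custom mode obtained from a Collins-integral calculation.
target = rng.normal(size=n_zernike)

# Actuator strokes that best reproduce the target surface (least squares).
strokes, *_ = np.linalg.lstsq(response, target, rcond=None)

residual = target - response @ strokes
print("actuator strokes:", np.round(strokes, 3))
print("rms fit residual:", np.linalg.norm(residual) / np.sqrt(n_zernike))
```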

Relevance:

10.00%

Publisher:

Abstract:

Objective: The objective of this study was to compare the effects of a commercial CPC (cetylpyridinium chloride) mouthrinse containing 0.07% CPC (Crest® ProHealth Rinse) with those provided by a commercial essential oil mouthrinse (Listerine® Antiseptic) on dental plaque accumulation and prevention of gingivitis in an unsupervised 6-month clinical study. Methods: This was a double-blind, 6-month, parallel-group, positive-controlled study involving 128 subjects who were balanced and randomly assigned to either the positive control (essential oil) or the experimental (CPC) mouthrinse treatment group. The CPC mouthrinse passed the performance assays proposed by the FDA for an OTC CPC mouthrinse. At baseline, subjects received a dental prophylaxis and began unsupervised rinsing twice daily with 20 ml of their assigned mouthrinse for 30 seconds after brushing their teeth for 1 min. Subjects were assessed for gingivitis and gingival bleeding by the Gingival Index (GI) of Löe and Silness and for plaque by the Silness and Löe Plaque Index (PI) at baseline and after 3 and 6 months of product use. Oral soft tissue health was also assessed. Microbiological samples were taken for community profiling by the DNA-DNA checkerboard method. Results: After 3 and 6 months of use there was no significant difference (p = 0.05) between the CPC and essential oil mouthrinse treatment groups for overall gingivitis status, gingival bleeding, or plaque. At 6 months the covariate (baseline)-adjusted mean GI scores and numbers of bleeding sites for the CPC and essential oil mouthrinses were 0.52 and 0.53, and 5.5 and 6.3, respectively. Both mouthrinses were well tolerated by the subjects. Microbiological community profiles were similar for the two treatment groups. Conclusion: This study shows that the 0.07% CPC mouthrinse can provide plaque and gingivitis benefits similar to those provided by an essential oil mouthrinse over a 6-month period.

Relevance:

10.00%

Publisher:

Abstract:

OBJECTIVE: To compare the effects of an experimental mouth rinse containing 0.07% cetylpyridinium chloride (CPC) (Crest Pro-Health) with those provided by a commercially available mouth rinse containing essential oils (EOs) (Listerine) on dental plaque accumulation and prevention of gingivitis in an unsupervised 6-month randomized clinical trial. MATERIAL AND METHODS: This double-blind, 6-month, parallel-group, positively controlled study involved 151 subjects balanced and randomly assigned to either the positive control (EO) or the experimental (CPC) mouth rinse treatment group. At baseline, subjects received a dental prophylaxis procedure and began unsupervised rinsing twice a day with 20 ml of their assigned mouthwash for 30 s after brushing their teeth for 1 min. Subjects were assessed for gingivitis and gingival bleeding by the Gingival Index (GI) of Löe & Silness (1963) and for plaque by the Silness & Löe (1964) Plaque Index at baseline and after 3 and 6 months of rinsing. At 3 and 6 months, oral soft tissue health was assessed. Microbiological samples were also taken for community profiling by the DNA checkerboard method. RESULTS: After 3 and 6 months of rinsing, there were no significant differences (p = 0.05) between the experimental (CPC) and the positive control mouth rinse treatment groups for overall gingivitis status, gingival bleeding, or plaque accumulation. At 6 months, the covariate (baseline)-adjusted mean GI scores and bleeding site percentages for the CPC and the EO rinses were 0.52 and 0.53, and 8.7 and 9.3, respectively. Both mouth rinses were well tolerated by the subjects. Microbiological community profiles were similar for the two treatment groups. Statistically, a significantly greater reduction in bleeding sites was observed for the CPC rinse than for the EO rinse. CONCLUSION: The essential findings of this study indicated that there was no statistically significant difference in the anti-plaque and anti-gingivitis benefits between the experimental CPC mouth rinse and the positive control EO mouth rinse over a 6-month period.

Relevance:

10.00%

Publisher:

Abstract:

The craze for faster and smaller electronic devices has never died down, and this has always kept researchers on their toes. Following Moore's law, which states that the number of transistors on a single chip doubles every 18 months, today "30 million transistors can fit into the head of a 1.5 mm diameter pin". But this miniaturization cannot continue indefinitely because of the 'quantum leakage' limit on the thickness of the insulating layer between the gate electrode and the current-carrying channel. To bypass this limitation, scientists came up with the idea of using widely available organic molecules as components in an electronic device. One of the primary challenges in this field was the ability to perform conductance measurements across single-molecule junctions. Once that was achieved, the focus shifted to a deeper understanding of the underlying physics of electron transport across these molecular-scale devices. Our initial theoretical approach is based on the conventional Non-Equilibrium Green Function (NEGF) formulation, but the self-energy of the leads is modified to include a weighting factor that ensures negligible current in the absence of a molecular pathway, as observed in a Mechanically Controlled Break Junction (MCBJ) experiment. The formulation is then made parameter-free by a more careful estimation of the self-energy of the leads. The calculated conductance turns out to be at least an order of magnitude larger than the experimental values, which is probably due to a strong chemical bond at the metal-molecule junction, unlike in the experiments. The focus is then shifted to a comparative study of charge transport in molecular wires of different lengths within the same formalism. The molecular wires, composed of a series of organic molecules, are sandwiched between two gold electrodes to make a two-terminal device. The length of the wire is increased by sequentially increasing the number of molecules in the wire from 1 to 3. In the low-bias regime all the molecular devices are found to exhibit Ohmic behavior; however, the magnitude of the conductance decreases exponentially with increasing wire length. In the next study, the relative contributions of the 'in-phase' and 'out-of-phase' components of the total electronic current under an external bias are estimated for the wires of three different lengths. In the low-bias regime, the 'out-of-phase' contribution to the total current is minimal and the 'in-phase' elastic tunneling of electrons is responsible for the net electronic current, irrespective of the length of the molecular spacer. In this regime, the current-voltage characteristics follow Ohm's law and the conductance of the wires decreases exponentially with increasing length, in agreement with experimental results. However, beyond a certain 'offset' voltage, the current increases non-linearly with bias and the 'out-of-phase' tunneling of electrons reduces the net current substantially. Subsequently, the interaction of conduction electrons with the vibrational modes as a function of external bias is studied in the three different oligomers, since such interactions are one of the main sources of phase-breaking scattering. The number of vibrational modes that couple strongly with the frontier molecular orbitals is found to increase with the length of the spacer and with the external field. This is consistent with the lowest 'offset' voltage being observed for the longest wire under study.
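For orientation, the sketch below computes a toy Landauer/NEGF transmission T(E) = Tr[Γ_L G Γ_R G†] through a tight-binding wire of N sites coupled to two wide-band leads. The on-site energy, hopping, and lead broadening are illustrative values, and the weighting factor applied to the lead self-energies in the work summarised above is not reproduced here; longer wires show smaller off-resonant transmission, qualitatively mimicking the reported exponential length dependence.

```python
# Toy Landauer/NEGF transmission for a linear "molecular wire" of N sites
# between two leads in the wide-band limit. Parameters are illustrative.

import numpy as np

def transmission(energy, n_sites=3, eps=2.0, t=-1.0, gamma=0.5):
    """T(E) = Tr[Gamma_L G Gamma_R G^dagger] for an N-site tight-binding chain."""
    # Tight-binding Hamiltonian of the wire (on-site energy eps, hopping t)
    h = eps * np.eye(n_sites) + t * (np.eye(n_sites, k=1) + np.eye(n_sites, k=-1))
    # Wide-band lead self-energies attached to the two terminal sites
    sigma_l = np.zeros((n_sites, n_sites), complex)
    sigma_r = np.zeros((n_sites, n_sites), complex)
    sigma_l[0, 0] = -0.5j * gamma
    sigma_r[-1, -1] = -0.5j * gamma
    # Retarded Green function and broadening matrices
    g = np.linalg.inv(energy * np.eye(n_sites) - h - sigma_l - sigma_r)
    gamma_l = 1j * (sigma_l - sigma_l.conj().T)
    gamma_r = 1j * (sigma_r - sigma_r.conj().T)
    return np.real(np.trace(gamma_l @ g @ gamma_r @ g.conj().T))

# Off-resonant transmission falls with wire length (1 to 3 "molecules").
for n in (1, 2, 3):
    print(n, "site(s):", round(float(transmission(0.0, n_sites=n)), 4))
```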

Relevance:

10.00%

Publisher:

Abstract:

Research and professional practices share the aim of re-structuring preconceived notions of reality; both want to gain an understanding of social reality. Social workers use their professional competence to grasp the reality of their clients, while researchers seek to unlock the secrets of their research material. Development and research are now so intertwined and inherent in almost all professional practices that making distinctions between practising, developing and researching has become difficult and in many respects irrelevant. Moving towards research-based practices is possible, and it is easily applied within the framework of the qualitative research approach (Dominelli 2005, 235; Humphries 2005, 280). Social work can be understood as acts and speech acts criss-crossing between social workers and clients. When trying to catch the verbal and non-verbal hints of each other's behaviour, the actors have to do a great deal of interpretation in a more or less uncertain mental landscape. Our point of departure is the idea that the study of social work practices requires tools which effectively reveal the internal complexity of social work (see, for example, Adams & Dominelli & Payne 2005, 294-295). The boom in qualitative research methodologies in recent decades is associated with a profound rupture in the humanities known as the linguistic turn (Rorty 1967). The idea that language does not transparently mediate our perceptions and thoughts about reality but, on the contrary, constitutes it was new and even confusing to many social scientists. Nowadays we are used to reading research reports which apply various branches of discourse analysis or narratological or semiotic approaches. Although the differences between those orientations are subtle, they share the idea of the predominance of language. Despite the lively research activity in today's social work and the research-minded atmosphere of social work practice, semiotics has rarely been applied in social work research. Yet social work as a communicative practice concerns symbols, metaphors and all kinds of representative structures of language. Those items are at the core of semiotics, the science of signs, which examines how people use signs in their mutual interaction and in their endeavours to make sense of the world they live in, that is, their semiosis. When thinking of the practice of social work and of researching it, a number of interpretational levels have to be passed before reaching the research phase. First of all, social workers have to interpret their clients' situations, which are recorded in the case files. In some very rare cases those past situations will be reflected in discussions or perhaps interviews, or put under the scrutiny of some researcher in the future. Each and every new observation adds its own flavour to the mixture of meanings. Social workers combine their observations with previous experience and professional knowledge; furthermore, the situation at hand also influences their reactions. In addition, the interpretations made by social workers in the course of their daily working routines are never limited to being part of the social worker's personal process, but are also always inherently cultural.
Work aiming at social change is defined by the presence of an initial situation, a specific goal, and the means and ways of achieving it, which are, or should be, agreed upon by the social worker and the client in a situation which is unique and at the same time socially driven. Because of the inherently plot-based nature of social work, the practices related to it can be analysed as stories (see Dominelli 2005, 234), given, of course, that they are signifying and told by someone. Research on these practices concentrates on impressions, perceptions, judgements, accounts, documents, etc. All these multifarious elements can be scrutinized as textual corpora, but not as just any textual material: in semiotic analysis, the material studied is characterised as verbal or textual and loaded with meanings. We present a contribution to research methodology, semiotic analysis, which to our mind has at least implicit relevance to social work practices. Our examples of semiotic interpretation have been taken from our dissertations (Laine 2005; Saurama 2002). The data are official documents from the archives of a child welfare agency and transcriptions of interviews with shelter employees. These data can be defined as stories told by the social workers about what they have seen and felt. The official documents present only fragments, and they are often written in the passive voice (Saurama 2002, 70). The interviews carried out in the shelters can be described as stories where the narrators are more familiar and known; the material is characterised by the interaction between interviewer and interviewee. The levels of the story and of the telling of the story become apparent when interviews or documents are examined with semiotic tools. The roots of semiotic interpretation can be found in three different branches: American pragmatism, Saussurean linguistics in Paris, and the so-called formalism of Moscow and Tartu; in this paper, however, we engage with the so-called Parisian School of semiology, whose prominent figure was A. J. Greimas. The Finnish sociologists Pekka Sulkunen and Jukka Törrönen (1997a; 1997b) have further developed the ideas of Greimas in their studies on socio-semiotics, and we lean on their ideas. In semiotics, social reality is conceived as a relationship between subjects, observations, and interpretations, and it is seen as mediated by natural language, which is the most common sign system among human beings (Mounin 1985; de Saussure 2006; Sebeok 1986). Signification is the act of associating an abstract concept (the signified) with some physical instrument (the signifier). These two elements together form the basic unit, the "sign", which never constitutes any kind of meaning alone. Meaning is produced in a process of distinction in which signs are related to other signs, and in this chain of signs the meaning becomes detached from reality (Greimas 1980, 28; Potter 1996, 70; de Saussure 2006, 46-48). One interpretative tool is to think of speech as a surface under which deep structures, i.e. values and norms, exist (Greimas & Courtes 1982; Greimas 1987). To our mind semiotics is very much about playing with two different levels of text: the syntagmatic surface, which is more or less faithful to the grammar, and the paradigmatic, semantic structure of values and norms hidden in the deeper meanings of interpretations.
Semiotic analysis deals precisely with the level of meaning which exists under the surface, but the only way to reach those meanings is through the textual level, the written or spoken text; that is why the tools are needed. In our studies we have used the semiotic square and actant analysis. The former is based on the distinction and categorisation of meanings, the latter on opening up the plot of narratives in order to reach the value structures.

Relevance:

10.00%

Publisher:

Abstract:

OBJECTIVES This study examined the near visual acuity of dentists in relation to age and magnification under simulated clinical conditions. MATERIALS AND METHODS Miniaturized visual tests were performed on posterior teeth of a dental phantom head in a simulated clinical setting (dental chair, operating lamp, dental mirror). The visual acuity of 40 dentists was measured under the following conditions: (1) natural visual acuity, distance of 300 mm; (2) natural visual acuity, free choice of distance; (3) Galilean loupes, magnification of ×2.5; (4) Keplerian loupes, ×4.3; (5) operating microscope, ×4, integrated light; (6) operating microscope, ×6.4, integrated light. RESULTS Visual acuity varied widely between individuals and was significantly lower in the group ≥40 years of age (p < 0.001). Significant differences were found between all tested conditions (p < 0.01). Furthermore, a correlation between visual acuity and age was found for all conditions. Performance with the microscope was better than with loupes, even at comparable magnification factors. Some dentists had better visual acuity without optical aids than others had with Galilean loupes. CONCLUSIONS Near visual acuity under simulated clinical conditions varies widely between individuals and decreases throughout life. Visual deficiencies can be compensated for with optical aids. CLINICAL RELEVANCE Newly developed miniaturized vision tests have made it possible to evaluate, in a clinically relevant way, the influence of magnification and age on the near visual acuity of dentists.

Relevance:

10.00%

Publisher:

Abstract:

The responses of many real-world problems can only be evaluated subject to noise. In order to make an efficient optimization of such problems possible, intelligent optimization strategies that successfully cope with noisy evaluations are required. In this article, a comprehensive review of existing kriging-based methods for the optimization of noisy functions is provided. In total, ten methods for choosing the sequential samples are described using a unified formalism. They are compared on analytical benchmark problems, whereby the usual assumption of homoscedastic Gaussian noise made in the underlying models is met. Different configurations (noise level, budget of observations, initial sample size) and setups (covariance functions) are considered. It is found that the choices of the initial sample size and the covariance function are not critical. The choice of the method, however, can result in significant differences in performance. In particular, the three most intuitive criteria are found to be poor alternatives. Although no criterion is found to be consistently more efficient than the others, two specialized methods appear more robust on average.
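As a rough illustration of the setting reviewed above, and not of any of the ten criteria compared in the article, the sketch below fits an ordinary kriging (Gaussian-process) model with a homoscedastic noise nugget to noisy 1-D observations and picks sequential samples with a simple lower-confidence-bound rule. The test function, kernel parameters, and sampling rule are illustrative assumptions.

```python
# Kriging-based sequential optimization of a noisy 1-D function (sketch).
# Homoscedastic Gaussian noise is assumed; all parameters are illustrative.

import numpy as np

rng = np.random.default_rng(1)
noise_sd = 0.1                      # homoscedastic noise level (nugget)

def f_noisy(x):                     # noisy objective (to be minimized)
    return np.sin(3 * x) + 0.5 * x**2 + rng.normal(0.0, noise_sd, np.shape(x))

def kernel(a, b, length=0.5, var=1.0):   # squared-exponential covariance
    return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

def krige(x_obs, y_obs, x_new):
    """Posterior mean and standard deviation of the kriging model."""
    K = kernel(x_obs, x_obs) + noise_sd**2 * np.eye(len(x_obs))
    Ks = kernel(x_obs, x_new)
    mean = Ks.T @ np.linalg.solve(K, y_obs)
    var = kernel(x_new, x_new).diagonal() - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mean, np.sqrt(np.clip(var, 0.0, None))

x_obs = np.linspace(-2.0, 2.0, 5)          # initial design
y_obs = f_noisy(x_obs)
candidates = np.linspace(-2.0, 2.0, 401)

for _ in range(20):                         # sequential sampling loop
    mean, sd = krige(x_obs, y_obs, candidates)
    x_next = candidates[np.argmin(mean - 2.0 * sd)]   # lower confidence bound
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, f_noisy(x_next))

best = candidates[np.argmin(krige(x_obs, y_obs, candidates)[0])]
print("estimated minimizer:", round(float(best), 3))
```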

Relevance:

10.00%

Publisher:

Abstract:

The production rate of right-handed neutrinos from a Standard Model plasma at a temperature above a hundred GeV is evaluated up to NLO in Standard Model couplings. The results apply in the so-called relativistic regime, referring parametrically to a mass M ~ πT, thereby generalizing previous NLO results which only apply in the non-relativistic regime M ≫ πT. The non-relativistic expansion is observed to converge for M ≳ 15T, but the smallness of the loop corrections allows it to be used in practice already for M ≳ 4T. In the latter regime any non-covariant dependence of the differential rate on the spatial momentum is shown to be mild. The loop expansion breaks down in the ultrarelativistic regime M ≪ πT, but after a simple mass resummation it nevertheless extrapolates reasonably well towards a result obtained previously through complete LPM resummation, apparently confirming a strong enhancement of the rate at high temperatures (which facilitates chemical equilibration). When combined with other ingredients, these results may help to improve the accuracy of leptogenesis computations operating above the electroweak scale.

Relevance:

10.00%

Publisher:

Abstract:

Using methods from effective field theory, we have recently developed a novel, systematic framework for the calculation of the cross sections for electroweak gauge-boson production at small and very small transverse momentum q_T, in which large logarithms of the scale ratio m_V/q_T are resummed to all orders. This formalism is applied to the production of Higgs bosons in gluon fusion at the LHC. The production cross section receives logarithmically enhanced corrections from two sources: the running of the hard matching coefficient and the collinear factorization anomaly. The anomaly leads to the dynamical generation of a non-perturbative scale q* ~ m_H exp(−const/α_s(m_H)) ≈ 8 GeV, which protects the process from receiving large long-distance hadronic contributions. We present numerical predictions for the transverse-momentum spectrum of Higgs bosons produced at the LHC, finding that it is quite insensitive to hadronic effects.

Relevance:

10.00%

Publisher:

Abstract:

In e+e− event-shape studies at LEP, two different measurements were sometimes performed: a "calorimetric" measurement using both charged and neutral particles, and a "track-based" measurement using just charged particles. Whereas calorimetric measurements are infrared and collinear safe, and therefore calculable in perturbative QCD, track-based measurements necessarily depend on nonperturbative hadronization effects. On the other hand, track-based measurements typically have smaller experimental uncertainties. In this paper, we present the first calculation of the event shape "track thrust" and compare it to measurements performed at ALEPH and DELPHI. This calculation is made possible by the recently developed formalism of track functions, which are nonperturbative objects describing how energetic partons fragment into charged hadrons. By incorporating track functions into soft-collinear effective theory, we calculate the distribution of track thrust with next-to-leading logarithmic resummation. Due to a partial cancellation between nonperturbative parameters, the distributions for calorimeter thrust and track thrust are remarkably similar, a feature also seen in LEP data.

Relevance:

10.00%

Publisher:

Abstract:

By using observables that only depend on charged particles (tracks), one can efficiently suppress pileup contamination at the LHC. Such measurements are not infrared safe in perturbation theory, so any calculation of track-based observables must account for hadronization effects. We develop a formalism to perform these calculations in QCD by matching partonic cross sections onto new nonperturbative objects called track functions, which absorb infrared divergences. The track function T_i(x) describes the energy fraction x of a hard parton i which is converted into charged hadrons. We give a field-theoretic definition of the track function and derive its renormalization group evolution, which is in excellent agreement with the pythia parton shower. We then perform a next-to-leading-order calculation of the total energy fraction of charged particles in e+e− → hadrons. To demonstrate the implications of our framework for the LHC, we match the pythia parton shower onto a set of track functions to describe the track mass distribution in Higgs plus one jet events. We also show how to reduce smearing due to hadronization fluctuations by measuring dimensionless track-based ratios.