970 results for Classical testing theory
Abstract:
Classical transport theory is employed to analyze the hot quark-gluon plasma at the leading order in the coupling constant. A condition on the (covariantly conserved) color current is obtained. From this condition, the generating functional of hard thermal loops with an arbitrary number of soft external bosonic legs can be derived. Our approach, besides being more direct than alternative ones, shows that hard thermal loops are essentially classical.
Abstract:
Planning with partial observability can be formulated as a non-deterministic search problem in belief space. The problem is harder than classical planning, as keeping track of beliefs is harder than keeping track of states, and searching for action policies is harder than searching for action sequences. In this work, we develop a framework for partial observability that avoids these limitations and leads to a planner that scales up to larger problems. For this, the class of problems is restricted to those in which 1) the non-unary clauses representing the uncertainty about the initial situation are invariant, and 2) variables that are hidden in the initial situation do not appear in the body of conditional effects, which are all assumed to be deterministic. We show that such problems can be translated in linear time into equivalent fully observable non-deterministic planning problems, and that a slight extension of this translation renders the problem solvable by means of classical planners. The whole approach is sound and complete provided that, in addition, the state space is connected. Experiments are also reported.
Abstract:
The purpose of this paper was to evaluate the psychometric properties of a stage-specific self-efficacy scale for physical activity with classical test theory (CTT), confirmatory factor analysis (CFA) and item response modeling (IRM). Women who enrolled in the Women On The Move study completed a 20-item stage-specific self-efficacy scale developed for this study [n = 226, 51.1% African-American and 48.9% Hispanic women, mean age = 49.2 (±7.0) years, mean body mass index = 29.7 (±6.4)]. Three analyses were conducted: (i) a CTT item analysis, (ii) a CFA to validate the factor structure and (iii) an IRM analysis. The CTT item analysis and the CFA results showed that the scale had high internal consistency (ranging from 0.76 to 0.93) and a strong factor structure. Results also showed that the scale could be improved by modifying or eliminating some of the existing items without significantly altering the content of the scale. The IRM results also showed that the scale had few items that targeted high self-efficacy, and the stage-specific assumption underlying the scale was rejected. In addition, the IRM analyses found that the five-point response format functioned more like a four-point response format. Overall, employing multiple methods to assess the psychometric properties of the stage-specific self-efficacy scale demonstrated the complementary nature of these methods and highlighted the strengths and weaknesses of this scale.
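The internal-consistency coefficients reported in CTT item analyses like the one above are typically Cronbach's alpha. A minimal sketch of that computation, on toy data rather than the study's (the function name and scores are illustrative assumptions):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Toy data: 5 respondents answering 4 Likert items (1-5)
scores = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
])
print(round(cronbach_alpha(scores), 3))
```

Values around 0.8-0.9, as in the abstract, indicate that the items covary strongly relative to their individual variances.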
Abstract:
2000 Mathematics Subject Classification: 13N15, 13A50, 16W25.
Abstract:
2000 Mathematics Subject Classification: 13N15, 13A50, 13F20.
Abstract:
OBJECTIVE: To review the psychometric properties of the Beck Depression Inventory-II (BDI-II) as a self-report measure of depression in a variety of settings and populations. METHODS: Relevant studies of the BDI-II were retrieved through a search of electronic databases, a hand search, and contact with authors. Retained studies (k = 118) were allocated into three groups: non-clinical, psychiatric/institutionalized, and medical samples. RESULTS: The internal consistency was described as around 0.9 and the retest reliability ranged from 0.73 to 0.96. The correlation between the BDI-II and the Beck Depression Inventory (BDI-I) was high, and substantial overlap with measures of depression and anxiety was reported. The criterion-based validity showed good sensitivity and specificity for detecting depression in comparison to the adopted gold standard. However, the cutoff score to screen for depression varied according to the type of sample. Factor analysis showed a robust dimension of general depression composed of two constructs: cognitive-affective and somatic-vegetative. CONCLUSIONS: The BDI-II is a relevant psychometric instrument, showing high reliability, capacity to discriminate between depressed and non-depressed subjects, and improved concurrent, content, and structural validity. Based on available psychometric evidence, the BDI-II can be viewed as a cost-effective questionnaire for measuring the severity of depression, with broad applicability for research and clinical practice worldwide.
Abstract:
We consider the general response theory recently proposed by Ruelle for describing the impact of small perturbations on the non-equilibrium steady states resulting from Axiom A dynamical systems. We show that the causality of the response functions entails the possibility of writing a set of Kramers-Kronig (K-K) relations for the corresponding susceptibilities at all orders of nonlinearity. Nonetheless, only a special class of directly observable susceptibilities obeys K-K relations. Specific results are provided for the case of arbitrary-order harmonic response, which allows for a very comprehensive K-K analysis and the establishment of sum rules connecting the asymptotic behavior of the harmonic generation susceptibility to the short-time response of the perturbed system. These results set in a more general theoretical framework previous findings obtained for optical systems and simple mechanical models, and shed light on the very general impact of considering the principle of causality for testing self-consistency: the described dispersion relations constitute unavoidable benchmarks that any experimental and model-generated dataset must obey. The theory exposed in the present paper is dual to the time-dependent theory of perturbations to equilibrium states and to non-equilibrium steady states, and has in principle a similar range of applicability and limitations. In order to connect the equilibrium and the non-equilibrium steady-state cases, we show how to rewrite the classical response theory by Kubo so that response functions formally identical to those proposed by Ruelle, apart from the measure involved in the phase space integration, are obtained. These results, taking into account the chaotic hypothesis by Gallavotti and Cohen, might be relevant in several fields, including climate research. In particular, whereas the fluctuation-dissipation theorem does not work for non-equilibrium systems, because of the non-equivalence between internal and external fluctuations, K-K relations might be robust tools for the definition of a self-consistent theory of climate change.
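For orientation, the first-order dispersion relations underlying this analysis take the textbook Kramers-Kronig form (stated here for a linear susceptibility analytic in the upper half complex-frequency plane; the paper's higher-order generalizations are more involved):

```latex
\operatorname{Re}\chi(\omega) = \frac{1}{\pi}\,\mathrm{P}\!\int_{-\infty}^{\infty}
  \frac{\operatorname{Im}\chi(\omega')}{\omega'-\omega}\,d\omega',
\qquad
\operatorname{Im}\chi(\omega) = -\frac{1}{\pi}\,\mathrm{P}\!\int_{-\infty}^{\infty}
  \frac{\operatorname{Re}\chi(\omega')}{\omega'-\omega}\,d\omega',
```

where P denotes the Cauchy principal value; causality of the response function is what guarantees the analyticity these relations rest on.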
Abstract:
Over the past few decades, communication technologies have developed enormously. New networks, access techniques, protocols and terminal devices have been created at an ever-increasing pace, with no signs of slowing down. Mobile applications in particular have grown their market share recently. Unlicensed Mobile Access (UMA) is a new access technique for mobile terminals that enables access to the GSM core network over WLAN or Bluetooth. This master's thesis focuses on the technologies related to UMA, which are examined more closely in the first chapters. The aim is to present what UMA means and how different techniques can be applied in its implementations. Before new technologies can be applied commercially, they must be comprehensively tested. Different testing methods are applied to hardware and software testing, but the goal is the same: to reduce the unreliability of the product under test and to increase its quality. Although UMA mainly comprises existing techniques, it still introduces a new network element and two new communication protocols. Before any UMA-capable solutions can be brought to market, many different testing methods must be carried out to ensure the correct functionality of the new product. Because this thesis deals with a new technology, a separate chapter is also devoted to testing methods and general testing theory. That chapter presents different perspectives on testing, and based on them a test software is built. The aim is to create software that can be used to verify the operation of the UMA-RR protocol in the target environment.
Abstract:
Today's business environment has become increasingly unpredictable and fast-changing because of global competition. This new environment requires companies to organize their control differently, e.g. by logistic process thinking. Logistic process thinking in software engineering applies the principles of the production process to immaterial products. Processes must be optimized so that every phase adds value to the customer, and lead times can be cut shorter to meet new customer requirements. The purpose of this thesis is to examine and optimize the testing processes of software engineering, concentrating on module testing, functional testing and their interface. The concept of logistic process thinking is introduced through the production process, the value-added model and process management. Testing theory based on the literature is also presented, concentrating on module testing and functional testing. The testing processes of the Case Company are presented together with the project models in which they are implemented. The real-life practices in module testing and functional testing and their interface are examined through interviews. These practices are analyzed against the processes and the testing theory, through which ideas for optimizing the testing process are introduced. The project world of the Case Company is also introduced, together with two example testing projects in different life-cycle phases. The examples show how much of the project effort is put into different types of testing.
Abstract:
This paper is a historical companion to a previous one, which studied the so-called abstract Galois theory as formulated by the Portuguese mathematician José Sebastião e Silva (see da Costa, Rodrigues (2007)). Our purpose is to present some applications of abstract Galois theory to higher-order model theory, to discuss Silva's notion of expressibility, and to outline a classical Galois theory that can be obtained inside the two versions of the abstract theory, those of Mark Krasner and of Silva. Some comments are made on the universal theory of (set-theoretic) structures.
Abstract:
Classical counterinsurgency theory – written before the 19th century – has generally strongly opposed atrocities, as have theoreticians writing on how to conduct insurgencies. For a variety of reasons – ranging from pragmatic to religious or humanitarian – theoreticians of both groups have particularly argued for the lenient treatment of civilians associated with the enemy camp, although there is a marked pattern of exceptions, for example, where heretics or populations of cities refusing to surrender to besieging armies are concerned. And yet atrocities – defined here as acts of violence against the unarmed (non-combatants, or wounded or imprisoned enemy soldiers), or needlessly painful and/or humiliating treatment of enemy combatants, beyond any action needed to incapacitate or disarm them – occur frequently in small wars. Examples abound where these exhortations have been ignored, both by forces engaged in an insurgency and by forces trying to put down a rebellion. Why have so many atrocities been committed in war if so many arguments have been put forward against them? This is the basic puzzle for which the individual contributions to this special issue are seeking to find tentative answers, drawing on case studies.
Abstract:
This perspectives paper and its associated commentaries examine Alan Rugman's conceptual contribution to international business scholarship. Most significantly, we highlight Rugman's version of internalization theory as an approach that integrates transaction cost economics and ‘classical’ internalization theory with elements from the resource-based view, such that it is especially relevant to strategic management. In reviewing his oeuvre, we also offer observations on his ideas for ‘new internalization theory’. We classify his other novel insights into four categories: Network Multinationals; National competitiveness; Development and public policy; and Emerging Economy MNEs. This special section offers multiple views on how his work informed the larger academic debate and considers how these ideas might evolve in the longer term.
Abstract:
High flexural strength and stiffness can be achieved by forming a thin panel into a wave shape perpendicular to the bending direction. The use of corrugated shapes to gain flexural strength and stiffness is common in metal and reinforced plastic products. However, there is no commercial production of corrugated wood composite panels. This research focuses on the application of corrugated shapes to wood strand composite panels. Beam theory, classical plate theory and finite element models were used to analyze the bending behavior of corrugated panels. The most promising shallow corrugated panel configuration was identified based on structural performance and compatibility with construction practices. The selected corrugation profile has a wavelength of 8”, a channel depth of ¾”, a sidewall angle of 45 degrees and a panel thickness of 3/8”. Panels measuring 16”x16” were produced using random mats and 3-layer aligned mats with surface flakes parallel to the channels. Strong axis and weak axis bending tests were conducted. The test results indicate that flake orientation has little effect on the strong axis bending stiffness. The 3/8” thick random mat corrugated panels exhibit bending stiffness (400,000 lbs-in2/ft) and bending strength (3,000 in-lbs/ft) higher than 23/32” or 3/4” thick APA Rated Sturd-I-Floor with a 24” o.c. span rating. Shear and bearing test results show that the corrugated panel can withstand more than 50 psf of uniform load at a 48” joist spacing. Molding trials on 16”x16” panels provided data for full size panel production. Full size 4’x8’ shallow corrugated panels were produced with only minor changes to the current oriented strandboard manufacturing process. Panel testing was done to simulate floor loading during construction (without a top underlayment layer) and during occupancy (with an underlayment over the panel to form a composite deck).
Flexural tests were performed in single-span and two-span bending with line loads applied at mid-span. The average strong axis bending stiffness and bending strength of the full size corrugated panels (without the underlayment) were over 400,000 lbs-in2/ft and 3,000 in-lbs/ft, respectively. The composite deck system, which consisted of OSB sheathing (15/32” thick) nail-glued (using 3d ring-shank nails and AFG-01 subfloor adhesive) to the corrugated subfloor, achieved about 60% of the full composite stiffness, resulting in about 3 times the bending stiffness of the corrugated subfloor (1,250,000 lbs-in2/ft). Based on the LRFD design criteria, the corrugated composite floor system can carry 40 psf of unfactored uniform loads, limited by the L/480 deflection limit state, at a 48” joist spacing. Four 10-ft long composite T-beam specimens were built and tested for the composite action and the load sharing between a 24” wide corrugated deck system and the supporting I-joist. The average bending stiffness of the composite T-beam was 1.6 times higher than the bending stiffness of the I-joist. An 8-ft x 12-ft mock-up floor was built to evaluate construction procedures. The assembly of the composite floor system is relatively simple. The corrugated composite floor system might be able to offset the cheaper labor costs of the single-layer Sturd-I-Floor through material savings. However, no conclusive result can be drawn in terms of construction costs at this point without an in-depth cost analysis of the two systems. The shallow corrugated composite floor system might be a potential alternative to the Sturd-I-Floor in the near future because of the excellent flexural stiffness it provides.
Abstract:
The accuracy of simulating the aerodynamic and structural properties of the blades is crucial in wind-turbine technology. Hence the models used to implement these features need to be very precise, and their level of detail needs to be high. With the variety of blade designs being developed, the models should be versatile enough to adapt to the changes required by every design. We implement a combination of numerical models associated with the structural and aerodynamic parts of the simulation, using the computational power of a parallel HPC cluster. The structural part models the heterogeneous internal structure of the beam based on a novel implementation of the Generalized Timoshenko Beam Model technique. Using this technique, the 3-D structure of the blade is reduced into a 1-D beam that is asymptotically equivalent. This reduces the computational cost of the model without compromising its accuracy. This structural model interacts with the flow model, which is a modified version of Blade Element Momentum (BEM) theory. The modified version of the BEM accounts for the large deflections of the blade and also considers the pre-defined structure of the blade. The coning and sweep of the blade, the tilt of the nacelle and the twist of the sections along the blade length are all computed by the model; none of these is considered in classical BEM theory. Each of these two models provides feedback to the other, and the interactive computations lead to more accurate outputs. We successfully implemented the computational models to analyze and simulate the structural and aerodynamic aspects of the blades. The interactive nature of these models and their ability to recompute data using feedback from each other make this code more efficient than the commercial codes available.
In this thesis we start off with the verification of these models by testing them on the well-known benchmark blade for the NREL-5MW Reference Wind Turbine, an alternative fixed-speed stall-controlled blade design proposed by Delft University, and a novel alternative design that we proposed for a variable-speed stall-controlled turbine, which offers the potential for more uniform power control and improved annual energy production. To optimize the power output of the stall-controlled blade, we modify the existing designs and study their behavior using the aforementioned aeroelastic model.
Abstract:
The position effect describes the influence of just-completed items in a psychological scale on subsequent items. This effect has been repeatedly reported for psychometric reasoning scales and is assumed to reflect implicit learning during testing. One way to identify the position effect is fixed-links modeling. With this approach, two latent variables are derived from the test items. Factor loadings of one latent variable are fixed to 1 for all items to represent ability-related variance. Factor loadings on the second latent variable increase from the first to the last item, describing the position effect. Previous studies using fixed-links modeling on the position effect investigated reasoning scales constructed in accordance with classical test theory (e.g., Raven's Progressive Matrices) but, to the best of our knowledge, no Rasch-scaled tests. These tests, however, meet stricter requirements of item homogeneity. In the present study, therefore, we will analyze data from 239 participants who have completed the Rasch-scaled Viennese Matrices Test (VMT). Applying a fixed-links modeling approach, we will test whether a position effect can be depicted as a latent variable and separated from a latent variable representing basic reasoning ability. The results have implications for the assumption of homogeneity in Rasch-homogeneous tests.
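The two-factor loading pattern described above can be sketched numerically. A minimal illustration of the fixed loading matrix (the 6-item length and the square-root growth of the position loadings are our illustrative assumptions; fixed-links studies use various fixed increase functions, and the VMT model itself may differ):

```python
import numpy as np

n_items = 6  # hypothetical test length, for illustration only

# Latent variable 1 (ability): loadings fixed to 1 for every item,
# so this factor captures variance shared equally across all items.
ability_loadings = np.ones(n_items)

# Latent variable 2 (position effect): loadings fixed to increase
# monotonically with item position; sqrt(position) is one common choice.
position_loadings = np.sqrt(np.arange(n_items))

# Fixed loading matrix Lambda with one column per latent variable;
# only the factor variances/covariances are estimated from data.
Lambda = np.column_stack([ability_loadings, position_loadings])
print(Lambda)
```

Because every loading in Lambda is fixed rather than estimated, the model separates a constant ability component from a component that grows over the course of the test, which is exactly how the position effect is isolated.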