896 results for "Number of non-conformities of the inspected item"
Abstract:
The objective of this thesis is to study the distribution of the number of principal ideals generated by an irreducible element in an algebraic number field, namely in the non-unique factorization ring of integers of such a field. In particular, we investigate the size of M(x), defined as M(x) = Σ 1, where the sum runs over the principal ideals (α) with α irreducible and |N(α)| ≤ x, x is any positive real number, and N(α) is the norm of α. We finally obtain asymptotic results for M(x).
Abstract:
The high-altitude lake Tso Moriri (32°55'46'' N, 78°19'24'' E; 4522 m a.s.l.) is situated at the margin of the ISM and westerly influences in the Trans-Himalayan region of Ladakh. Human settlements are rare, and domestic and wild animals concentrate on the alpine meadows. A set of modern surface samples and fossil pollen from the deep-water TMD core was evaluated with a focus on indicator types revealing human impact, grazing activities and lake system development during the last ca. 12 cal ka BP. Furthermore, the non-pollen palynomorph (NPP) record, comprising remains of limnic algae and invertebrates as well as fungal spores and charred plant tissue fragments, was examined in order to attest palaeolimnic phases and human impact, respectively. Changes in the early and middle Holocene limnic environment are mainly influenced by regional climatic conditions and glacier-fed meltwater flow in the catchment area. The NPP record indicates low lake productivity with high influx of freshwater between ca. 11.5 and 4.5 cal ka BP, which is in agreement with the regional monsoon dynamics and published climate reconstructions. Geomorphologic observations suggest that during this period of enhanced precipitation the lake had a regular outflow and contributed large amounts of water to the Sutlej River, the lower reaches of which were an integral part of the Indus Civilization area. The inferred minimum freshwater input and maximum lake productivity between ca. 4.5 and 1.8 cal ka BP coincide with the reconstruction of greatest aridity and glaciation in the Korzong valley, resulting in significantly reduced or even ceased outflow. We suggest that lowered lake levels and river discharge on a larger regional scale may have caused irrigation problems and harvest losses in the Indus valley and lowlands occupied by sedentary agricultural communities. This scenario, in turn, supports the theory that, firstly, Mature Harappan urbanism (ca. 
4.5-3.9 cal ka BP) emerged in order to facilitate storage, protection, administration, and redistribution of crop yields and, secondly, that the eventual collapse of the Harappan Culture (ca. 3.5-3 cal ka BP) was promoted by prolonged aridity. There is no clear evidence for human impact around Tso Moriri prior to ca. 3.7 cal ka BP, with a more distinct record since ca. 2.7 cal ka BP. This suggests that the sedimentary record from Tso Moriri primarily archives the regional climate history.
Abstract:
We develop general closed-form expressions for the mutual gravitational potential, resultant and torque acting upon a rigid tethered system moving in a non-uniform gravity field produced by an attracting body with revolution symmetry, such that an arbitrary number of zonal harmonics is considered. The final expressions are series expansions in two small parameters, related to the reference radius of the primary and the length of the tether, respectively, each of which is scaled by the mutual distance between their centers of mass. A few numerical experiments are performed to study the convergence behavior of the final expressions; we conclude that for high-precision applications it might be necessary to take into account additional perturbation terms, which come from the mutual two-body interaction.
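A plausible shape of such a double expansion, sketched in LaTeX (the symbols R for the reference radius, ℓ for the tether length, ρ for the mutual distance, and the coefficients c_ij are assumptions for illustration, not notation taken from the abstract):

```latex
% Two small expansion parameters (hypothetical notation):
\varepsilon_1 = \frac{R}{\rho}, \qquad
\varepsilon_2 = \frac{\ell}{\rho}, \qquad
\varepsilon_1,\, \varepsilon_2 \ll 1,
% so the mutual potential is sought as a double series whose
% coefficients depend on the zonal harmonics and relative attitude:
U = \frac{\mu m}{\rho}
    \sum_{i \ge 0} \sum_{j \ge 0}
    c_{ij}\, \varepsilon_1^{\,i}\, \varepsilon_2^{\,j}.
```

Truncating the series at low orders in ε1 and ε2 is what makes closed-form resultant and torque expressions tractable; the abstract's convergence experiments probe how many terms such a truncation needs.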
Abstract:
The purposes of this study were (1) to validate the item-attribute matrix using two levels of attributes (Level 1 attributes and Level 2 sub-attributes), and (2), through retrofitting diagnostic models to the mathematics test of the Trends in International Mathematics and Science Study (TIMSS), to evaluate the construct validity of the TIMSS mathematics assessment by comparing the results of two assessment booklets. Item data were extracted from Booklets 2 and 3 for the 8th grade in TIMSS 2007, which included a total of 49 mathematics items and every student's response to every item. The study developed three categories of attributes at two levels: content, cognitive process (TIMSS or new), and comprehensive cognitive process (or IT), based on the TIMSS assessment framework, cognitive procedures, and item type. At level one, there were 4 content attributes (number, algebra, geometry, and data and chance), 3 TIMSS process attributes (knowing, applying, and reasoning), and 4 new process attributes (identifying, computing, judging, and reasoning). At level two, the level 1 attributes were further divided into 32 sub-attributes. There was only one level of IT attributes (multiple steps/responses, complexity, and constructed-response). Twelve Q-matrices (4 originally specified, 4 random, and 4 revised) were investigated with eleven Q-matrix models (QM1 ~ QM11) using multiple regression and the least squares distance method (LSDM). Comprehensive analyses indicated that the proposed Q-matrices explained most of the variance in item difficulty (i.e., 64% to 81%). The cognitive process attributes contributed to the item difficulties more than the content attributes, and the IT attributes contributed much more than both the content and process attributes. The new retrofitted process attributes explained the items better than the TIMSS process attributes. Results generated from the level 1 attributes and the level 2 attributes were consistent. 
Most attributes could be used to recover students' performance, but some attributes' probabilities showed unreasonable patterns. The analysis approaches could not demonstrate whether the same construct validity was supported across booklets. The proposed attributes and Q-matrices explained the items of Booklet 2 better than the items of Booklet 3. The specified Q-matrices explained the items better than the random Q-matrices.
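As a rough sketch of the multiple-regression step described in this abstract (the Q-matrix, attribute weights, and noise level below are all invented for illustration; this is not the study's code or data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 49 items scored on 4 binary content attributes,
# loosely mirroring the TIMSS retrofitting described above.
n_items, n_attrs = 49, 4
Q = rng.integers(0, 2, size=(n_items, n_attrs)).astype(float)

# Simulated item difficulties driven by the attributes plus noise
# (the weights are arbitrary, chosen only for the demonstration).
true_weights = np.array([0.8, 0.5, 1.2, 0.3])
difficulty = Q @ true_weights + rng.normal(0.0, 0.3, n_items)

# Least-squares regression of item difficulty on the Q-matrix columns.
X = np.column_stack([np.ones(n_items), Q])
coef, *_ = np.linalg.lstsq(X, difficulty, rcond=None)
pred = X @ coef

# Proportion of variance in item difficulty explained by the attributes.
ss_res = float(np.sum((difficulty - pred) ** 2))
ss_tot = float(np.sum((difficulty - difficulty.mean()) ** 2))
r_squared = 1.0 - ss_res / ss_tot
print(f"R^2 = {r_squared:.2f}")
```

Repeating the fit with a random Q-matrix in place of the specified one shows the drop in explained variance that the study uses to argue for its Q-matrix specifications.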
Abstract:
Purpose: Citations received by papers published within a journal serve to increase its bibliometric impact. The objective of this paper was to assess the influence of publication language, article type, number of authors, and year of publication on the citations received by papers published in Gaceta Sanitaria, a Spanish-language journal of public health. Methods: The information sources were the journal website and the Web of Knowledge of the Institute for Scientific Information. The period analyzed was from 2007 to 2010. We included original articles, brief original articles, and reviews published within that period. We manually extracted information on the variables analyzed and differentiated between total citations and self-citations. We constructed logistic regression models to analyze the probability of a Gaceta Sanitaria paper being cited or not, taking into account the aforementioned independent variables. We also analyzed the probability of receiving citations from non-Spanish authors. Results: Two hundred forty papers fulfilled the inclusion criteria. The included papers received a total of 287 citations, which fell to 202 when self-citations were excluded. The only variable influencing the probability of being cited was the publication year. After excluding never-cited papers, time since publication and review article type were associated with the highest probabilities of being cited. Papers in English and review articles had a higher probability of citation from non-Spanish authors. Conclusions: Publication language has no influence on the citations received by a national, non-English journal. Reviews in English have the highest probability of citation from abroad. Editors should consider this information when deciding policies to raise the bibliometric impact factor of their journals.
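A minimal sketch of the kind of logistic model this abstract describes, fit by plain gradient ascent on simulated data (the effect size and the simulated citations are invented; this is not the paper's analysis):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: publication year for 240 papers (2007-2010) and a
# binary indicator of whether each paper was ever cited.
n = 240
year = rng.integers(2007, 2011, n).astype(float)
x = year - year.mean()                      # centered covariate

# Simulate "older papers are more likely to have been cited":
# P(cited) = sigmoid(-0.9 * centered_year).
p_true = 1.0 / (1.0 + np.exp(0.9 * x))
cited = (rng.random(n) < p_true).astype(float)

# Fit P(cited) = sigmoid(b0 + b1 * x) by gradient ascent on the
# log-likelihood; no external ML library needed.
b0, b1 = 0.0, 0.0
lr = 0.1
for _ in range(2000):
    pred = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
    b0 += lr * float(np.mean(cited - pred))
    b1 += lr * float(np.mean((cited - pred) * x))

print(f"slope for centered publication year: {b1:.2f}")
```

A negative slope recovers the simulated pattern: the earlier the publication year, the higher the probability of having been cited at least once.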
Abstract:
Array measurements have become a valuable tool for site response characterization in a non-invasive way. The array design, i.e. size, geometry and number of stations, has a great influence on the quality of the obtained results. Of these parameters, the number of available stations is usually the main limitation in field experiments, because of the economic and logistical constraints involved. Sometimes one or more stations of the initially planned array layout, carefully designed before the fieldwork campaign, do not work properly, modifying the prearranged geometry. At other times, it is not possible to set up the desired array layout at all because of a lack of stations. Therefore, for a planned array layout, the number of operative stations and their arrangement in the array become a crucial point in the acquisition stage and subsequently in the dispersion curve estimation. In this paper we carry out an experimental work to determine the minimum number of stations that would provide reliable dispersion curves for three prearranged array configurations (triangular, circular with central station and polygonal geometries). For the optimization study, we analyze together the theoretical array responses and the experimental dispersion curves obtained through the f-k method. In the case of the f-k method, we compare the dispersion curves obtained for the original or prearranged arrays with the ones obtained for the modified arrays, i.e. the dispersion curves obtained when a certain number of stations n is removed, each time, from the original layout of X geophones. The comparison is evaluated by means of a misfit function, which helps us to determine how strongly the studied geometries are constrained by station removal and which station or combination of stations most degrades the array capability when unavailable. 
All this information might be crucial to improve future array designs, determining when it is possible to optimize the number of arranged stations without losing the reliability of the obtained results.
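One plausible way to cast the dispersion-curve comparison in code (the RMS relative misfit and the synthetic curves below are assumptions chosen for illustration, not the paper's actual misfit definition or data):

```python
import numpy as np

def dispersion_misfit(c_ref, c_mod):
    """RMS relative misfit between the reference dispersion curve and
    the curve from a modified (station-reduced) array, sampled at the
    same frequencies."""
    c_ref = np.asarray(c_ref, dtype=float)
    c_mod = np.asarray(c_mod, dtype=float)
    return float(np.sqrt(np.mean(((c_mod - c_ref) / c_ref) ** 2)))

# Synthetic phase-velocity curves on a common frequency band.
freqs = np.linspace(2.0, 20.0, 50)                   # Hz
c_full = 1000.0 / np.sqrt(freqs)                     # full-array curve (m/s)
c_reduced = c_full * (1.0 + 0.02 * np.sin(freqs))    # perturbed reduced-array curve

m = dispersion_misfit(c_full, c_reduced)
print(f"misfit = {m:.4f}")
```

Evaluating this misfit for every station-removal combination, and ranking the combinations by it, reproduces the kind of sensitivity ranking the study performs across its triangular, circular and polygonal layouts.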
Abstract:
v.15:no.1(1965)
Abstract:
Includes citations for 703 funds, of which 568 are appropriated and 135 are non-appropriated funds for FY 2005 and 2006.
Abstract:
Stacy A. Lischka & William L. Anderson, principal investigators.
Abstract:
Item 1038-A, 1038-B (MF)
Abstract:
The Center for Epidemiologic Studies Depression Scale (CES-D) is frequently used in epidemiological surveys to screen for depression, especially among older adults. This article addresses the problem of non-completion of a short form of the CES-D (CESD-10) in a mailed survey of 73- to 78-year-old women enrolled in the Australian Longitudinal Study on Women's Health. Completers of the CESD-10 had more education, found it easier to manage on available income and reported better physical and mental health. The Medical Outcomes Study Short Form Health Survey (SF-36) scores for non-completers were intermediate between those for women classified as depressed and not depressed using the CESD-10. Indicators of depression had an inverted U-shaped relationship with the number of missing CESD-10 items and were most frequent for women with two to seven items missing. Future research should pay particular attention to the level of missing data in depression scales and report its potential impact on estimates of depression.
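The missing-item tabulation behind the inverted U-shaped finding can be sketched as follows (the respondent matrix and the 10% non-completion rate are simulated for illustration, not ALSWH survey data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical respondent-by-item matrix for a 10-item scale scored
# 0-3, with ~10% of entries blanked out to mimic non-completion.
n_women, n_items = 100, 10
responses = rng.integers(0, 4, size=(n_women, n_items)).astype(float)
responses[rng.random(responses.shape) < 0.10] = np.nan

# Count missing items per respondent and flag the 2-7 band that the
# study associates most strongly with indicators of depression.
n_missing = np.isnan(responses).sum(axis=1)
partial = (n_missing >= 2) & (n_missing <= 7)
print(f"{int(partial.sum())} of {n_women} respondents have 2-7 items missing")
```

Cross-tabulating this band against depression indicators is what reveals the inverted U-shape: fully complete and almost fully missing forms sit at the ends, partial completion in the middle.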
Abstract:
Mineral wool insulation material applied to the primary cooling circuit of a nuclear reactor may be damaged in the course of a loss of coolant accident (LOCA). The insulation material released by the leak may compromise the operation of the emergency core cooling system (ECCS), as it may be transported together with the coolant, in the form of mineral wool fiber agglomerate (MWFA) suspensions, to the containment sump strainers, which are mounted at the inlet of the ECCS to keep any debris away from the emergency cooling pumps. In the further course of the LOCA, the MWFA may block or penetrate the strainers. In addition to the impact of MWFA on the pressure drop across the strainers, corrosion products formed over time may also accumulate in the fiber cakes on the strainers, which can lead to a significant increase in the strainer pressure drop and result in cavitation in the ECCS. Therefore, it is essential to understand the transport characteristics of the insulation materials in order to determine the long-term operability of nuclear reactors that undergo a LOCA. An experimental and theoretical study performed by the Helmholtz-Zentrum Dresden-Rossendorf and the Hochschule Zittau/Görlitz is investigating the phenomena that may be observed in the containment vessel during a primary circuit coolant leak. The study entails the generation of fiber agglomerates, the determination of their transport properties in single- and multi-effect experiments, and the long-term effects that particles formed by corrosion of metallic containment internals in the coolant medium have on the strainer pressure drop. The focus of this presentation is on the numerical models that are used to predict the transport of MWFA by CFD simulations. A number of pseudo-continuous dispersed phases of spherical wetted agglomerates can represent the MWFA. 
The size, the density, the relative viscosity of the fluid-fiber agglomerate mixture and the turbulent dispersion all affect how the fiber agglomerates are transported. In the cases described here, the size is kept constant while the density is modified. This definition affects both the terminal velocity and the volume fraction of the dispersed phases. Only one of the single-effect experimental scenarios used to validate the numerical models is described here. The scenario examines the suspension and horizontal transport of the fiber agglomerates in a racetrack-type channel. The corresponding experiments will be described in an accompanying presentation (see abstract of Seeliger et al.).
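How a density change alone shifts the settling behavior at fixed size can be sketched with the Stokes settling law for small spheres (an assumption chosen for illustration; the actual CFD model may use a different drag correlation, and all parameter values here are invented):

```python
def stokes_terminal_velocity(d, rho_p, rho_f=998.0, mu=1.0e-3, g=9.81):
    """Terminal settling velocity (m/s) of a sphere of diameter d (m)
    and density rho_p (kg/m^3) in water (density rho_f kg/m^3,
    dynamic viscosity mu Pa*s)."""
    return g * d ** 2 * (rho_p - rho_f) / (18.0 * mu)

d = 1.0e-3  # agglomerate size held fixed at 1 mm, density varied
for rho in (1005.0, 1020.0, 1050.0):
    v = stokes_terminal_velocity(d, rho)
    print(f"rho_p = {rho:6.1f} kg/m^3 -> v_t = {v * 1000:.2f} mm/s")
```

The velocity grows linearly with the density excess over the coolant, which is why varying the density of the pseudo-continuous phases is an effective handle on the simulated transport.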
Abstract:
The Securities and Exchange Commission (SEC) in the United States and in particular its immediately past chairman, Christopher Cox, has been actively promoting an upgrade of the EDGAR system of disseminating filings. The new generation of information provision has been dubbed by Chairman Cox, "Interactive Data" (SEC, 2006). In October this year the Office of Interactive Disclosure was created (http://www.sec.gov/news/press/2007/2007-213.htm). The focus of this paper is to examine the way in which the non-professional investor has been constructed by various actors. We examine the manner in which Interactive Data has been sold as the panacea for financial market 'irregularities' by the SEC and others. The academic literature shows almost no evidence of researching non-professional investors in any real sense (Young, 2006). Both this literature and the behaviour of representatives of institutions such as the SEC and FSA appears to find it convenient to construct this class of investor in a particular form and to speak for them. We theorise the activities of the SEC and its chairman in particular over a period of about three years, both following and prior to the 'credit crunch'. Our approach is to examine a selection of the policy documents released by the SEC and other interested parties and the statements made by some of the policy makers and regulators central to the programme to advance the socio-technical project that is constituted by Interactive Data. We adopt insights from ANT and more particularly the sociology of translation (Callon, 1986; Latour, 1987, 2005; Law, 1996, 2002; Law & Singleton, 2005) to show how individuals and regulators have acted as spokespersons for this malleable class of investor. We theorise the processes of accountability to investors and others and in so doing reveal the regulatory bodies taking the regulated for granted. 
The possible implications of technological developments in digital reporting have been identified also by the CEOs of the six biggest audit firms in a discussion document on the role of accounting information and audit in the future of global capital markets (DiPiazza et al., 2006). The potential for digital reporting enabled through XBRL to "revolutionize the entire company reporting model" (p.16) is discussed, and they conclude that the new model "should be driven by the wants of investors and other users of company information,..." (p.17; emphasis in the original). Here, rather than examine the somewhat elusive and vexing question of whether adding interactive functionality to 'traditional' reports can achieve the benefits claimed for non-professional investors, we wish to consider the rhetorical and discursive moves in which the SEC and others have engaged to present such developments as providing clearer reporting and accountability standards and serving the interests of this constructed and largely unknown group - the non-professional investor.
Abstract:
Undergraduate programmes in construction management and other closely related built environment disciplines are currently taught and assessed on a modular basis. This is the case in the UK and in many other countries globally. However, it can be argued that professionally oriented programmes like these are better assessed on a non-modular basis, in order to produce graduates who can apply knowledge from different subject contents in cohesion to solve complex practical scenarios in their work environments. Medical programmes, where students are assessed on a non-modular basis, can be cited as an area where this is already being done. A preliminary study was undertaken to explore the applicability of non-modular assessment within construction management undergraduate education. A selected sample of university academics was interviewed to gather their perspectives on the applicability of non-modular assessment. General acceptance was observed among the academics involved that integrating non-modular assessment is applicable and will be beneficial. All academics stated that at least some form of non-modular assessment is currently used in their programmes. Examples where cross-modular knowledge is assessed included comprehensive/multi-disciplinary project modules and the creation of larger modules that amalgamate a number of related subject areas. Rather than a complete shift from modular to non-modular assessment, an approach where non-modular assessment is integrated into, and its use further expanded within, the current system is therefore suggested, owing to the potential benefits this form of assessment offers to professionally aligned built environment programmes.
Abstract:
Current interest in measuring quality of life is generating interest in the construction of computerized adaptive tests (CATs) with Likert-type items. Calibration of an item bank for use in CAT requires collecting responses to a large number of candidate items. However, the number is usually too large to administer to each subject in the calibration sample. The concurrent anchor-item design solves this problem by splitting the items into separate subtests, with some common items across subtests; then administering each subtest to a different sample; and finally running estimation algorithms once on the aggregated data array, from which a substantial number of responses are then missing. Although the use of anchor-item designs is widespread, the consequences of several configuration decisions on the accuracy of parameter estimates have never been studied in the polytomous case. The present study addresses this question by simulation, comparing the outcomes of several alternatives on the configuration of the anchor-item design. The factors defining variants of the anchor-item design are (a) subtest size, (b) balance of common and unique items per subtest, (c) characteristics of the common items, and (d) criteria for the distribution of unique items across subtests. The results of this study indicate that maximizing accuracy in item parameter recovery requires subtests of the largest possible number of items and the smallest possible number of common items; the characteristics of the common items and the criterion for distribution of unique items do not affect accuracy.
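The splitting logic of a concurrent anchor-item design can be sketched as follows (the bank size, number of subtests, and anchor size are invented; the study's simulations vary exactly these configuration factors):

```python
import random

def build_anchor_design(item_ids, n_subtests, n_common):
    """Split an item bank into subtests that share a common anchor
    block, so each calibration subsample answers only its own subtest
    and the anchors link the subtests for joint estimation."""
    items = list(item_ids)
    random.shuffle(items)
    common = items[:n_common]                # anchor items in every subtest
    unique = items[n_common:]
    per_subtest = len(unique) // n_subtests  # even split of unique items
    return [common + unique[k * per_subtest:(k + 1) * per_subtest]
            for k in range(n_subtests)]

random.seed(0)
bank = range(120)                            # 120 hypothetical Likert items
subtests = build_anchor_design(bank, n_subtests=4, n_common=8)
print([len(s) for s in subtests])            # 8 common + 28 unique items each
```

Aggregating the responses to these subtests into one data array, with structurally missing entries for every item a subsample never saw, is the input the estimation algorithm runs on once; the study's conclusion favors large subtests with few common items in this layout.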