878 results for Accreditation: What It Is . . . and Is Not
Abstract:
In this study, I build upon my previous research, in which I focus on religious doctrine as a gendered disciplinary apparatus, and examine the witch trials in early modern England and Italy in light of socio-economic issues relating to gender and class. This project examines the witch hunts/trials and early modern visual representations of witches, and what I suggest is an attempt to create docile bodies out of members of society who are deemed unruly, problematic and otherwise 'undesirable'; it is the witch's body that is deemed counternormative. This study demonstrates that it is neighbours and other acquaintances of accused witches who take on the role of the invisible guard of Bentham's Panopticon. As someone trained in the study of English literature and literary theory, my approach is informed by this methodology. It was my specialization in early modern British literature that first exposed me to witch-hunting manuals and tales of the supernatural, and it is for this reason that my research commences with a study of representations of witches and witchcraft in early modern England. From my initial exposure to such materials I proceed to examine the similarities and differences in the cultural significance of the supernatural vis-à-vis women's activities in early modern Italy. The subsequent discussion of visual representations of witches involves a predominance of Germanic artists, as the seminal work on the discernment of witches and the application of punishment, the Malleus Maleficarum, was written in Germany circa 1486. Textual accounts of witch trials such as "A Pitiless Mother" (1616), "The Wonderful Discovery of the Witchcrafts of Margaret and Philippa Flower" (1619), "Magic and Poison: The Trial of Chiaretta and Fedele" (circa 1550), and "The Case of Benvegnuda Pincinella: Medicine Woman or Witch" (1518), and witch-hunting manuals such as the Malleus Maleficarum and Strix, will be put in direct dialogue with visual representations of witches in light of historical discourses pertaining to gender performance and gendered expectations. Issues relating to class will be examined as they pertain to the material conditions of presumed witches. The dominant group in any temporal or geographic location possesses the tools of representation. Therefore, it is not surprising that the physical characteristics, sexual habits and social material conditions attributed to suspected witches are attributes that can be deemed deviant by the ruling class. The research will juxtapose the social material conditions of suspected witches with the guilt, anxiety, and projection of fear that the dominant groups experienced in light of the changing economic landscape of the Renaissance. The shift from feudalism to primitive accumulation and capitalism saw a rise in the number of people living in poverty and therefore an increased dependence upon the goodwill of others. I will discuss the social material conditions of accused witches as informed by what Robyn Wiegman terms a "minoritizing discourse" (210). People of higher economic standing often blamed their social, medical, and/or economic difficulties on the less fortunate, resulting in accusations of witchcraft.
Abstract:
Objective: To investigate the impact of maternity insurance and maternal residence on birth outcomes in a Chinese population. Methods: Secondary data were analyzed from a perinatal cohort study conducted in the Beichen District of the city of Tianjin, China. A total of 2364 pregnant women participated in this study at approximately 12 weeks of gestation, upon registration for prenatal care services. After accounting for missing information on relevant variables, a total of 2309 women with singleton births were included in this analysis. Results: A total of 1190 (51.5%) women reported having maternity insurance, and 629 (27.2%) were rural residents. The abnormal birth outcomes were small for gestational age (SGA; n=217, 9.4%), large for gestational age (LGA; n=248, 10.7%), and birth defects (n=48, 2.1%), including congenital heart defects (n=32, 1.4%). In urban areas, having maternity insurance increased the odds of SGA infants (1.32, 95%CI (0.85, 2.04), NS) but decreased the odds of LGA infants (0.92, 95%CI (0.62, 1.36), NS); it also decreased the odds of birth defects (0.93, 95%CI (0.37, 2.33), NS) and congenital heart defects (0.65, 95%CI (0.21, 1.99), NS) after adjustment for covariates. In contrast to urban areas, having maternity insurance in rural areas reduced the odds of SGA infants (0.60, 95%CI (0.13, 2.73), NS) but increased the odds of LGA infants (2.16, 95%CI (0.92, 5.04), NS), birth defects (2.48, 95%CI (0.70, 8.80), NS), and congenital heart defects (2.18, 95%CI (0.48, 10.00), NS) after adjustment for the same covariates. Similar results were obtained from bootstrap methods, except that the odds ratio of LGA infants in rural areas for maternity insurance was significant (95%CI (1.13, 4.37)); urban residence was significantly related to lower odds of birth defects (95%CI (0.23, 0.89)) and congenital heart defects (95%CI (0.19, 0.91)). Conclusions: Having maternity insurance did have an impact on perinatal outcomes, but the impact differed between women with urban residence and women with rural residence status. However, it is not clear what causes the observed differences. Thus, more studies are needed.
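The approach reported above (adjusted odds ratios with percentile-bootstrap confidence intervals) can be illustrated with a minimal Python sketch. This is only a generic illustration of that kind of analysis, not the study's own code: the column names, covariate set and data frame are hypothetical, and it assumes pandas and statsmodels are available.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    def adjusted_or(df, outcome, exposure, covariates):
        # Logistic regression of a binary outcome (e.g. LGA) on the exposure
        # (e.g. maternity insurance) plus covariates; returns the adjusted OR.
        X = sm.add_constant(df[[exposure] + covariates])
        fit = sm.Logit(df[outcome], X).fit(disp=0)
        return float(np.exp(fit.params[exposure]))

    def bootstrap_or_ci(df, outcome, exposure, covariates, n_boot=2000, seed=0):
        # Percentile bootstrap: resample mothers with replacement and refit.
        rng = np.random.default_rng(seed)
        ors = []
        for _ in range(n_boot):
            idx = rng.integers(0, len(df), size=len(df))
            ors.append(adjusted_or(df.iloc[idx], outcome, exposure, covariates))
        return np.percentile(ors, [2.5, 97.5])

Run separately on urban and rural strata, such a sketch would give stratum-specific adjusted odds ratios and bootstrap intervals of the kind quoted in the results.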
Abstract:
In defending the principle of neutrality, liberals have often appealed to a more general moral principle that forbids coercing persons in the name of reasons those persons themselves cannot reasonably be expected to share. Yet liberals have struggled to articulate a non-arbitrary, non-dogmatic distinction between the reasons that persons can reasonably be expected to share and those they cannot. The reason for this, I argue, is that what it means to share a reason is itself obscure. In this paper I articulate two different conceptions of what it is to share a reason; I call these conceptions foundationalist and constructivist. On the foundationalist view, two people share a reason just in the sense that the same reason applies to each of them independently. On this view, I argue, debates about the reasons we share collapse into debates about the reasons we have, moving us no closer to an adequate defense of neutrality. On the constructivist view, by contrast, sharing reasons is understood as a kind of activity, and the reasons we must share are just those reasons that make this activity possible. I argue that the constructivist conception of sharing reasons yields a better defense of the principle of neutrality.
Abstract:
The last decade has seen growing interest in the problems posed by weak instrumental variables in the econometric literature, that is, situations in which the instrumental variables are weakly correlated with the variable to be instrumented. Indeed, it is well known that when instruments are weak, the distributions of the Student, Wald, likelihood ratio, and Lagrange multiplier statistics are no longer standard and often depend on nuisance parameters. Several empirical studies, notably on models of returns to education [Angrist and Krueger (1991, 1995), Angrist et al. (1999), Bound et al. (1995), Dufour and Taamouti (2007)] and asset pricing (C-CAPM) [Hansen and Singleton (1982, 1983), Stock and Wright (2000)], in which the instrumental variables are weakly correlated with the variable to be instrumented, have shown that using these statistics often leads to unreliable results. One remedy for this problem is the use of identification-robust tests [Anderson and Rubin (1949), Moreira (2002), Kleibergen (2003), Dufour and Taamouti (2007)]. However, there is no econometric literature on the quality of identification-robust procedures when the available instruments are endogenous, or both endogenous and weak. This raises the question of what happens to identification-robust inference procedures when some instrumental variables assumed to be exogenous are in fact not. More precisely, what happens if an invalid instrumental variable is added to a set of valid instruments? Do these procedures behave differently? And if the endogeneity of instrumental variables poses major difficulties for statistical inference, can one propose test procedures that select instruments when they are both strong and valid? Is it possible to propose instrument-selection procedures that remain valid even in the presence of weak identification? This thesis focuses on structural models (simultaneous-equations models) and answers these questions through four essays. The first essay is published in the Journal of Statistical Planning and Inference 138 (2008) 2649-2661. In this essay, we analyze the effects of instrument endogeneity on two identification-robust test statistics: the Anderson and Rubin statistic (AR, 1949) and the Kleibergen statistic (K, 2003), with or without weak instruments. First, when the parameter controlling instrument endogeneity is fixed (does not depend on the sample size), we show that all these procedures are generally consistent against the presence of invalid instruments (that is, they detect the presence of invalid instruments) regardless of instrument quality (strong or weak). We also describe cases where this consistency may not hold, but the asymptotic distribution is modified in a way that could lead to size distortions even in large samples. This includes, in particular, cases where the two-stage least squares estimator remains consistent but the tests are asymptotically invalid. Then, when the instruments are locally exogenous (that is, the endogeneity parameter converges to zero as the sample size increases), we show that these tests converge to noncentral chi-square distributions, whether the instruments are strong or weak.
We also characterize the situations in which the noncentrality parameter is zero and the asymptotic distribution of the statistics remains the same as in the case of valid instruments (despite the presence of invalid instruments). The second essay studies the impact of weak instruments on Durbin-Wu-Hausman (DWH) type specification tests as well as on the Revankar and Hartley (1973) test. We propose a finite-sample and large-sample analysis of the distribution of these tests under the null hypothesis (size) and under the alternative (power), including the cases where identification is deficient or weak (weak instruments). Our finite-sample analysis provides several insights as well as extensions of earlier procedures. Indeed, characterizing the finite-sample distribution of these statistics allows the construction of exact Monte Carlo tests for exogeneity even with non-Gaussian errors. We show that these tests are typically robust to weak instruments (size is controlled). Moreover, we provide a characterization of the power of the tests that clearly exhibits the factors determining power. We show that the tests have no power when all the instruments are weak [similar to Guggenberger (2008)]. However, power exists as long as at least one instrument is strong. The conclusion of Guggenberger (2008) concerns the case where all instruments are weak (a case of minor interest in practice). Our asymptotic theory under weakened assumptions confirms the finite-sample theory. Furthermore, we present a Monte Carlo analysis indicating that: (1) the ordinary least squares estimator is more efficient than two-stage least squares when the instruments are weak and endogeneity is moderate [a conclusion similar to that of Kiviet and Niemczyk (2007)]; (2) pre-test estimators based on exogeneity tests perform very well compared with two-stage least squares. This suggests that the instrumental variables method should be applied only when one is confident of having strong instruments. Hence, the conclusions of Guggenberger (2008) should be qualified and could be misleading. We illustrate our theoretical results through simulation experiments and two empirical applications: the relationship between trade openness and economic growth, and the well-known problem of returns to education. The third essay extends the Wald-type exogeneity test proposed by Dufour (1987) to the cases where the regression errors have a non-normal distribution. We propose a new version of the earlier test that is valid even in the presence of non-Gaussian errors. Unlike the usual exogeneity test procedures (Durbin-Wu-Hausman and Revankar-Hartley tests), the Wald test makes it possible to address a common problem in empirical work, namely testing the partial exogeneity of a subset of variables. We propose two new pre-test estimators based on the Wald test that perform better (in terms of mean squared error) than the usual IV estimator when the instrumental variables are weak and endogeneity is moderate. We also show that this test can serve as an instrumental-variable selection procedure. We illustrate the theoretical results with two empirical applications: the well-known wage equation model [Angrist and Krueger (1991, 1999)] and returns to scale [Nerlove (1963)].
Our results suggest that a mother's education would explain her son's dropping out of school, that output is an endogenous variable in estimating the firm's cost, and that the price of fuel is a valid instrument for output. The fourth essay solves two very important problems in the econometric literature. First, although the original or extended Wald test makes it possible to construct confidence regions and to test linear restrictions on covariances, it assumes that the parameters of the model are identified. When identification is weak (instruments weakly correlated with the variable to be instrumented), this test is in general no longer valid. This essay develops an identification-robust (weak-instrument) inference procedure that allows the construction of confidence regions for the covariance matrix between the regression errors and the (possibly endogenous) explanatory variables. We provide analytical expressions for the confidence regions and characterize the necessary and sufficient conditions under which they are bounded. The proposed procedure remains valid even in small samples and is also asymptotically robust to heteroskedasticity and autocorrelation of the errors. Second, the results are used to develop identification-robust partial exogeneity tests. Monte Carlo simulations indicate that these tests control size and have power even when the instruments are weak. This allows us to propose a valid instrumental-variable selection procedure even when there is an identification problem. The instrument-selection procedure is based on two new pre-test estimators that combine the usual IV estimator and the partial IV estimators. Our simulations show that: (1) just like the ordinary least squares estimator, the partial IV estimators are more efficient than the usual IV estimator when the instruments are weak and endogeneity is moderate; (2) the pre-test estimators perform very well overall compared with the usual IV estimator. We illustrate our theoretical results with two empirical applications: the relationship between trade openness and economic growth, and the returns-to-education model. In the first application, earlier studies concluded that the instruments were not too weak [Dufour and Taamouti (2007)], whereas they are very weak in the second [Bound (1995), Doko and Dufour (2009)]. In line with our theoretical results, we find unbounded confidence regions for the covariance in the case where the instruments are quite weak.
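As a concrete illustration of the identification-robust tests discussed in the first essay, the following Python sketch computes the Anderson-Rubin (1949) statistic for a single structural equation and checks it on simulated data with deliberately weak instruments. It is a minimal sketch under simplifying assumptions (homoskedastic errors, no included exogenous regressors); the data, dimensions and parameter values are invented and this is not the thesis's own code.

    import numpy as np
    from scipy import stats

    def ar_test(y, Y, Z, beta0):
        # Anderson-Rubin F statistic for H0: beta = beta0 in y = Y beta + u,
        # with instrument matrix Z (n x k). The null distribution does not
        # depend on first-stage strength, hence robustness to weak instruments.
        n, k = Z.shape
        u0 = y - Y @ beta0                              # residuals under the null
        Pu = Z @ np.linalg.solve(Z.T @ Z, Z.T @ u0)     # projection of u0 on span(Z)
        ssr_fit = u0 @ Pu                               # part of u0 explained by Z
        ssr_res = u0 @ u0 - ssr_fit
        ar = (ssr_fit / k) / (ssr_res / (n - k))
        return ar, stats.f.sf(ar, k, n - k)

    # Toy simulation: weak first stage (small pi) and moderate endogeneity.
    rng = np.random.default_rng(0)
    n, k = 500, 3
    Z = rng.standard_normal((n, k))
    pi = np.full(k, 0.05)                   # deliberately weak instruments
    v = rng.standard_normal(n)
    Y = Z @ pi + v
    u = 0.5 * v + rng.standard_normal(n)    # endogeneity via the shared shock
    y = 1.0 * Y + u
    print(ar_test(y, Y[:, None], Z, np.array([1.0])))

Inverting such a test over a grid of candidate beta0 values yields an identification-robust confidence region, which can be unbounded when the instruments are very weak, the same phenomenon noted for the covariance confidence regions in the fourth essay.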
Abstract:
Despite numerous studies supporting the idea that children who have experienced their parents' break-up show a higher level of emotional and behavioural difficulties than children from intact families, certain questions remain to be clarified. In particular, the existing empirical data do not lead to precise conclusions as to the exact moment these difficulties appear. Moreover, it is not clear whether these difficulties are associated with the separation itself or with other factors related to the separation. This thesis consists of two empirical articles. The first examines the child's adjustment before and after the separation as a function of sex and of age at the time of separation. The second article presents a study whose objective is to disentangle the importance of parental and contextual factors from that of parental separation in explaining the child's adjustment. The participants come from the Étude Longitudinale du Développement des Enfants du Québec (ÉLDEQ, 1998-2006). At each wave of the ÉLDEQ, a structured interview with the mother was used to assess the child's levels of hyperactivity/impulsivity, anxiety, and physical aggression. During this interview, the mothers also answered questions about the quality of their parenting practices and household income. Finally, a self-administered questionnaire completed by the mother assessed her own symptoms of depression and anxiety. The first study includes 143 children from separated families and 1705 children from intact families. Two subgroups were created according to whether the child experienced the separation between 2 and 4 years of age, or between 4 and 6 years of age. The child's adjustment was assessed at one measurement time before the separation and at two measurement times after the separation. The results of this first study show that, before the separation, children from intact and separated families do not differ significantly in their levels of hyperactivity/impulsivity and anxiety. However, these difficulties become significantly higher among children from separated families after the parents' break-up. Moreover, the level of physical aggression is higher among children of separation regardless of the measurement time. Finally, the differences between the two groups of children do not depend on sex or on age at the time of separation. The second study includes 358 eight-year-old children who experienced their parents' separation and 1065 children of the same age from intact families. After controlling for the child's sex, the results showed that once the contribution of maternal symptoms of depression and anxiety, the quality of parenting practices, and household income to the child's adjustment is taken into account, parental separation is no longer related to the child's levels of anxiety and physical aggression. However, the relationship between parental separation and the child's hyperactivity/impulsivity remains significant. The results presented in the articles are discussed, along with their implications.
Abstract:
The present study brings the fragmented traditions of Trieste's literary culture to bear on the contemporary concerns of world literature in the present era, in which globalization is widely perceived as the predominant historical paradigm of modernity. What I call globalized literature refers to the recasting of the Weltliteratur envisioned by Goethe, and translated as world literature or littérature universelle, by discourses on global culture and post-nationalism. However, when literary studies raise the questions of globalized literature, they face a problem: steering the universal idea inherent in Goethe's paradigm between the Scylla of a relativist, Western internationalism and the Charybdis of an atopic, dehumanized globalism. Scholars of world literature who lean toward the first position gain an institutional footing by working with the implicit assumption that nations are founded on national languages, which underwrites the relationship between world literature and national literatures. The universality of this implicit assumption is refuted by Triestine writing. In this study, I argue that Triestine writing of the early twentieth century acts as a precursor of reflection on the globalized literary culture of the twenty-first century. It has its own economy of meaning, such that it does not fit into literary nationalisms, but neither does it fall into atopic globalism. It is not categorically opposed to national literature, yet it does not allow national traditions to take root. Triestine writers expressed the desire for a sense of unity and belonging, as well as for a critical consciousness that dissolves that desire. They resisted the idealization of these particularisms and never managed to bring their writings together into a unified literary tradition. Consequently, Trieste has often been regarded as a non-place and its literature as an anti-literature. By circumventing the territorial imperatives of the Italian national tradition, as illustrated by the case of Italo Svevo, Triestine writing was subsequently included within the literary and cultural parameters of Mitteleuropa, where its expression was imagined as a microcosm of the supranational plurality of the former Habsburg Empire. However, the projected macrocosm of Trieste is not a unified image, as a globe would be; it is rather a planetary nebula, to use Svevo's image, in which no universalizing idealization can be realized. This study interrogates the image of the city as a microcosm and as a non-place, as it relates to the macrocosm of the atopias of globalization, in order to demonstrate that the writing of Trieste is globalized literature avant la lettre. The unresolved dialectic between making and unmaking literary language and identity through writing animates Trieste's literary culture, and its dynamism contributes to debates on globalization and the questions of culture arising from it. This study of Triestine writing offers critical perspectives on the state of canonical literatures in a world where borders disappear and non-places multiply. The image of the planetary nebula may thus become an archetype for today's globalized world.
Abstract:
According to the deontological conception of epistemic justification, a belief is justified when it is our obligation or duty as rational creatures to believe it. However, this view faces an important objection according to which we cannot have such epistemic obligations since our beliefs are never under our voluntary control. One possible strategy against this argument is to show that we do have voluntary control over some of our beliefs, and that we therefore have epistemic obligations. This is what I call the voluntarist strategy. I examine it and argue that it is not promising. I show how the voluntarist attempts of Carl Ginet and Brian Weatherson fail, and conclude that it would be more fruitful for deontologists to look for a different strategy.
Abstract:
Semantic Web Mining aims at combining the two fast-developing research areas Semantic Web and Web Mining. This survey analyzes the convergence of trends from both areas: Growing numbers of researchers work on improving the results of Web Mining by exploiting semantic structures in the Web, and they use Web Mining techniques for building the Semantic Web. Last but not least, these techniques can be used for mining the Semantic Web itself. The second aim of this paper is to use these concepts to circumscribe what Web space is, what it represents and how it can be represented and analyzed. This is used to sketch the role that Semantic Web Mining and the software agents and human agents involved in it can play in the evolution of Web space.
Abstract:
An introduction to Vim and why I use it. This resource is the precursor to a technical walk-through and code-along using Vim. During the talk I handed round a cheat sheet for Vim, which can be found at http://www.tuxfiles.org/linuxhelp/vimcheat.html You can find full documentation and many more in-depth examples in the Vim documentation: http://vimdoc.sourceforge.net/htmldoc/help.html
Abstract:
An emerging consensus in cognitive science views the biological brain as a hierarchically-organized predictive processing system. This is a system in which higher-order regions are continuously attempting to predict the activity of lower-order regions at a variety of (increasingly abstract) spatial and temporal scales. The brain is thus revealed as a hierarchical prediction machine that is constantly engaged in the effort to predict the flow of information originating from the sensory surfaces. Such a view seems to afford a great deal of explanatory leverage when it comes to a broad swathe of seemingly disparate psychological phenomena (e.g., learning, memory, perception, action, emotion, planning, reason, imagination, and conscious experience). In the most positive case, the predictive processing story seems to provide our first glimpse at what a unified (computationally tractable and neurobiologically plausible) account of human psychology might look like. This obviously marks out one reason why such models should be the focus of current empirical and theoretical attention. Another reason, however, is rooted in the potential of such models to advance the current state-of-the-art in machine intelligence and machine learning. Interestingly, the vision of the brain as a hierarchical prediction machine is one that establishes contact with work that goes under the heading of 'deep learning'. Deep learning systems thus often attempt to make use of predictive processing schemes and (increasingly abstract) generative models as a means of supporting the analysis of large data sets. But are such computational systems sufficient (by themselves) to provide a route to general human-level analytic capabilities? I will argue that they are not, and that closer attention to a broader range of forces and factors (many of which are not confined to the neural realm) may be required to understand what it is that gives human cognition its distinctive (and largely unique) flavour. The vision that emerges is one of 'homomimetic deep learning systems', systems that situate a hierarchically-organized predictive processing core within a larger nexus of developmental, behavioural, symbolic, technological and social influences. Relative to that vision, I suggest that we should see the Web as a form of 'cognitive ecology', one that is as much involved with the transformation of machine intelligence as it is with the progressive reshaping of our own cognitive capabilities.
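To make the 'hierarchical prediction machine' idea concrete, here is a deliberately small Python sketch of one level of predictive coding: a hidden state generates a top-down prediction of the input and is updated by precision-weighted prediction error. The linear generative mapping, the precision value and the step size are illustrative assumptions, not a model taken from the abstract's sources.

    import numpy as np

    rng = np.random.default_rng(1)
    W = 0.5 * rng.standard_normal((8, 4))   # generative mapping: hidden causes -> input
    x = rng.standard_normal(8)              # "sensory" input to be explained
    mu = np.zeros(4)                        # higher-level estimate of the hidden causes
    precision = 1.0                         # confidence assigned to the sensory signal
    lr = 0.05                               # step size for the updates

    for _ in range(200):
        prediction = W @ mu                   # top-down prediction of the input
        error = precision * (x - prediction)  # precision-weighted prediction error
        mu += lr * (W.T @ error - mu)         # reduce the error, with a shrinkage prior on mu

    print("remaining prediction error:", np.linalg.norm(x - W @ mu))

Stacking several such levels, each predicting the one below, gives the hierarchical, increasingly abstract generative models that the abstract links to deep learning.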
Abstract:
This text deals with part of the action-research project "Multifamily groups with adolescent sexual offenders" and focuses on the adolescents' written production during the process. Two texts were involved: the first was a letter addressed to the parents and the second was an evaluation of the multifamily group. Seven adolescents between 14 and 17 years old wrote the texts. Regarding the first text, we discuss the adolescent as responsible for domestic work; the adolescent and his need to receive support and protection; and the adolescent and the recognition of his phase of growth. Regarding the second text, we discuss the adolescent's feelings about sexual abuse; the relationship with the institution that carries out the intervention; and the symbols that express his ambivalence. We comment on the sexual abuse committed by these adolescents from two main standpoints: adolescence seen as a developmental phase, and the role played by the family in guiding this phase.
Abstract:
More and more, museology professionals are confronted with terms such as community, social inequality, social inclusion and development in their everyday work. Whether in conferences, publications or museum programmes, these are increasingly recurrent terms which, to a great extent, reflect the dynamics of a relationship between museology and community development that has been under construction since the late 1960s. Although it is not new, this relationship flourished in the early 1990s and arrives today as an emerging priority within the world of museology. A first glance at the subject reveals that very different approaches and forms of action share in the effort to give museology a role in community development today. In addition, despite its growing popularity, there seem to be some misunderstandings about what work with community development requires and truly signifies, as can be seen in a number of assertions originating from the field of museology. Alongside such a plural environment, discussions and disagreements about the extent to which museology can claim a role in social change also mark its relationship with community development. One is faced, indeed, with a rather polemical and intricate scenario. To a great extent, language barriers hinder the exchange of information on current initiatives and previous experiences, as well as on the development of concepts, approaches and proposals. The lack of better interaction among the groups of museology professionals and social actors who carry out different kinds of work with community development also makes the potential of museology as a resource for development more difficult to see.
Abstract:
Particle size distribution (psd) is one of the most important features of the soil because it affects many of its other properties, and it determines how soil should be managed. To understand the properties of chalk soil, psd analyses should be based on the original material (including carbonates), and not just the acid-resistant fraction. Laser-based methods rather than traditional sedimentation methods are being used increasingly to determine particle size, to reduce the cost of analysis. We give an overview of both approaches and the problems associated with them for analyzing the psd of chalk soil. In particular, we show that it is not appropriate to use the widely adopted 8 µm boundary between the clay and silt size fractions for samples determined by laser to estimate proportions of these size fractions that are equivalent to those based on sedimentation. We present data from field and national-scale surveys of soil derived from chalk in England. Results from both types of survey showed that laser methods tend to over-estimate the clay-size fraction compared with sedimentation for the 8 µm clay/silt boundary, and we suggest reasons for this. For soil derived from chalk, either the sedimentation methods need to be modified or it would be more appropriate to use a 4 µm threshold as an interim solution for laser methods. Correlations between the proportions of sand- and clay-sized fractions, and other properties such as organic matter and volumetric water content, were the opposite of what one would expect for soil dominated by silicate minerals. For water content, this appeared to be due to the predominance of porous chalk fragments rather than quartz grains in the sand-sized fraction, and the abundance of fine (<2 µm) calcite crystals rather than phyllosilicates in the clay-sized fraction. This was confirmed by scanning electron microscope (SEM) analyses. "Of all the rocks with which I am acquainted, there is none whose formation seems to tax the ingenuity of theorists so severely, as the chalk, in whatever respect we may think fit to consider it." Thomas Allan, FRS, Edinburgh 1823, Transactions of the Royal Society of Edinburgh. (C) 2009 Natural Environment Research Council (NERC). Published by Elsevier B.V. All rights reserved.
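The practical effect of moving the clay/silt boundary for laser-derived data can be shown with a small, hypothetical Python sketch: given a cumulative particle size distribution, the clay-size fraction is read off at an 8 µm, 4 µm or 2 µm boundary by interpolation. The size grid and cumulative percentages below are invented for illustration and do not reproduce the surveys' data.

    import numpy as np

    size_um = np.array([0.5, 1, 2, 4, 8, 16, 31, 63, 125, 250])    # bin upper edges (um)
    cum_pct = np.array([4, 9, 16, 24, 35, 52, 68, 83, 94, 100.0])  # % finer than each size

    def fraction_finer_than(threshold_um):
        # Interpolate on log(size), matching the usual log-scaled psd curve.
        return np.interp(np.log(threshold_um), np.log(size_um), cum_pct)

    for threshold in (8, 4, 2):
        print(f"clay-size fraction with a {threshold} um boundary: "
              f"{fraction_finer_than(threshold):.1f} %")

With any plausible cumulative curve, the 8 µm boundary returns a markedly larger "clay" fraction than the 4 µm or 2 µm boundaries, which is the arithmetic behind the recommendation to lower the threshold for laser data on chalk soil.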