918 results for election observers
Abstract:
This publication contains all election laws either included in the 2011 Iowa Code or to be included in the 2011 Iowa Code Supplement. The Supplement will contain all enactments from 2011 and earlier years that are made effective in 2011 or on January 1, 2012. Changes in Code language to be included in the 2011 Iowa Code Supplement are marked by highlighting in yellow. Code sections with changes are also highlighted in yellow in the Table of Contents. In previous editions of this publication, some Code sections were not printed in their entirety; only the portions relating to election law were printed. This year all Code sections relating to election law are printed in their entirety, including each Code section's complete history and footnotes. DISCLAIMER: This document is not an official legal publication of the state of Iowa. For the official publication of the Iowa Acts and the Iowa Code, see those publications. (2011 Iowa Code §2B.17)
Abstract:
In this article we propose a model to explain how voters' perceptions of their ideological proximity to a party affect their propensity to vote for that party. We argue that political knowledge plays a crucial moderating role in the relationship between party proximity and voting propensity. It is necessary, however, to distinguish between institutional knowledge (information about the political system) and party knowledge (information about the parties' left-right positions). An analysis of survey data from the 2007 Swiss federal elections supports our main hypothesis that party knowledge enhances the link between party proximity and voting propensity. Institutional knowledge may have additional influence, but clear evidence for this effect was obtained only for propensities to vote for the Swiss People's Party (SVP). Overall, the impact of political knowledge was found to be substantial, even after controlling for the outstanding influence of party identification and other predictors of voting propensities.
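The moderating role described here corresponds to an interaction between proximity and party knowledge in a propensity model. Below is a minimal, hypothetical sketch of such a specification in Python; the simulated data and variable names are illustrative assumptions, not the authors' actual survey variables or estimates:

```python
# Hypothetical illustration of a moderation (interaction) model:
# propensity ~ proximity + party_knowledge + proximity:party_knowledge
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "proximity": rng.uniform(0, 1, n),         # perceived left-right closeness to the party (0-1)
    "party_knowledge": rng.integers(0, 5, n),  # number of correct party placements (0-4)
    "party_id": rng.integers(0, 2, n),         # identifies with the party (control variable)
})
# Simulated outcome: the proximity effect grows with party knowledge
df["propensity"] = (2 + 3 * df.proximity * df.party_knowledge
                    + 2 * df.party_id + rng.normal(0, 1, n))

model = smf.ols("propensity ~ proximity * party_knowledge + party_id", data=df).fit()
# The proximity:party_knowledge coefficient is the moderation effect of interest
print(model.summary().tables[1])
```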
Abstract:
Political participation is often very low in Switzerland, especially among students and young citizens. In the run-up to the Swiss parliamentary election in October 2007, several online tools and campaigns were developed with the aim of increasing not only the level of information about the political programs of parties and candidates, but also the electoral participation of younger citizens. From a practical point of view, this paper describes the development, marketing efforts, distribution and use of two of these tools: the so-called "Parteienkompass" (party compass) and the "myVote" tool, an online voting assistance tool based on an issue-matching system that compares policy preferences between voters and candidates at the individual level. We also have a look at similar tools from the family of Voting Advice Applications (VAAs) in other countries in Western Europe. The paper closes with the results of an evaluation and an outlook on further developments and ongoing projects in Switzerland in the near future.
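The issue-matching core of such a tool can be illustrated with a generic distance-based agreement score. This is a hedged sketch under assumed conventions (answers coded on a -2 to +2 scale, missing items ignored); it is not the actual myVote or smartvote algorithm:

```python
import numpy as np

def match_score(voter_answers, candidate_answers):
    """Percentage agreement between a voter and a candidate over shared issue questions.

    Answers are assumed to be coded on a common scale, e.g. -2 (strongly against)
    to +2 (strongly in favour); unanswered items are np.nan and are ignored.
    """
    v = np.asarray(voter_answers, dtype=float)
    c = np.asarray(candidate_answers, dtype=float)
    mask = ~np.isnan(v) & ~np.isnan(c)
    if not mask.any():
        return np.nan
    max_dist = 4.0  # largest possible disagreement per item on the -2..+2 scale
    dist = np.abs(v[mask] - c[mask]).sum()
    return 100.0 * (1.0 - dist / (max_dist * mask.sum()))

# Hypothetical voter and candidate answer profiles
voter = [2, -1, 0, np.nan, 1]
candidates = {"A": [2, -2, 1, 0, 1], "B": [-2, 2, 0, 1, -1]}
ranking = sorted(((match_score(voter, a), name) for name, a in candidates.items()), reverse=True)
print(ranking)  # candidates ordered by agreement with the voter
```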
Abstract:
"Thou shalt not bear false witness," as we all know. Yet changing one's mind in case of respectable reasons seems to be allowed. Which is good news for politicians, but reduces the effectiveness of prospective voting, i.e. the focus on "the commitments of candidates to take actions that citizens desire to be taken" (Powell 2000: 9). This may be bad news for voters. By comparing pre-election commitments of Swiss members of parliament (MPs) with actual voting behaviour in the lower house of parliament, the following article explores the question how much confidence voters can have in prospective voting and what factors explain (non-)fulfilment of election pledges.
Abstract:
For landline telephone surveys in particular, undercoverage has been a growing problem. However, research regarding the relative contributions of socio-demographic bias and other composition effects is scarce. We propose to address this issue by analyzing an election survey that used a sample drawn from a register-based sampling frame containing basic socio-demographic information, to which telephone numbers were subsequently matched. With respect to the socio-demographic representation of the final sample, we find that difficult-to-match groups are also difficult to contact, while those who cooperate tend to have different characteristics. We find bias due to undercoverage to be of greater magnitude than noncontact bias, while noncooperation bias falls between the two. As for substantive variables, both additional efforts to match missing telephone numbers and the construction of better weights are successful in closing the gap between survey estimates of voting behavior and the true values from the election results.
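One of the remedies mentioned, constructing better weights from the register information, can be sketched as simple post-stratification. The cells, counts and vote shares below are invented for illustration and are not the survey's actual weighting scheme:

```python
import pandas as pd

# Known population shares per socio-demographic cell (from the register-based frame)
population = pd.DataFrame({
    "sex": ["male", "male", "female", "female"],
    "age": ["18-39", "40+", "18-39", "40+"],
    "pop_share": [0.18, 0.32, 0.17, 0.33],
})

# Realised sample (after number matching, contact and cooperation) over-represents older groups
sample = pd.DataFrame({
    "sex": ["male"] * 60 + ["female"] * 80 + ["male"] * 120 + ["female"] * 140,
    "age": ["18-39"] * 60 + ["18-39"] * 80 + ["40+"] * 120 + ["40+"] * 140,
    "voted_for_A": [1] * 30 + [0] * 30 + [1] * 30 + [0] * 50
                 + [1] * 70 + [0] * 50 + [1] * 60 + [0] * 80,
})

# Post-stratification weight = population share / realised sample share per cell
cell = sample.groupby(["sex", "age"]).size().rename("n").reset_index()
cell = cell.merge(population, on=["sex", "age"])
cell["weight"] = cell["pop_share"] / (cell["n"] / len(sample))

sample = sample.merge(cell[["sex", "age", "weight"]], on=["sex", "age"])
unweighted = sample["voted_for_A"].mean()
weighted = (sample["voted_for_A"] * sample["weight"]).sum() / sample["weight"].sum()
print(f"unweighted: {unweighted:.3f}  post-stratified: {weighted:.3f}")
```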
Abstract:
One stream of leadership theory suggests that leaders are evaluated via inferential observer processes that compare the fit of the target to a prototype of an ideal (charismatic) leader. Attributional theories of leadership suggest that evaluations depend on knowledge of past organizational performance, which is attributed to the leader's skills. We develop a novel theory showing how inferential and attributional processes simultaneously explain top-level leader evaluation and, ultimately, leader retention and selection. We argue that observers will mostly rely on attributional mechanisms when performance signals clearly indicate good or poor performance outcomes. However, under conditions of attributional ambiguity (i.e., when performance signals are unclear), observers will mostly rely on inferential processes. In Study 1 we tested our theory in an unconventional context, the U.S. presidential election, and found that the two processes, driven by the leader's charisma and the country's economic performance, interact in predicting whether a leader is selected. Using a business context and an experimental design, in Study 2 we show that CEO charisma and firm performance interact in predicting leader retention, confirming the results of Study 1. Our results suggest that this phenomenon is quite general and can apply to various performance domains.
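The predicted pattern, attribution dominating when performance signals are clear and charisma-based inference dominating under ambiguity, can be expressed as an interaction in a retention model. A hedged sketch with simulated data and hypothetical variable names (not the studies' actual measures or estimates):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 800
df = pd.DataFrame({
    "charisma": rng.normal(0, 1, n),     # standardised charisma rating of the leader
    "performance": rng.normal(0, 1, n),  # standardised past performance signal (signed)
})
# Ambiguity is highest when the performance signal is near zero;
# in this simulation charisma matters most under ambiguity.
ambiguity = 1 - np.abs(df.performance) / np.abs(df.performance).max()
logit = 0.3 + 1.5 * df.performance + 1.2 * df.charisma * ambiguity
df["retained"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Interact charisma with the clarity of the signal (|performance|), keeping the signed main effect
m = smf.logit("retained ~ charisma * np.abs(performance) + performance", data=df).fit(disp=0)
# A negative charisma:np.abs(performance) coefficient indicates that charisma
# matters less as the performance signal becomes clearer.
print(m.params)
```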
Abstract:
The topic of this study is the language of the educational policies of the British Labour party in the General Election manifestos between 1983 and 2005. The twenty-year period studied has been a period of significant changes in world politics and in British politics, especially for the Labour party. The emergence of educational policy as a vote-winner in the manifestos of the nineties has been noteworthy. The aim of the thesis is two-fold: to look at the structure of the political manifesto as an example of genre writing and to analyze the content using the approach of critical discourse analysis. Furthermore, the aim of this study is not to pinpoint policy positions but to look at the image that the Labour Party creates of itself through these manifestos. The analysis of the content is done by a method of close reading, and based on the findings, the methodology for the analysis of the content was created. This study utilized methodological triangulation, which means that the material is analyzed from several methodological aspects. The aspects used in this study are lexical features (collocation, coordination, euphemisms, metaphors and naming), grammatical features (thematic roles, tense, aspect, voice and modal auxiliaries) and rhetoric (Burke, Toulmin and Perelman). From the analysis of the content a generic description is built. By looking at the lexical, grammatical and rhetorical features, a clear change in the language of the Labour Party can be detected. This change is foreshadowed already in the 1992 manifesto but culminates in the 1997 manifesto, which would lead Labour to a landslide victory in the General Election. During this period Labour moved away from its old commitments and into the new sphere of "something for everybody". The pervasiveness of promotional language and market-inspired vocabulary in the sphere of manifesto writing is clear. The use of metaphors seemed to be the tool for creating the image of the party represented through the manifestos. A limited generic description can be constructed from the findings based on the content and structure of the manifestos: in particular, more generic findings such as the use of the exclusive "we", the lack of certain anatomical parts of argument structure, the use of the future tense and the present progressive aspect can shed light on the description of the genre of manifesto writing. While this study is only the beginning, it shows that combining lexical, grammatical and rhetorical features in the study of manifestos is a promising approach.
Abstract:
Evaluation of image quality (IQ) in Computed Tomography (CT) is important to ensure that diagnostic questions are correctly answered, whilst keeping radiation dose to the patient as low as reasonably possible. The assessment of individual aspects of IQ is already a key component of routine quality control of medical x-ray devices. These values, together with standard dose indicators, can be used to derive 'figures of merit' (FOM) that characterise the dose efficiency of CT scanners operating in certain modes. The demand for clinically relevant IQ characterisation has naturally increased with the development of CT technology (detector efficiency, image reconstruction and processing), resulting in the adaptation and evolution of assessment methods. The purpose of this review is to present the spectrum of methods that have been used to characterise image quality in CT: from objective measurements of physical parameters to clinically task-based approaches (i.e. the model observer (MO) approach), including the pure human observer approach. When combined with a dose indicator, a generalised dose-efficiency index can be explored within a framework of system and patient dose optimisation. We focus on the IQ methodologies that are required for dealing with standard reconstruction, but also with iterative reconstruction algorithms. With this concept, the previously used FOMs are presented together with a proposal to update them so that they remain relevant and keep pace with technological progress. The MO, which objectively assesses IQ for clinically relevant tasks, represents the most promising method in terms of radiologist sensitivity performance and is therefore of most relevance in the clinical environment.
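A dose-efficiency figure of merit of the kind referred to here typically combines a task-based detectability index with a dose indicator, for example FOM = d'^2 / CTDIvol. The sketch below is illustrative only: it uses a non-prewhitening matched-filter d' on toy data and assumed noise scaling, not any specific FOM definition from the review:

```python
import numpy as np

def detectability_npw(signal, noise_covariance):
    """Non-prewhitening matched-filter detectability d' for a known signal.

    `signal` is the expected difference image (flattened internally) and
    `noise_covariance` the pixel covariance matrix; both are illustrative
    stand-ins for quantities estimated from phantom measurements.
    """
    s = signal.ravel()
    return float(s @ s / np.sqrt(s @ noise_covariance @ s))

def dose_efficiency_fom(d_prime, ctdi_vol_mgy):
    """Figure of merit: squared detectability per unit dose, d'^2 / CTDIvol."""
    return d_prime ** 2 / ctdi_vol_mgy

# Toy task: a 10 HU low-contrast disk in uncorrelated noise whose variance scales as 1/dose
x = np.arange(16) - 7.5
disk = ((x[:, None] ** 2 + x[None, :] ** 2) < 9).astype(float) * 10.0
for ctdi_vol in (2.0, 4.0, 8.0):                       # mGy
    cov = np.eye(disk.size) * 25.0 * (4.0 / ctdi_vol)  # 5 HU std at 4 mGy, quantum-limited scaling
    d = detectability_npw(disk, cov)
    # For purely quantum-limited noise the FOM is dose-independent, which is why it
    # characterises the scanner/reconstruction rather than a particular protocol.
    print(f"CTDIvol={ctdi_vol:.0f} mGy  d'={d:.1f}  FOM={dose_efficiency_fom(d, ctdi_vol):.1f}")
```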
Abstract:
Computed tomography (CT) is an imaging technique in which interest has been growing quickly since it began to be used in the 1970s. Today, it has become an extensively used modality because of its ability to produce accurate diagnostic images. However, even if a direct benefit to patient healthcare is attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionising radiation on the population. Among those negative effects, one of the major remaining risks is the development of cancers associated with exposure to diagnostic X-ray procedures. In order to ensure that the benefit-risk ratio remains in favour of the patient, it is necessary to make sure that the delivered dose leads to the proper diagnosis without producing unnecessarily high-quality images. This optimisation scheme is already an important concern for adult patients, but it must become an even greater priority when examinations are performed on children or young adults, in particular in follow-up studies which require several CT procedures over the patient's life. Indeed, children and young adults are more sensitive to radiation due to their faster metabolism. In addition, harmful consequences have a higher probability of occurring because of a younger patient's longer life expectancy. The recent introduction of iterative reconstruction algorithms, which were designed to substantially reduce dose, is certainly a major achievement in CT evolution, but it has also created difficulties in the quality assessment of the images produced using those algorithms. The goal of the present work was to propose a strategy to investigate the potential of iterative reconstructions to reduce dose without compromising the ability to answer the diagnostic questions. The major difficulty lies in having at one's disposal a clinically relevant way to estimate image quality. To ensure the choice of pertinent image quality criteria, this work was carried out in continuous close collaboration with radiologists. The work began by tackling the way to characterise image quality in musculo-skeletal examinations.
We focused, in particular, on image noise and spatial resolution behaviours when iterative image reconstruction was used. The analyses of these physical parameters allowed radiologists to adapt their image acquisition and reconstruction protocols while knowing what loss of image quality to expect. This work also dealt with the loss of low-contrast detectability associated with dose reduction, which is a major concern when dealing with patient dose reduction in abdominal investigations. Knowing that alternatives to classical Fourier-space metrics had to be used to assess image quality, we focused on the use of mathematical model observers. Our experimental parameters determined the type of model to use. Ideal model observers were applied to characterise image quality when purely objective results about signal detectability were sought, whereas anthropomorphic model observers were used in a more clinical context, when the results had to be compared with the eye of a radiologist, thus taking advantage of their incorporation of human visual system elements. This work confirmed that the use of model observers makes it possible to assess image quality using a task-based approach, which, in turn, establishes a bridge between medical physicists and radiologists. It also demonstrated that statistical iterative reconstructions have the potential to reduce the delivered dose without impairing the quality of the diagnosis. Among the different types of iterative reconstructions, model-based ones offer the greatest potential, since images produced using this modality can still lead to an accurate diagnosis even when acquired at very low dose. This work has also clarified the role of medical physicists in CT imaging: the standard metrics used in the field remain quite important for assessing unit compliance with legal requirements, but the use of a model observer is the way to go when optimising imaging protocols.
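As an illustration of the anthropomorphic model-observer approach discussed above, here is a hedged sketch of a channelized Hotelling observer; simple Gaussian channels stand in for Gabor or Laguerre-Gauss channel sets, and the data are simulated, so this is not the thesis's actual implementation:

```python
import numpy as np

def gaussian_channels(size, widths=(2.0, 4.0, 8.0)):
    """Radially symmetric channels (simplified stand-ins for Gabor/Laguerre-Gauss sets)."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    r2 = x**2 + y**2
    U = np.stack([np.exp(-r2 / (2 * w**2)).ravel() for w in widths], axis=1)
    return U / np.linalg.norm(U, axis=0)            # (size*size) x n_channels

def cho_dprime(signal_imgs, background_imgs):
    """Channelized Hotelling detectability from signal-present and signal-absent ROIs."""
    size = signal_imgs.shape[1]
    U = gaussian_channels(size)
    vs = signal_imgs.reshape(len(signal_imgs), -1) @ U       # channel outputs, signal present
    vb = background_imgs.reshape(len(background_imgs), -1) @ U
    K = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vb, rowvar=False))
    w = np.linalg.solve(K, vs.mean(0) - vb.mean(0))          # Hotelling template in channel space
    ts, tb = vs @ w, vb @ w                                  # decision variables
    return (ts.mean() - tb.mean()) / np.sqrt(0.5 * (ts.var() + tb.var()))

# Toy data: 200 noisy ROIs with and without a faint low-contrast disk
rng = np.random.default_rng(2)
size, n = 32, 200
yy, xx = np.mgrid[:size, :size] - (size - 1) / 2.0
disk = ((xx**2 + yy**2) < 16).astype(float) * 0.5
noise = lambda: rng.normal(0, 1, (n, size, size))
print("d' =", cho_dprime(disk + noise(), noise()))
```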
Abstract:
First application of compositional data analysis techniques to Australian election data
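Compositional data analysis treats vote shares as parts of a whole and maps them to unconstrained log-ratio coordinates before applying standard multivariate methods. A minimal sketch of the centred log-ratio (CLR) transform on hypothetical party shares (not the paper's actual data):

```python
import numpy as np

def clr(shares):
    """Centred log-ratio transform of a composition (rows sum to 1, all parts > 0)."""
    shares = np.asarray(shares, dtype=float)
    log_s = np.log(shares)
    return log_s - log_s.mean(axis=1, keepdims=True)

# Hypothetical first-preference shares for three parties across four divisions
shares = np.array([
    [0.45, 0.40, 0.15],
    [0.50, 0.35, 0.15],
    [0.38, 0.42, 0.20],
    [0.47, 0.33, 0.20],
])
Z = clr(shares)
print(Z)              # unconstrained coordinates suitable for standard multivariate analysis
print(Z.sum(axis=1))  # CLR rows sum to ~0 by construction
```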