970 results for Classifying Party Systems
Abstract:
Election forecasting models assume retrospective economic voting and clear mechanisms of accountability. Previous research indeed indicates that incumbent political parties are held accountable for the state of the economy. In this article we develop a ‘hard case’ for the assumptions of election forecasting models. Belgium is a multiparty system with perennial coalition governments. Furthermore, Belgium has two completely segregated party systems (Dutch and French language). Since the prime minister during the period 1974-2011 was always a Dutch-language politician, French-language voters could not even vote for the prime minister, so this cognitive shortcut for establishing political accountability was not available. Results of an analysis of the French-speaking parties (1981-2010) show that even under these conditions of opaque accountability, retrospective economic voting occurs, as election results respond to indicators of GDP and unemployment levels. Party membership figures can be used to model the popularity function in election forecasting.
Abstract:
Electoral researchers are so accustomed to analyzing the choice of the single most preferred party as the left-hand side variable of their models of electoral behavior that they often ignore revealed preference data. Drawing on random utility theory, their models predict electoral behavior at the extensive margin of choice. Since the seminal work of Luce and others on individual choice behavior, however, many social science disciplines (consumer research, labor market research, travel demand, etc.) have extended their inventory of observed preference data with, for instance, multiple paired comparisons, complete or incomplete rankings, and multiple ratings. Eliciting (voter) preferences using these procedures and applying appropriate choice models is known to considerably increase the efficiency of estimates of causal factors in models of (electoral) behavior. In this paper, we demonstrate the efficiency gain from adding additional preference information to first preferences, up to full ranking data. We do so for multi-party systems of different sizes. We use simulation studies as well as empirical data from the 1972 German election study. Comparing the practical considerations for using ranking versus single-preference data yields suggestions for the choice of measurement instruments in different multi-candidate and multi-party settings.
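The efficiency argument above can be illustrated with the standard random-utility extension from a first-choice (multinomial logit) likelihood to a full-ranking (Plackett-Luce, i.e. rank-ordered logit) likelihood: each additional rank contributes another sequential-choice factor, so a ranking observation carries strictly more information about the utilities than a first choice alone. This is a minimal sketch under assumed fixed utilities; the function names and numeric utilities are illustrative, not the paper's estimation setup.

```python
import math

def first_choice_loglik(utilities, choice):
    # Multinomial logit: log-probability that `choice` beats all alternatives.
    denom = sum(math.exp(u) for u in utilities)
    return math.log(math.exp(utilities[choice]) / denom)

def ranking_loglik(utilities, ranking):
    # Plackett-Luce (rank-ordered logit): product of sequential
    # first-choice probabilities over the shrinking choice set.
    loglik = 0.0
    remaining = list(range(len(utilities)))
    for alt in ranking:
        denom = sum(math.exp(utilities[j]) for j in remaining)
        loglik += math.log(math.exp(utilities[alt]) / denom)
        remaining.remove(alt)
    return loglik
```

The ranking likelihood factors as the first-choice likelihood times further sequential-choice terms, which is exactly where the additional estimation efficiency comes from.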
Abstract:
"...The present study begins with the necessary conceptualization of the term, in both its general and particular senses, and on that basis proceeds to its typification, systematization, and specific analysis in Latin America. It then offers a general overview of the application of explicit electoral thresholds in different regions of the world, before analyzing the Colombian case in general and assessing the impact of the implementation of explicit electoral thresholds through Legislative Act 01 of 2003 and their effects on the 2006 national elections."--introduction
Abstract:
Although strategic voting theory predicts that the number of parties will not exceed two in single-member district plurality systems, the observed number of parties often does. Previous research suggests that the reason why people vote for third parties is that they possess inaccurate information about the parties’ relative chances of winning. However, research has yet to determine whether third-party voting persists under conditions of accurate information. In this article, we examine whether possessing accurate information prevents individuals from voting for third-placed parties in the 2005 and 2010 British elections. We find that possessing accurate information does not prevent most individuals from voting for third-placed parties and that many voters possess reasonably accurate information regarding the viability of the parties in their constituencies. These findings suggest that arguments emphasizing levels of voter information as a major explanation for why multiparty systems often emerge in plurality systems are exaggerated.
Abstract:
In consensual (proportional) highly fragmented multiparty settings, political parties have two historical choices to make, or pathways to follow: i) playing a majoritarian role by offering credible candidates for the head of the executive; or ii) playing the median legislator game. Each of these choices has important consequences not only for the party system but also for the government. The purpose of this paper is to investigate the role played by median legislator parties in the coalition management strategies of presidents in comparative perspective. We analyze in depth the Brazilian case, where the Partido do Movimento Democrático Brasileiro (PMDB) has basically functioned as the median legislator party in Congress by blocking the approval of extreme policies, both on the left and on the right. Based on an expert survey in Latin America, we built an index of Pmdbismo and identified a positive correlation between partisan fragmentation and median legislator parties. In addition, we investigate the effect of having a median legislator party in the governing coalition. We found that coalition management is cheaper and less difficult for the government when the median legislator party is on board.
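The notion of a median legislator party used above has a simple operational form: order parties on a left-right dimension and find the party holding the median seat. This is an illustrative sketch only; the tuple format, ideology scores, and seat counts are hypothetical, not data from the paper's expert survey.

```python
def median_legislator_party(parties):
    """Return the name of the party holding the median legislator.

    `parties` is a list of (name, ideology_score, seats) tuples;
    the representation and scores are hypothetical illustrations.
    """
    total_seats = sum(seats for _, _, seats in parties)
    cumulative = 0
    # Sweep parties from left to right until the median seat is reached.
    for name, _, seats in sorted(parties, key=lambda p: p[1]):
        cumulative += seats
        if cumulative >= (total_seats + 1) / 2:
            return name
```

With a fragmented legislature, the centrist party typically captures the median seat, which is what gives it its pivotal bargaining position in coalition management.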
Abstract:
This paper presents an Airborne Systems Laboratory for Automation Research. The Airborne Systems Laboratory (ASL) is a Cessna 172 aircraft that has been specially modified and equipped by ARCAA specifically for research in future aircraft automation technologies, including Unmanned Airborne Systems (UAS). This capability has been developed over a long period of time, initially through the hire of aircraft, and finally through the purchase and modification of a dedicated flight-testing capability. The ASL has been equipped with a payload system that includes the provision of secure mounting, power, aircraft state data, flight management system and real-time subsystem. Finally, this system has been deployed in a cost effective platform allowing real-world flight-testing on a range of projects.
Abstract:
We present a novel approach for preprocessing systems of polynomial equations via graph partitioning. The variable-sharing graph of a system of polynomial equations is defined. If such a graph is disconnected, then the corresponding system of equations can be split into smaller ones that can be solved individually. This can provide a tremendous speed-up in computing the solution to the system, but is unlikely to occur either randomly or in applications. However, by deleting certain vertices of the graph, the variable-sharing graph can be disconnected in a balanced fashion, and in turn the system of polynomial equations can be separated into smaller systems of near-equal sizes. In graph theory terms, this process is equivalent to finding balanced vertex partitions with minimum-weight vertex separators. The techniques for finding these vertex partitions are discussed, and experiments are performed to evaluate their practicality for general graphs and systems of polynomial equations. Applications of this approach in algebraic cryptanalysis of symmetric ciphers are presented: for the QUAD family of stream ciphers, we show how a malicious party can manufacture conforming systems that can be easily broken. For the stream ciphers Bivium and Trivium, we achieve significant speedups in algebraic attacks against them, mainly in a partial key guess scenario. In each of these cases, the systems of polynomial equations involved are well-suited to our graph partitioning method. These results may open a new avenue for evaluating the security of symmetric ciphers against algebraic attacks.
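The disconnected-graph case above can be sketched directly: build the variable-sharing graph over the equations (two equations are adjacent when they mention a common variable) and split the system along its connected components. This is a minimal illustration, assuming each equation is represented simply by the set of variables it mentions; the representation and function name are hypothetical, not the paper's implementation.

```python
from collections import defaultdict

def split_by_shared_variables(system):
    """Split a polynomial system into independently solvable subsystems.

    `system` is a list of equations, each given as the set of variables
    it mentions (a hypothetical lightweight representation).
    Returns lists of equation indices, one per connected component of
    the variable-sharing graph.
    """
    adj = defaultdict(set)           # adjacency of the variable-sharing graph
    var_to_eqs = defaultdict(list)   # which equations mention each variable
    for i, eq_vars in enumerate(system):
        for v in eq_vars:
            var_to_eqs[v].append(i)
    for eqs in var_to_eqs.values():
        for a in eqs:
            for b in eqs:
                if a != b:
                    adj[a].add(b)
    # Connected components correspond to independent subsystems.
    seen, components = set(), []
    for i in range(len(system)):
        if i in seen:
            continue
        stack, comp = [i], []
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            comp.append(n)
            stack.extend(adj[n] - seen)
        components.append(sorted(comp))
    return components
```

Each returned component can be handed to the solver separately; the balanced-separator technique in the paper generalizes this to graphs that are connected but become disconnected after removing a small vertex separator.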
Abstract:
This thesis conceptualises Use for IS (Information Systems) success. While Use in this study describes the extent to which an IS is incorporated into the user’s processes or tasks, success of an IS is the measure of the degree to which the person using the system is better off. For IS success, the conceptualisation of Use offers new perspectives on describing and measuring Use. We test the philosophies of the conceptualisation using empirical evidence in an Enterprise Systems (ES) context. Results from the empirical analysis contribute insights to the existing body of knowledge on the role of Use and demonstrate Use as an important factor and measure of IS success. System Use is a central theme in IS research. For instance, Use is regarded as an important dimension of IS success. Despite its recognition, the Use dimension of IS success reportedly suffers from an all too simplistic definition, misconception, poor specification of its complex nature, and an inadequacy of measurement approaches (Bokhari 2005; DeLone and McLean 2003; Zigurs 1993). Given the above, Burton-Jones and Straub (2006) urge scholars to revisit the concept of system Use, consider a stronger theoretical treatment, and submit the construct to further validation in its intended nomological net. On those considerations, this study re-conceptualises Use for IS success. The new conceptualisation adopts a work-process system-centric lens and draws upon the characteristics of modern system types, key user groups and their information needs, and the incorporation of IS in work processes. With these characteristics, the definition of Use and how it may be measured is systematically established. Use is conceptualised as a second-order measurement construct determined by three sub-dimensions: attitude of its users, depth, and amount of Use. The construct is positioned in a modified IS success research model, in an attempt to demonstrate its central role in determining IS success in an ES setting. 
A two-stage mixed-methods research design—incorporating a sequential explanatory strategy—was adopted to collect empirical data and to test the research model. The first empirical investigation involved an experiment and a survey of ES end users at a leading tertiary education institute in Australia. The second, a qualitative investigation, involved a series of interviews with real-world operational managers in large Indian private-sector companies to canvass their day-to-day experiences with ES. The research strategy adopted has a stronger quantitative leaning. The survey analysis results demonstrate the aptness of Use as an antecedent and a consequence of IS success, and furthermore, as a mediator between the quality of IS and the impacts of IS on individuals. Qualitative data analysis on the other hand, is used to derive a framework for classifying the diversity of ES Use behaviour. The qualitative results establish that workers Use IS in their context to orientate, negotiate, or innovate. The implications are twofold. For research, this study contributes to cumulative IS success knowledge an approach for defining, contextualising, measuring, and validating Use. For practice, research findings not only provide insights for educators when incorporating ES for higher education, but also demonstrate how operational managers incorporate ES into their work practices. Research findings leave the way open for future, larger-scale research into how industry practitioners interact with an ES to complete their work in varied organisational environments.
Abstract:
This paper proposes a model-based technique for lowering the entrance barrier for service providers to register services with a marketplace broker, such that the service is rapidly configured to utilize the broker's local service delivery management components. Specifically, it uses process modeling to support the execution steps of a service and shows how service delivery functions (e.g. payment points) "local" to a service broker can be correctly configured into the process model. By formalizing the different operations in a service delivery function (like payment or settlement) and their allowable execution sequences (full payments must follow partial payments), including cross-function dependencies, it shows how, through tool support, the non-technical user can quickly configure service delivery functions in a consistent and complete way.
Abstract:
Unmanned Aircraft Systems (UAS) are one of a number of emerging aviation sectors. Such new aviation concepts present a significant challenge to National Aviation Authorities (NAAs) charged with ensuring the safety of their operation within the existing airspace system. There is significant heritage in the existing body of aviation safety regulations for Conventionally Piloted Aircraft (CPA). It can be argued that the promulgation of these regulations has delivered a level of safety tolerable to society, thus justifying the “default position” of applying these same standards, regulations and regulatory structures to emerging aviation concepts such as UAS. An example of this is the proposed “1309” regulation for UAS, which is based on the 1309 regulation for CPA. However, the absence of a pilot on-board an unmanned aircraft creates a fundamentally different risk paradigm to that of CPA. An appreciation of these differences is essential to the justification of the “default position” and in turn, to ensure the development of effective safety standards and regulations for UAS. This paper explores the suitability of the proposed “1309” regulation for UAS. A detailed review of the proposed regulation is provided and a number of key assumptions are identified and discussed. A high-level model characterising the expected number of third party fatalities on the ground is then used to determine the impact of these assumptions. The results clearly show that the “one size fits all” approach to the definition of 1309 regulations for UAS, which mandates equipment design and installation requirements independent of where the UAS is to be operated, will not lead to an effective management of the risks.
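The abstract's high-level model of expected third-party ground fatalities can be sketched as a first-order product of failure exposure and people at risk. The parameter names and functional form below are illustrative assumptions, not the paper's exact formulation; the point is only that the expected-fatality figure scales with where the UAS operates (population density), which is why a location-independent "one size fits all" 1309 requirement is a poor fit.

```python
def expected_ground_fatalities(failure_rate_per_hr, flight_hours,
                               lethal_area_m2, population_density_per_m2,
                               p_fatality_given_impact=1.0):
    # First-order model: expected number of ground impacts times the
    # expected number of people within the lethal impact area.
    expected_impacts = failure_rate_per_hr * flight_hours
    exposed_people = lethal_area_m2 * population_density_per_m2
    return expected_impacts * exposed_people * p_fatality_given_impact
```

Under this sketch, the same airframe flown over an unpopulated area (density near zero) poses orders of magnitude less third-party risk than over a city, even with identical equipment, which is the asymmetry the "one size fits all" regulation ignores.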
Abstract:
This paper develops a framework for classifying term dependencies in query expansion with respect to the role terms play in structural linguistic associations. The framework is used to classify and compare the query expansion terms produced by the unigram and positional relevance models. As the unigram relevance model does not explicitly model term dependencies in its estimation process it is often thought to ignore dependencies that exist between words in natural language. The framework presented in this paper is underpinned by two types of linguistic association, namely syntagmatic and paradigmatic associations. It was found that syntagmatic associations were a more prevalent form of linguistic association used in query expansion. Paradoxically, it was the unigram model that exhibited this association more than the positional relevance model. This surprising finding has two potential implications for information retrieval models: (1) if linguistic associations underpin query expansion, then a probabilistic term dependence assumption based on position is inadequate for capturing them; (2) the unigram relevance model captures more term dependency information than its underlying theoretical model suggests, so its normative position as a baseline that ignores term dependencies should perhaps be reviewed.
Abstract:
The briefing paper was commissioned by the Council of Australian University Librarians (CAUL) to examine the current picture and evolving role of electronic textbooks (eTextbooks) and third party eLearning products in the academic arena. The study reviews industry trends, identifies the major players and considers the different stakeholder perspectives of eTextbook adoption. Within the context of learning and teaching in the digital age, specific areas of research, policy and practice are highlighted to consider the implications that eTextbooks might have for universities in general and for university libraries in particular. An environmental scan focused on the analysis of current developments and the anticipated future directions of digital learning resources in Australia, as well as in other major English speaking countries such as the United Kingdom and the United States. This research guided the development of key interview questions aimed at examining, at a deeper level, diverse stakeholder perspectives about the roles university libraries can play in the adoption of digital learning content.
Abstract:
Recent research has proposed Neo-Piagetian theory as a useful way of describing the cognitive development of novice programmers. Neo-Piagetian theory may also be a useful way to classify materials used in learning and assessment. If Neo-Piagetian coding of learning resources is to be useful, then it is important that practitioners can learn it and apply it reliably. We describe the design of an interactive web-based tutorial for Neo-Piagetian categorization of assessment tasks. We also report an evaluation of the tutorial's effectiveness, in which twenty computer science educators participated. The average classification accuracy of the participants on each of the three Neo-Piagetian stages was 85%, 71% and 78%. Participants also rated their agreement with the expert classifications, and indicated high agreement (91%, 83% and 91% across the three Neo-Piagetian stages). Self-rated confidence in applying Neo-Piagetian theory to classifying programming questions was 29% before the tutorial and 75% after. Our key contribution is the demonstration of the feasibility of the Neo-Piagetian approach to classifying assessment materials, by demonstrating that it is learnable and can be applied reliably by a group of educators. Our tutorial is freely available as a community resource.