706 results for e-voting
Abstract:
The speed of information and knowledge has instilled in contemporary society a constant search for the improvement of information processes, with a view to ensuring faster processing and results. In the public sphere, demands evolve in a similar way under the gaze of the citizen voter. The purpose of this research is therefore to provide an overview of the Brazilian electronic voting system, more precisely the Electronic Voting Machine (Urna Eletrônica), tracing it from the conception of the project in the 1990s to the present day, and casting a scientific eye on the communication actions of the Tribunal Superior Eleitoral (TSE) in promoting advertising campaigns to raise voters' awareness of the computerized voting system, supposedly faster and more efficient. For descriptive purposes, the research draws on multiple views of the voting machine's communication: from the body that maintains it, from the politicians directly involved in the competitive election, and from the political consultants active in the behind-the-scenes strategies of electoral campaigns. This diversity of views and positions on the credibility of the system seeks to give the research a macro-level understanding of the impacts of a computerized system in a democratic environment. (AU)
Abstract:
Automatic Term Recognition (ATR) is a fundamental processing step preceding more complex tasks such as semantic search and ontology learning. Of the large number of methodologies available in the literature, only a few are able to handle both single- and multi-word terms. In this paper we present a comparison of five such algorithms and propose a combined approach using a voting mechanism. We evaluated the six approaches on two different corpora and show that the voting algorithm performs best on one corpus (a collection of texts from Wikipedia) and less well on the Genia corpus (a standard life-science corpus). This indicates that the choice and design of corpus have a major impact on the evaluation of term recognition algorithms. Our experiments also showed that single-word terms can be equally important and occupy a fairly large proportion in certain domains. As a result, algorithms that ignore single-word terms may cause problems for tasks built on top of ATR. Effective ATR systems also need to take into account both the unstructured text and the structured aspects, which means information extraction techniques need to be integrated into the term recognition process.
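As an illustrative sketch (not the paper's actual scoring), a voting mechanism over the outputs of several term-recognition algorithms can award each candidate term one vote per algorithm that ranks it highly; the candidate terms and the top-k cutoff below are made-up assumptions.

```python
# Hypothetical vote-based combination of term rankings from several ATR algorithms.
def vote_terms(rankings, top_k=3):
    """Each ranking is a list of candidate terms ordered best-first.
    A candidate earns one vote per algorithm that places it in the top_k."""
    votes = {}
    for ranking in rankings:
        for term in ranking[:top_k]:
            votes[term] = votes.get(term, 0) + 1
    # Sort by vote count (descending), breaking ties alphabetically.
    return sorted(votes, key=lambda t: (-votes[t], t))

# Toy rankings from three imaginary ATR algorithms:
rankings = [
    ["cell line", "gene", "voting"],
    ["gene", "cell line", "protein"],
    ["cell line", "protein", "gene"],
]
combined = vote_terms(rankings)
```

Terms ranked highly by many algorithms rise to the top of the combined list, which is the intuition behind letting the component algorithms "vote".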
Abstract:
The past two decades have witnessed growing political disaffection and a widening mass/elite disjuncture in France, reflected in opinion polls, rising abstentionism, electoral volatility and fragmentation, with sustained voting against incumbent governments. Though the electoral system has preserved the duopoly of the mainstream coalitions, they have suffered loss of public confidence and swings in electoral support. Stable parliamentary majorities conceal a political landscape of assorted anti-system parties and growing support for far right and far left. The picture is paradoxical: the French express alienation from political parties yet relate positively to their political institutions; they berate national politicians but retain strong bonds with those elected locally; they appear increasingly disengaged from politics yet forms of ‘direct democracy’ are finding new vigour. While the electoral, attitudinal and systemic factors reviewed here may not signal a crisis of democracy, they point to serious problems of political representation in contemporary France.
Abstract:
Selecting the best alternative in group decision making is the subject of many recent studies. The most popular method proposed for ranking the alternatives is based on the distance of each alternative to the ideal alternative. The ideal alternative may never exist; hence the ranking results are biased toward the ideal point. The main aim of this study is to calculate a fuzzy ideal point that is more realistic than the crisp ideal point. In addition, Data Envelopment Analysis (DEA) has recently been used to find the optimum weights for ranking the alternatives. This paper proposes a four-stage approach based on DEA in a fuzzy environment to aggregate preference rankings. An application to a preferential voting system shows how the new model can be applied to rank a set of alternatives. Two other examples indicate the superiority of the proposed method over some other suggested methods.
Abstract:
Combining the results of classifiers has shown much promise in machine learning generally. However, published work on combining text categorizers suggests that, for this particular application, improvements in performance are hard to attain. Exploratory research using a simple voting system is presented and discussed in the light of a probabilistic model that was originally developed for safety-critical software. It was found that typical categorization approaches produce predictions which are too similar for combining them to be effective, since they tend to fail on the same records. Further experiments using two less orthodox categorizers are also presented, which suggest that combining text categorizers can be successful provided the essential element of ‘difference’ is considered.
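The key observation that categorizers "fail on the same records" can be made concrete by comparing the observed rate at which two classifiers err jointly with the rate expected if their errors were independent; the error vectors below are invented for illustration, not the paper's data.

```python
def error_overlap(errors_a, errors_b):
    """errors_* are boolean lists: True where that classifier misclassified.
    Returns (observed joint failure rate, rate expected under independence)."""
    n = len(errors_a)
    joint = sum(a and b for a, b in zip(errors_a, errors_b)) / n
    expected = (sum(errors_a) / n) * (sum(errors_b) / n)
    return joint, expected

# Two toy categorizers that fail on largely the same records:
a = [True, True, False, False, True, False, False, False, False, False]
b = [True, True, False, False, False, True, False, False, False, False]
joint, expected = error_overlap(a, b)
```

When the observed joint failure rate exceeds the independence baseline, a majority vote gains little, because the voters are wrong together; this is the 'difference' requirement the abstract points to.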
Abstract:
Electrocardiography (ECG) has recently been proposed as a biometric trait for identification purposes. Intra-individual variations of the ECG might affect identification performance. These variations are mainly due to Heart Rate Variability (HRV). In particular, HRV causes changes in the QT intervals along the ECG waveforms. This work is aimed at analysing the influence of seven QT interval correction methods (based on population models) on the performance of ECG-fiducial-based identification systems. In addition, we have also considered the influence of training-set size, classifier, classifier ensemble, and the number of consecutive heartbeats in a majority voting scheme. The ECG signals used in this study were collected from thirty-nine subjects in the Physionet open-access database. Public-domain software was used for fiducial point detection. Results suggested that QT correction is indeed required to improve performance; however, there is no clear choice among the seven explored approaches for QT correction (identification rate between 0.97 and 0.99). MultiLayer Perceptron and Support Vector Machine classifiers seemed to have better generalization capabilities, in terms of classification performance, than Decision Tree-based classifiers. No strong influence of training-set size or of the number of consecutive heartbeats in the majority voting scheme was observed.
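A majority voting scheme over consecutive heartbeats simply assigns the identity predicted most often across a window of beats. A minimal sketch, with made-up subject labels (the study's own pipeline and classifiers are not reproduced here):

```python
from collections import Counter

def majority_vote(labels):
    """Return the identity predicted most often across consecutive heartbeats."""
    counts = Counter(labels)
    return counts.most_common(1)[0][0]

# Five consecutive beats, one misclassified (hypothetical subject IDs):
beats = ["subj07", "subj07", "subj21", "subj07", "subj07"]
winner = majority_vote(beats)
```

A single misclassified beat is outvoted by the surrounding correct predictions, which is why per-beat errors need not translate into identification errors.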
Abstract:
The problem of non-response to survey questions is usually dealt with through estimation and classification procedures based on the answers to other questions. This document presents the results of applying fuzzy control methods to the vote, one of the variables with the highest non-response rates in opinion polls.
Abstract:
The operation of technical processes requires increasingly advanced supervision and fault diagnostics to improve reliability and safety. This paper gives an introduction to the field of fault detection and diagnostics and provides a short classification of methods. The growing complexity and functional importance of inertial navigation systems (INS) leads to high losses when equipment fails. The paper is devoted to the development of an INS diagnostics system capable of identifying the cause of a malfunction. The practical realization of this system is a software package performing a set of multidimensional information analyses. The project consists of three parts: a subsystem for analysis, a subsystem for data collection, and a universal interface for an open-architecture realization. To improve diagnostics on small samples, new approaches based on voting among pattern recognition algorithms, taking into account correlations between target and input parameters, will be applied. The system is currently under development.
Abstract:
In this work, a new pattern recognition method based on the unification of algebraic and statistical approaches is described. The core of the method is a voting procedure over statistically weighted regularities, which are linear separators in two-dimensional projections of the feature space. The report contains a brief description of the theoretical foundations of the method, a description of its software implementation, and the results of a series of experiments demonstrating its usefulness in practical tasks.
Abstract:
The task of constructing smooth and stable decision rules in logical recognition models is considered. Logical regularities of classes are defined as conjunctions of one-place predicates that determine whether feature values fall within intervals of the real axis. The conjunctions are true on special non-extendable subsets of the reference objects of some class and are optimal. The standard approach to constructing linear decision rules for given sets of logical regularities consists in the realization of voting schemes. The weighting coefficients of the voting procedures are either chosen heuristically or obtained as solutions of a complex optimization task. Modifications of linear decision rules are proposed that are based on the search for maximal estimates of standard objects for their classes and use approximations of logical regularities by smooth sigmoid functions.
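The idea of approximating an interval predicate by smooth sigmoid functions can be sketched as follows: the hard one-place predicate "x lies in [a, b]" is replaced by a product of two sigmoids, giving a differentiable surrogate. The sharpness parameter k and the test values are illustrative choices, not the paper's.

```python
import math

def interval_indicator(x, a, b):
    """Hard one-place predicate: 1 if the feature value lies in [a, b], else 0."""
    return 1.0 if a <= x <= b else 0.0

def smooth_interval(x, a, b, k=10.0):
    """Smooth sigmoid approximation of the same predicate; k controls sharpness.
    As k grows, the product of sigmoids approaches the hard indicator."""
    sig = lambda t: 1.0 / (1.0 + math.exp(-t))
    return sig(k * (x - a)) * sig(k * (b - x))

inside = smooth_interval(5.0, 0.0, 10.0)    # deep inside [0, 10]: close to 1
outside = smooth_interval(15.0, 0.0, 10.0)  # far outside [0, 10]: close to 0
```

The smooth version is differentiable in x (and in the interval endpoints), which is what makes gradient-based optimization of the voting weights and rule parameters tractable.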
Abstract:
The “trial and error” method is fundamental for Master Mind decision algorithms. On the basis of Master Mind games and strategies we consider some data mining methods for tests using students as teachers. Voting, twins, opposite, simulate and observer methods are investigated. For a pure database these combinatorial algorithms are faster than many AI and Master Mind methods. The complexities of these algorithms are compared with basic combinatorial methods in AI. ACM Computing Classification System (1998): F.3.2, G.2.1, H.2.1, H.2.8, I.2.6.
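The trial-and-error loop underlying Master Mind strategies rests on the game's standard feedback: how many guess positions match exactly, and how many further colours are right but misplaced. A minimal sketch of that feedback function (the paper's own voting, twins, opposite, simulate and observer methods are not reproduced here):

```python
def feedback(secret, guess):
    """Master Mind feedback for equal-length code strings:
    (exact-position matches, colour-only matches)."""
    exact = sum(s == g for s, g in zip(secret, guess))
    # Total colour overlap, counting multiplicity, regardless of position:
    common = sum(min(secret.count(c), guess.count(c)) for c in set(secret))
    return exact, common - exact

black, white = feedback("RGBY", "RYGB")
```

Each guess-feedback pair prunes the space of codes still consistent with all answers so far, which is the search structure the compared combinatorial algorithms operate on.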
Abstract:
The departmental elections of March 2015 redrew the French political landscape, setting the new terms of electoral competition in advance of the regional elections of December 2015 and, more critically, the presidential election of April–May 2017. These elections saw the far-right National Front (FN) come top in both rounds only to be outmanoeuvred by the mainstream parties and prevented from winning a single department. As a case study in vote–seat distortion, the elections highlighted a voting system effective in keeping the FN out of executive power but deficient in terms of democratic representation and inadequate as a response to the new tripartite realities of France's changing political landscape.
Abstract:
This thesis studies survival analysis techniques dealing with censoring to produce predictive tools that predict the risk of endovascular aortic aneurysm repair (EVAR) re-intervention. Censoring indicates that some patients do not continue follow-up, so their outcome class is unknown. Existing methods for dealing with censoring have drawbacks and cannot handle the high censoring of the two EVAR datasets collected. Therefore, this thesis presents a new solution to high censoring by modifying an approach that was previously incapable of differentiating between risk groups of aortic complications. Feature selection (FS) becomes complicated with censoring. Most survival FS methods depend on Cox's model; however, machine learning classifiers (MLC) are preferred. Few methods have adopted MLC to perform survival FS, and those cannot be used with high censoring. This thesis proposes two FS methods which use MLC to evaluate features. Both FS methods use the new solution to deal with censoring, and combine factor analysis with a greedy stepwise FS search which allows eliminated features to re-enter the FS process. The first FS method searches for the best neural network configuration and subset of features. The second combines support vector machine, neural network, and K-nearest-neighbour classifiers using simple and weighted majority voting to construct a multiple classifier system (MCS) that improves on the performance of the individual classifiers. It presents a new hybrid FS process that uses the MCS as a wrapper method and merges it with an iterated feature-ranking filter method to further reduce the feature set. The proposed techniques outperformed FS methods based on Cox's model, such as the Akaike and Bayesian information criteria and the least absolute shrinkage and selection operator, in the log-rank test's p-values, sensitivity, and concordance. This indicates that the proposed techniques are more powerful in correctly predicting the risk of re-intervention, enabling doctors to set an appropriate future observation plan for each patient.
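Weighted majority voting, as used in the MCS described above, lets each base classifier's vote count in proportion to a weight (for example, its validation accuracy). A hedged sketch with invented class labels and weights, not the thesis's trained models:

```python
def weighted_majority_vote(predictions, weights):
    """predictions: classifier name -> predicted class (e.g. risk group).
    weights: classifier name -> vote weight (e.g. validation accuracy)."""
    scores = {}
    for name, label in predictions.items():
        scores[label] = scores.get(label, 0.0) + weights[name]
    return max(scores, key=scores.get)

# Hypothetical predictions from the three base classifiers:
preds = {"svm": "high", "ann": "low", "knn": "high"}
wts = {"svm": 0.60, "ann": 0.95, "knn": 0.30}
decision = weighted_majority_vote(preds, wts)
```

Note that a simple (unweighted) majority over these three predictions would return "high", while the weighted vote sides with the single most reliable classifier; this difference is exactly why both simple and weighted voting are worth comparing.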
Abstract:
We collect data about 172 countries: their parliaments, level of corruption, perceptions of corruption of parliament and political parties. We find weak empirical evidence supporting the conclusion that corruption increases as the number of parties increases. To provide a theoretical explanation of this finding we present a simple theoretical model of parliaments formed by parties, which must decide whether to accept or reject a proposal in the presence of a briber, who is interested in having the bill passed. We compute the number of deputies the briber needs to persuade on average in parliaments with different structures described by the number of parties, the voting quota, and the allocation of seats among parties. We find that the average number of seats needed to be bribed decreases as the number of parties increases. Restricting the minimal number of seats a party may have, we show that the average number of seats to be bribed is smaller in parliaments without small parties. Restricting the maximum number of seats a party may have, we find that under simple majority the average number of seats needed to be bribed is smaller for parliaments in which one party has majority, but under qualified majority it hardly changes.
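The paper computes average numbers of deputies to bribe under its own model; as a deliberately simplified stand-in, assume parties vote as blocs, no party initially supports the bill, and the briber buys whole parties so as to minimize the seats bought. The minimal bribed coalition can then be found by brute force over party subsets (seat allocations and quotas below are invented):

```python
from itertools import combinations

def min_seats_to_bribe(seats, quota):
    """Smallest total number of seats, over whole-party coalitions,
    that meets or exceeds the voting quota."""
    best = sum(seats)  # worst case: bribe every party
    for r in range(1, len(seats) + 1):
        for combo in combinations(seats, r):
            if sum(combo) >= quota:
                best = min(best, sum(combo))
    return best

# A 100-seat parliament with a simple-majority quota of 51:
three_parties = min_seats_to_bribe([45, 30, 25], 51)  # bribe the 30- and 25-seat parties
two_parties = min_seats_to_bribe([60, 40], 51)        # must bribe the majority party
```

Even this toy version reproduces the direction of the paper's finding: with more (smaller) parties the briber can assemble a cheaper coalition than when one party holds a majority.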
Abstract:
This dissertation consists of three essays and investigates issues related to the impact of financial restatements on auditor change, shareholder actions, and executive turnover in the post-Sarbanes-Oxley Act (SOX) period. In the first essay, we examined auditor change at 569 restatement firms and 5,605 control firms for 2004. We found that restatement announcements significantly increased the likelihood of auditor resignation and dismissal in the post-SOX period. The second essay examines shareholder voting on auditor ratification in 2006 following restatement announcements by SEC registrants in 2005. The proportion of votes not supporting auditor ratification is low even in the presence of a restatement. However, we find that shareholders are more likely to vote against auditor ratification after a restatement when compared to votes at (a) firms without restatements, or (b) restating firms in the preceding period. The third essay examines the consequences of financial restatements for the turnover of chief financial officers (CFOs). We find that restatement announcements significantly increase the likelihood of CFO turnover, whether the departure is voluntary or forced in nature. This relationship remains significant even after controlling for the effect of CEO turnover.