970 results for Classifying Party Systems
Abstract:
What explains the length of a Member of the European Parliament’s career? Little evidence of careerism has been uncovered in the European Parliament, particularly when compared to studies of legislator tenure in the U.S. Congress. Given the different historical contexts in which these two legislatures developed, it seems reasonable to rule out many of the explanations used to account for increasing careerism in Congress when searching for the influences on legislator tenure in the European Parliament. This paper therefore proposes three potential models of careerism in the European Parliament: an electoral systems model, a party model, and an individual model. While the data necessary to test these models have not been fully compiled, this paper outlines the major hypotheses of each model and details plans for the operationalization of all independent and control variables. These models are not intended to be mutually exclusive alternatives; rather, each explanation is expected to influence each MEP to a varying degree.
Abstract:
There is growing interest in comparing patterns of social and health service development in advanced Asian economies. Most publications concentrate broadly on a range of core social services such as education, housing, social security and health care. Of those focused solely on health, most discuss arrangements in specific countries and territories. Some take a comparative approach but concentrate on presenting and discussing expenditure, resourcing and service-utilization data. This article extends the comparative analysis of advanced Asian health systems, considering the cases of Japan, South Korea, Taiwan, Hong Kong and Singapore. It provides basic background information and delves into concerns common among the world's health systems today, including primary care organization, rationing and cost containment, service quality, and system integration. Conclusions include that problems exist in 'classifying' the five diverse systems; that the systems face common pressures; and that there are considerable opportunities to enhance primary care, service quality and system integration.
Abstract:
The advent of personal communication systems within the last decade has depended upon the use of advanced digital schemes for source and channel coding and for modulation. The inherently digital nature of the communications processing has allowed the convenient incorporation of cryptographic techniques to implement security in these communications systems. There are various security requirements, of both the service provider and the mobile subscriber, which may be provided for in a personal communications system. Such security provisions include the privacy of user data, the authentication of communicating parties, the provision for data integrity, and the provision for both location confidentiality and party anonymity. This thesis investigates the private-key and public-key cryptographic techniques pertinent to the security requirements of personal communication systems, and presents an analysis of the security provisions of Second-Generation personal communication systems. Particular attention has been paid to the properties of the cryptographic protocols employed in current Second-Generation systems. It has been found that certain security-related protocols implemented in the Second-Generation systems have specific weaknesses. A theoretical evaluation of these protocols has been performed using formal analysis techniques, and certain assumptions made during the development of the systems are shown to contribute to the security weaknesses. Various attack scenarios which exploit these protocol weaknesses are presented. The Fiat-Shamir zero-knowledge cryptosystem is presented as an example of how asymmetric-algorithm cryptography may be employed as part of an improved security solution. Various modifications to this cryptosystem have been evaluated, and their critical parameters are shown to be capable of being optimized to suit a particular application. The implementation of such a system using current smart card technology has been evaluated.
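For illustration, here is a minimal Python sketch of one round of the Fiat-Shamir identification scheme named in the abstract. The parameters are toy-sized and insecure; a real deployment would use a large RSA-style modulus and many rounds, and the variable names are only illustrative.

```python
import secrets

def fiat_shamir_round(n, s, v):
    """One round of Fiat-Shamir identification over Z_n.
    Prover's secret: s; public key: v = s^2 mod n, with n = p*q and the
    factorization kept secret. A minimal sketch of the scheme; parameter
    sizes and the number of rounds are omitted for brevity."""
    r = secrets.randbelow(n - 1) + 1              # prover's random nonce
    x = pow(r, 2, n)                              # commitment sent to verifier
    e = secrets.randbelow(2)                      # verifier's challenge bit
    y = (r * pow(s, e, n)) % n                    # prover's response
    return pow(y, 2, n) == (x * pow(v, e, n)) % n # verifier's check

# Toy parameters (insecure sizes, for illustration only):
n = 3233          # 61 * 53
s = 123           # prover's secret
v = pow(s, 2, n)  # public key
assert all(fiat_shamir_round(n, s, v) for _ in range(20))
```

A cheating prover who does not know s can answer at most one of the two possible challenges per round, so k rounds bound the cheating probability by 2^-k.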
The role of third party logistics providers (3PLs) in the adoption of green supply chain initiatives
Abstract:
The increasing importance of environmental sustainability has sharpened the focus on the need for innovative approaches to the purchasing of transport and logistics services. This article points out some of the challenges that purchasers of transport and logistics services, as well as their suppliers in the third party logistics (3PL) industry, are facing. These include the need for closer collaboration between 3PLs and their customers, as well as developing systems for the robust assessment of the environmental sustainability of services. The article is based on several years’ research experience in Ireland, Italy and Sweden.
Abstract:
As the use of 3PL providers has grown rapidly in recent years, 3PL providers play a major role in the logistics industry. Because customer demands are rising and changing, 3PL providers have been driven to invest in IT systems that can meet customer requirements and create competitive advantage. The use of IT systems can help 3PL providers achieve supply chain visibility and enhance supply chain collaboration with business partners. This paper focuses mainly on European and Far East 3PL providers in terms of their current and future IT systems, IT motivators and barriers, and the future supply chain demands addressed by IT systems. The IT systems commonly implemented in both regions are those used to collaborate and share information with supply chain partners. Several common motivations and barriers exist that 3PL providers need to understand. Given the future demands of IT implementation and supply chain collaboration, IT systems such as RFID and integration systems will be a strong focus in the future. Advanced integration systems such as business process management (BPM) are suggested as the next key IT systems in the future logistics industry.
Abstract:
One of the main challenges of classifying clinical data is determining how to handle missing features. Most research favours imputing missing values or discarding records that include missing data, both of which can degrade accuracy when missing values exceed a certain level. In this research we propose a methodology to handle data sets with a large percentage of missing values and with high variability in which particular data are missing. Feature selection is performed by picking variables sequentially in order of maximum correlation with the dependent variable and minimum correlation with variables already selected. Classification models are generated individually for each test case based on its particular feature set and the matching data values available in the training population. The method was applied to real patients' anonymous mental-health data, where the task was to predict the suicide risk judgement clinicians would give for each patient's data, with eleven possible outcome classes: zero to ten, representing no risk to maximum risk. The results compare favourably with alternative methods and have the advantage of ensuring explanations of risk are based only on the data given, not imputed data. This is important for clinical decision support systems using human expertise for modelling and explaining predictions.
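The sequential selection criterion described above resembles a greedy relevance-redundancy trade-off. Below is a minimal Python sketch of that idea; the equal weighting of the two correlation terms and the fixed stopping rule are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def select_features(X, y, k):
    """Greedily pick up to k columns of X: maximize |correlation| with the
    target y while minimizing |correlation| with already-selected columns.
    A sketch of the relevance-minus-redundancy heuristic described above."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < k:
        best, best_score = None, -np.inf
        for j in remaining:
            relevance = abs(np.corrcoef(X[:, j], y)[0, 1])
            redundancy = max((abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                              for s in selected), default=0.0)
            score = relevance - redundancy  # assumed equal weighting
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
        remaining.remove(best)
    return selected
```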
Abstract:
The paper considers a general model of electoral systems combining district-based elections with a compensatory mechanism in order to implement any outcome between strictly majoritarian and purely proportional seat allocation. The model incorporates vote transfer and allows for the application of three different correction formulas. Analysis in a two-party system shows that the dominant party faces a trade-off between its expected seat share and its chance of obtaining a majority. Vote transfer rules are also investigated by focusing on the possibility of manipulation. The model is applied to the 2014 Hungarian parliamentary election. Hypothetical results reveal that the vote transfer rule cannot be evaluated in itself, only together with the share of constituency seats. With an appropriate choice of the latter, the three mechanisms can be made functionally equivalent.
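To make the mechanism concrete, here is a hedged Python sketch of one plausible district-plus-compensation scheme: losers' votes (and, in a winner-compensation variant, the winner's surplus over the runner-up) are transferred to a list tier allocated by d'Hondt. The paper's actual transfer and correction formulas may differ; thresholds and tie-breaking are omitted.

```python
from collections import defaultdict

def dhondt(votes, seats):
    """Allocate `seats` among parties by the d'Hondt divisor method."""
    alloc = defaultdict(int)
    for _ in range(seats):
        winner = max(votes, key=lambda p: votes[p] / (alloc[p] + 1))
        alloc[winner] += 1
    return dict(alloc)

def mixed_compensatory(districts, list_seats, winner_surplus=False):
    """districts: one {party: votes} dict per single-member district.
    Losing parties' votes (optionally plus the winner's surplus) transfer
    to a compensatory list tier. A sketch, not the paper's exact rules."""
    district_seats, transferred = defaultdict(int), defaultdict(int)
    for votes in districts:
        ranked = sorted(votes, key=votes.get, reverse=True)
        district_seats[ranked[0]] += 1
        for party in ranked[1:]:                   # losers' votes transfer
            transferred[party] += votes[party]
        if winner_surplus and len(ranked) > 1:     # surplus over runner-up
            transferred[ranked[0]] += votes[ranked[0]] - votes[ranked[1]] - 1
    list_alloc = dhondt(transferred, list_seats)
    return {p: district_seats.get(p, 0) + list_alloc.get(p, 0)
            for p in set(district_seats) | set(list_alloc)}
```

Raising the share of list seats relative to district seats moves the outcome from majoritarian toward proportional, which is the interpolation the abstract describes.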
Abstract:
Acknowledgements This work was supported by University of Delhi, Department of Science and Technology - Promotion of University Research and Scientific Excellence (DST-PURSE). V.G., S.H. and U.S. gratefully acknowledge the Council for Scientific and Industrial Research (CSIR), University Grants Commission (UGC) and Department of Biotechnology (DBT) for providing research fellowships.
Abstract:
Acknowledgements We gratefully acknowledge the support of BMBF, CoNDyNet, FK. 03SF0472A, of the EIT Climate-KIC project SWIPO, and Nora Molkenthin for illustrating the concept of survivability using penguins. We thank Martin Rohden for providing us with the UK high-voltage transmission grid topology and Yang Tang for very useful discussions. The publication of this article was funded by the Open Access Fund of the Leibniz Association.
Abstract:
In this paper, we describe a decentralized privacy-preserving protocol for securely casting trust ratings in distributed reputation systems. Our protocol allows n participants to cast their votes in a way that preserves the privacy of individual values against both internal and external attacks. The protocol is coupled with an extensive theoretical analysis in which we formally prove that our protocol is resistant to collusion against as many as n-1 corrupted nodes in the semi-honest model. The behavior of our protocol is tested in a real P2P network by measuring its communication delay and processing overhead. The experimental results uncover the advantages of our protocol over previous works in the area; without sacrificing security, our decentralized protocol is shown to be almost one order of magnitude faster than the previous best protocol for providing anonymous feedback.
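The abstract does not spell out the protocol's mechanism, but a standard building block for privately aggregating ratings with resistance to n-1 semi-honest colluders is additive secret sharing over a prime field. The Python sketch below illustrates that idea; the modulus and all names are illustrative and not taken from the paper.

```python
import secrets

P = 2**61 - 1  # public prime modulus (illustrative choice)

def share(value, n):
    """Split `value` into n additive shares mod P; any n-1 shares are
    uniformly random and reveal nothing about the value."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def private_sum(ratings):
    """Each participant i sends its j-th share to participant j; each
    participant publishes only the sum of shares it received. The grand
    total equals the sum of ratings, with individual ratings hidden."""
    n = len(ratings)
    all_shares = [share(r, n) for r in ratings]
    partials = [sum(all_shares[i][j] for i in range(n)) % P
                for j in range(n)]
    return sum(partials) % P

assert private_sum([7, 3, 5]) == 15
```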
Abstract:
Rhythm analysis of written texts has traditionally focused on literary analysis, and mainly on poetry. In this paper we investigate the relevance of rhythmic features for categorizing texts in prosaic form pertaining to different genres. Our contribution is threefold. First, we define a set of rhythmic features for written texts. Second, we extract these features from three corpora, of speeches, essays, and newspaper articles. Third, we perform feature selection by means of statistical analyses and determine a subset of features which efficiently discriminates between the three genres. We find that using as few as eight rhythmic features, documents can be adequately assigned to a given genre with an accuracy of around 80%, significantly higher than the 33% baseline which results from random assignment.
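The paper's exact feature set is not reproduced here; as an illustration, the Python sketch below computes a few plausible rhythm-related features of prose (sentence-length statistics, a crude syllable regularity measure, and punctuation density) of the kind that could feed such a genre classifier.

```python
import re
from statistics import mean, pstdev

def syllables(word):
    """Crude syllable estimate: count vowel groups (an approximation)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def rhythmic_features(text):
    """Illustrative rhythm-related features for prose; stand-ins for the
    paper's (unspecified here) feature set, not a reproduction of it."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    sent_lens = [len(s.split()) for s in sentences]
    syl_per_word = [syllables(w) for s in sentences for w in s.split()]
    return {
        "mean_sentence_len": mean(sent_lens),
        "std_sentence_len": pstdev(sent_lens),
        "mean_syllables_per_word": mean(syl_per_word),
        "std_syllables_per_word": pstdev(syl_per_word),
        "comma_density": text.count(",") / max(1, len(text.split())),
    }
```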
Abstract:
Thesis (Master's)--University of Washington, 2016-08
Abstract:
A primary goal of context-aware systems is delivering the right information at the right place and time to users in order to enable them to make effective decisions and improve their quality of life. There are three key requirements for achieving this goal: determining what information is relevant, personalizing it based on the users’ context (location, preferences, behavioral history, etc.), and delivering it to them in a timely manner without an explicit request. These requirements create a paradigm that we term “Proactive Context-aware Computing”. Most existing context-aware systems fulfill only a subset of these requirements. Many of these systems focus only on personalizing requested information based on users’ current context. Moreover, they are often designed for specific domains. In addition, most existing systems are reactive: users request information and the system delivers it. These systems are not proactive, i.e., they cannot anticipate users’ intent and behavior and act without an explicit request. In order to overcome these limitations, we need to conduct a deeper analysis and enhance our understanding of context-aware systems that are generic, universal, proactive and applicable to a wide variety of domains. To support this dissertation, we explore several directions. Clearly the most significant sources of information about users today are smartphones. A large amount of users’ context can be acquired through them, and they can be used as an effective means of delivering information to users. In addition, social media such as Facebook, Flickr and Foursquare provide a rich and powerful platform to mine users’ interests, preferences and behavioral history. We employ the ubiquity of smartphones and the wealth of information available from social media to address the challenge of building proactive context-aware systems. We have implemented and evaluated several approaches, including some as part of the Rover framework, to achieve the paradigm of Proactive Context-aware Computing. Rover is a context-aware research platform that has been evolving for the last six years. Since location is one of the most important contexts for users, we have developed ‘Locus’, an indoor localization, tracking and navigation system for multi-story buildings. Other important dimensions of users’ context include the activities they are engaged in. To this end, we have developed ‘SenseMe’, a system that leverages the smartphone and its multiple sensors to perform multidimensional context and activity recognition for users. As part of the ‘SenseMe’ project, we also conducted an exploratory study of privacy, trust, risks and other concerns of users with smartphone-based personal sensing systems and applications. To determine what information is relevant to users’ situations, we have developed ‘TellMe’, a system that employs a new, flexible and scalable approach based on Natural Language Processing techniques to perform bootstrapped discovery and ranking of relevant information in context-aware systems. To personalize the relevant information, we have also developed an algorithm and system for mining a broad range of users’ preferences from their social network profiles and activities.
To recommend new information to users based on their past behavior and context history (such as visited locations, activities and time), we have developed a recommender system and an approach for performing multi-dimensional collaborative recommendations using tensor factorization. For timely delivery of personalized and relevant information, it is essential to anticipate and predict users’ behavior. To this end, we have developed a unified infrastructure within the Rover framework and implemented several novel approaches and algorithms that employ various contextual features and state-of-the-art machine learning techniques for building diverse behavioral models of users. Examples of generated models include classifying users’ semantic places and mobility states, predicting their availability for accepting calls on smartphones, and inferring their device charging behavior. Finally, to enable proactivity in context-aware systems, we have also developed a planning framework based on HTN planning. Together, these works provide a major push in the direction of proactive context-aware computing.
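As a rough illustration of the tensor-factorization idea (not the dissertation's actual algorithm), the Python sketch below fits a CP-style decomposition of a user x item x context rating tensor by stochastic gradient descent on observed entries. The rank, learning rate and regularization values are arbitrary choices.

```python
import numpy as np

def cp_factorize(T, rank=8, steps=200, lr=0.01, reg=0.05, seed=0):
    """Fit a CP-style factorization of a user x item x context tensor T
    (np.nan marks unobserved entries) by SGD on the observed entries.
    An illustrative sketch, not the dissertation's algorithm."""
    rng = np.random.default_rng(seed)
    U = rng.normal(scale=0.1, size=(T.shape[0], rank))
    V = rng.normal(scale=0.1, size=(T.shape[1], rank))
    W = rng.normal(scale=0.1, size=(T.shape[2], rank))
    observed = np.argwhere(~np.isnan(T))
    for _ in range(steps):
        for i, j, k in observed:
            ui, vj, wk = U[i].copy(), V[j].copy(), W[k].copy()
            err = T[i, j, k] - np.sum(ui * vj * wk)
            U[i] += lr * (err * vj * wk - reg * ui)  # regularized gradient
            V[j] += lr * (err * ui * wk - reg * vj)  # steps on the three
            W[k] += lr * (err * ui * vj - reg * wk)  # factor matrices
    return U, V, W  # predicted score for (i, j, k): sum(U[i] * V[j] * W[k])
```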
Abstract:
Secure Multi-party Computation (MPC) enables a set of parties to collaboratively compute, using cryptographic protocols, a function over their private data in such a way that the participants do not see each other's data; they see only the final output. Typical MPC examples include statistical computations over joint private data, private set intersection, and auctions. While these applications are examples of monolithic MPC, richer MPC applications move between "normal" (i.e., per-party local) and "secure" (i.e., joint, multi-party secure) modes repeatedly, resulting overall in mixed-mode computations. For example, we might use MPC to implement the role of the dealer in a game of mental poker: the game is divided into rounds of local decision-making (e.g. bidding) and joint interaction (e.g. dealing). Mixed-mode computations are also used to improve performance over monolithic secure computations. Starting with the Fairplay project, several MPC frameworks have been proposed in the last decade to help programmers write MPC applications in a high-level language while the toolchain manages the low-level details. However, these frameworks are either not expressive enough to allow writing mixed-mode applications or lack formal specification and reasoning capabilities, thereby diminishing the parties' trust in such tools and in the programs written using them. Furthermore, none of the frameworks provides a verified toolchain to run the MPC programs, leaving the potential for security holes that can compromise the privacy of parties' data. This dissertation presents language-based techniques to make MPC more practical and trustworthy. First, it presents the design and implementation of a new MPC Domain Specific Language, called Wysteria, for writing rich mixed-mode MPC applications. Wysteria provides several benefits over previous languages, including a conceptual single thread of control, generic support for more than two parties, high-level abstractions for secret shares, and a fully formalized type system and operational semantics. Using Wysteria, we have implemented several MPC applications, including, for the first time, a card dealing application. The dissertation next presents Wys*, an embedding of Wysteria in F*, a full-featured verification-oriented programming language. Wys* improves on Wysteria along three lines: (a) it enables programmers to formally verify the correctness and security properties of their programs (as far as we know, Wys* is the first language to provide verification capabilities for MPC programs); (b) it provides a partially verified toolchain to run MPC programs; and (c) it enables MPC programs to use, with no extra effort, standard language constructs from the host language F*, thereby making it more usable and scalable. Finally, the dissertation develops static analyses that help optimize monolithic MPC programs into mixed-mode MPC programs while providing privacy guarantees similar to those of the monolithic versions.
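For a flavor of the cryptographic building blocks such frameworks compile to (this is illustrative and not Wysteria's implementation), here is a toy two-party secure multiplication using a Beaver triple in Python, with a trusted dealer standing in for the offline phase.

```python
import secrets

P = 2**61 - 1  # public prime modulus (illustrative)

def share2(v):
    """Split v into two additive shares mod P."""
    r = secrets.randbelow(P)
    return r, (v - r) % P

def beaver_multiply(x, y):
    """Toy two-party secure multiplication via a Beaver triple, a standard
    MPC building block. A dealer pre-shares a random triple (a, b, ab);
    the opened values d = x - a and e = y - b reveal nothing about x, y."""
    a, b = secrets.randbelow(P), secrets.randbelow(P)
    a0, a1 = share2(a)
    b0, b1 = share2(b)
    c0, c1 = share2(a * b % P)
    x0, x1 = share2(x)
    y0, y1 = share2(y)
    d = (x0 - a0 + x1 - a1) % P        # opened jointly by both parties
    e = (y0 - b0 + y1 - b1) % P
    z0 = (c0 + d * b0 + e * a0 + d * e) % P  # party 0's share of x*y
    z1 = (c1 + d * b1 + e * a1) % P          # party 1's share of x*y
    return (z0 + z1) % P

assert beaver_multiply(6, 7) == 42
```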
Abstract:
Security defects are common in large software systems because of their size and complexity. Although efficient development processes, testing, and maintenance policies are applied to software systems, a large number of vulnerabilities can remain despite these measures. Some vulnerabilities stay in a system from one release to the next because they cannot be easily reproduced through testing. These vulnerabilities endanger the security of the systems. We propose vulnerability classification and prediction frameworks based on vulnerability reproducibility. The frameworks are effective in identifying the types and locations of vulnerabilities at an earlier stage and in improving the security of the software in subsequent versions (referred to as releases). We expand an existing concept of software bug classification to vulnerability classification (easily reproducible versus hard to reproduce), developing a classification framework that differentiates between these vulnerabilities based on code fixes and textual reports. We then investigate the potential correlations between the vulnerability categories and classical software metrics, as well as other runtime environmental factors of reproducibility, to develop a vulnerability prediction framework. The classification and prediction frameworks help developers adopt corresponding mitigation or elimination actions and develop appropriate test cases. The vulnerability prediction framework also helps security experts focus their effort on the top-ranked vulnerability-prone files. As a result, the frameworks decrease the number of attacks that exploit security vulnerabilities in the next versions of the software. To build the classification and prediction frameworks, different machine learning techniques (C4.5 Decision Tree, Random Forest, Logistic Regression, and Naive Bayes) are employed. The effectiveness of the proposed frameworks is assessed on collected software security defects of Mozilla Firefox.
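As a hedged illustration of the classification step (the dissertation's exact features and pipeline are not reproduced here), the Python sketch below trains a Random Forest, one of the techniques named above, on TF-IDF features of textual vulnerability reports to separate easily reproducible from hard-to-reproduce defects. The example data are invented stand-ins; a real study would use mined Mozilla Firefox security defects.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Toy stand-ins for textual vulnerability reports and their labels
# (1 = easily reproducible, 0 = hard to reproduce).
reports = [
    "null pointer dereference on malformed PNG header",
    "intermittent use-after-free during tab teardown under load",
    "buffer overflow with crafted URL, crashes on every run",
    "race condition in garbage collector, reproduces once in many runs",
]
labels = [1, 0, 1, 0]

# TF-IDF text features feeding a Random Forest classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    RandomForestClassifier(n_estimators=200, random_state=0),
)
scores = cross_val_score(model, reports, labels, cv=2)
print("cross-validated accuracy:", scores.mean())
```

In practice the text features would be combined with the classical software metrics and runtime factors the abstract mentions, and evaluated on far more data than this toy example.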