934 results for Security classification (Government documents)
Abstract:
Later editions issued in three parts: State government, Government of counties, Government of cities and local agencies. In this library, the latter two are cataloged separately: Government of counties: KFC758.A29C3; Government of cities and local agencies: KFC752.A29A3.
Abstract:
"April 18, 1990."
Abstract:
Promoted as the key policy response to unemployment, the Job Network constitutes an array of interlocking processes that position unemployed people as 'problems' in need of remediation. Unemployment is presented as a primary risk threatening society, and unemployed people are presented as displaying various degrees of riskiness. The Job Seeker Classification Instrument (JSCI) is a 'technology' employed by Centrelink to assess 'risk' and to determine the type of interaction that unemployed people have with the Job Network. In the first instance, we critically examine the development of the JSCI and expose issues that erode its credibility and legitimacy. Second, employing the analytical tools of discourse analysis, we show how the JSCI both assumes and imposes particular subject identities on unemployed people. The purpose of this latter analysis is to illustrate the consequences of the sorts of technologies and interventions used within the Job Network.
Abstract:
Electronic communications devices intended for government or military applications must be rigorously evaluated to ensure that they maintain data confidentiality. High-grade information security evaluations require a detailed analysis of the device's design, to determine how it achieves necessary security functions. In practice, such evaluations are labour-intensive and costly, so there is a strong incentive to find ways to make the process more efficient. In this paper we show how well-known concepts from graph theory can be applied to a device's design to optimise information security evaluations. In particular, we use end-to-end graph traversals to eliminate components that do not need to be evaluated at all, and minimal cutsets to identify the smallest group of components that needs to be evaluated in depth.
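A minimal sketch of the two graph operations the abstract names, using networkx on a made-up component graph (all component names and the toy topology are hypothetical; real evaluations operate on far richer design data):

```python
import networkx as nx

# Hypothetical device design; edges point along possible information flow.
design = nx.DiGraph([
    ("plaintext_in", "crypto_core"),
    ("crypto_core", "ciphertext_out"),
    ("status_led_driver", "status_led"),   # side circuit, no end-to-end path
])
source, sink = "plaintext_in", "ciphertext_out"

# End-to-end traversal: a component needs evaluation only if it lies on some
# path from the classified input to the external output.
relevant = (nx.descendants(design, source) | {source}) \
         & (nx.ancestors(design, sink) | {sink})
print("no evaluation needed:", set(design) - relevant)

# Minimal cutset: the smallest set of components whose in-depth evaluation
# covers every end-to-end path through the design.
print("evaluate in depth:", nx.minimum_node_cut(design, source, sink))
```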
Abstract:
Communications devices for government or military applications must keep data secure, even when their electronic components fail. Combining information flow and risk analyses could make fault-mode evaluations for such devices more efficient and cost-effective.
Abstract:
Very little information or research is available about the operation of Maximum Security Units (MSUs) in Queensland prisons. These units were developed within existing prisons in the early 1980s to deal with the incarceration of prisoners considered to be the worst and highest risk. Drawing on a number of interviews with prison visitors and on published documents and cases, this article examines the purpose and possible shortcomings of MSUs in Queensland in light of the Standard Guidelines for Corrections in Australia (1996).
Abstract:
Conventionally, document classification research focuses on improving the learning capabilities of classifiers. Nevertheless, according to our observation, the effectiveness of classification is limited by the suitability of the document representation. Intuitively, the more features used in a representation, the more comprehensively documents are represented. However, if a representation contains too many irrelevant features, the classifier suffers not only from the curse of high dimensionality but also from overfitting. To address this problem of representation suitability, we present a classifier-independent approach to measuring the effectiveness of document representations. Our approach utilises a labelled document corpus to estimate the distribution of documents in the feature space. By examining documents in this way, we can clearly identify the contributions made by different features toward document classification. Experiments have been performed to show how the effectiveness is evaluated. Our approach can be used as a tool to assist feature selection, dimensionality reduction and document classification.
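The abstract does not give the exact effectiveness measure, so the sketch below substitutes per-feature mutual information with the class label as one classifier-independent way to score a representation on a labelled corpus (toy corpus; scikit-learn assumed):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif

# Invented miniature labelled corpus.
docs = ["stock prices fell sharply", "markets rally on strong earnings",
        "team wins the final match", "coach praises the star player"]
labels = ["finance", "finance", "sport", "sport"]

vec = CountVectorizer()
X = vec.fit_transform(docs)

# Score how much each feature says about the class, with no classifier in
# the loop; low scorers are candidates for removal before training.
scores = mutual_info_classif(X, labels, discrete_features=True)
for term, s in sorted(zip(vec.get_feature_names_out(), scores),
                      key=lambda p: -p[1])[:5]:
    print(f"{term}: {s:.3f}")
```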
Abstract:
This thesis examines the external activities of the European Union conducted in the wider Europe against the backdrop of eastern enlargement. It focuses on the technical aspects of EU diplomacy, using qualitative research methodology to study the programmes and initiatives launched since the year 2000 in the countries lying along the Union’s new border to the east. Drawing on evidence from Ukraine, it hypothesises that the EU is an agent of transformation in the eastern neighbourhood and that this transformation has important implications for the regional order in the post-Soviet space. The thesis investigates the transformational activities undertaken by the EU in Ukraine, with an eye to their strategic implications. It documents and analyses three instances of EU intervention in Ukraine’s internal processes that relate to management of cross-border traffic in the Ukrainian-Russian borderland, restructuring of the country’s energy sector, and conduct of its contentious presidential election in 2004. It is argued that while these interventions have explicitly sought to advance the Union’s security with respect to certain twenty-first century transnational threats, they have at the same time served to confer important strategic advantages on the EU that include giving the bloc greater knowledge and control over developments in Ukraine and that contribute to the dismantling of infrastructural, institutional and other ties between Kiev and the other Soviet successor states, notably Russia. The effect of the European Union’s actions in the region, whether intended or not, has thus been to undermine any competing regional initiatives that cut across its own functions, and thereby to assert itself as the primary integration project in Europe. By showing how technical interventions in the politics, economics and administration of Ukraine can yield important geopolitical dividends, this thesis demonstrates that, in the context of EU external relations, high and low politics are interlinked.
Abstract:
The proliferation of visual display terminals (VDTs) in offices is an international phenomenon. Numerous studies have investigated the health implications, which can be categorised into visual problems, symptoms of musculo-skeletal discomfort, or psychosocial effects. The psychosocial effects are broader and the evidence in this area is mixed. The inconsistent results from the studies of VDT work undertaken so far may reflect several methodological shortcomings. In an attempt to overcome these deficiencies and to broaden the model of inter-relationships, a model was developed to investigate these interactions and the outputs of job satisfaction, stress and ill health. The study was a two-stage, long-term investigation with measures taken before the VDTs were introduced and the same measures taken 12 months after the 'go-live' date. The research was conducted in four offices of the Department of Social Security. The data were analysed for each individual site and, in addition, the total data were used in a path analysis model. Significant positive relationships were found at the pre-implementation stage between musculo-skeletal discomfort, psychosomatic ailments, visual complaints and stress. Job satisfaction was negatively related to visual complaints and musculo-skeletal discomfort. Direct paths were found for age and job level with variety found in the job, and for age with job satisfaction, together with a negative relationship with the office environment. The only job characteristic with a direct path to stress was 'dealing with others'. Similar inter-relationships were found in the post-implementation data. In addition, however, attributes of the computer system, such as screen brightness and glare, were related positively to stress and negatively to job satisfaction. The comparison of the data at the two stages found no significant changes in the users' perceptions of their job characteristics and job satisfaction, but there was a small and significant reduction in the stress measure.
Abstract:
Text classification is essential for narrowing down the number of documents relevant to a particular topic for further perusal, especially when searching through large biomedical databases. Protein-protein interactions are an example of such a topic, with databases devoted specifically to them. This paper proposes a semi-supervised learning algorithm via local learning with class priors (LL-CP) for biomedical text classification, in which unlabeled data points are classified in a vector space based on their proximity to labeled nodes. The algorithm has been evaluated on a corpus of biomedical documents to identify abstracts containing information about protein-protein interactions, with promising results. Experimental results show that LL-CP outperforms traditional semi-supervised learning algorithms such as SVM, and it also performs better than local learning without incorporating class priors.
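LL-CP itself is not spelled out in the abstract; the sketch below is a loose, hypothetical rendering of the stated idea, classifying unlabeled documents by vector-space proximity to labeled nodes with votes reweighted by class priors (invented miniature corpus; scikit-learn and numpy assumed):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

labeled = ["protein binds receptor in vitro",         # PPI abstract
           "gene expression microarray time course"]  # non-PPI abstract
y = np.array([1, 0])
unlabeled = ["kinase interaction with receptor protein"]

vec = TfidfVectorizer().fit(labeled + unlabeled)
priors = np.bincount(y) / len(y)               # class priors from labels
sims = cosine_similarity(vec.transform(unlabeled), vec.transform(labeled))

for s in sims:
    # prior-weighted similarity vote over the classes of nearby labeled nodes
    scores = np.array([(s[y == c]).sum() * priors[c] for c in (0, 1)])
    print("predicted class:", scores.argmax())   # 1 -> PPI-related
```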
Abstract:
We propose a novel framework where an initial classifier is learned by incorporating prior information extracted from an existing sentiment lexicon. Preferences on the expected sentiment labels of those lexicon words are expressed using generalized expectation criteria. Documents classified with high confidence are then used as pseudo-labeled examples for automatic domain-specific feature acquisition. The word-class distributions of such self-learned features are estimated from the pseudo-labeled examples and are used to train another classifier by constraining the model's predictions on unlabeled instances. Experiments on both the movie review data and the multi-domain sentiment dataset show that our approach attains comparable or better performance than existing weakly-supervised sentiment classification methods despite using no labeled documents.
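The generalized-expectation training itself is not reproduced here; the sketch below only illustrates the outer loop the abstract describes, with lexicon-seeded labels and high-confidence predictions reused as pseudo-labeled examples (lexicon, documents and threshold are all invented; scikit-learn assumed):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

lexicon = {"great": 1, "excellent": 1, "awful": 0, "boring": 0}
docs = ["great acting and excellent plot", "awful pacing and boring script",
        "the plot twist works well", "the script drags on forever"]

def lex_label(doc):
    # weak initial label from majority vote of lexicon words in the document
    votes = [lexicon[w] for w in doc.split() if w in lexicon]
    return round(sum(votes) / len(votes)) if votes else None

seed = [(d, lex_label(d)) for d in docs if lex_label(d) is not None]
X_seed, y_seed = [d for d, _ in seed], [l for _, l in seed]

vec = TfidfVectorizer().fit(docs)
clf = LogisticRegression().fit(vec.transform(X_seed), y_seed)

# High-confidence predictions on the remaining documents become
# pseudo-labeled examples used to retrain the classifier.
rest = [d for d in docs if lex_label(d) is None]
proba = clf.predict_proba(vec.transform(rest))
pseudo = [(d, p.argmax()) for d, p in zip(rest, proba) if p.max() > 0.6]
if pseudo:
    clf = LogisticRegression().fit(
        vec.transform(X_seed + [d for d, _ in pseudo]),
        y_seed + [l for _, l in pseudo])
```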
Abstract:
Short text messages, a.k.a. Microposts (e.g. Tweets), have proven to be an effective channel for revealing information about trends and events, ranging from those related to Disaster (e.g. Hurricane Sandy) to those related to Violence (e.g. the Egyptian revolution). Being informed about such events as they occur could be extremely important to authorities and emergency professionals by allowing such parties to respond immediately. In this work we study the problem of topic classification (TC) of Microposts, which aims to automatically classify short messages based on the subject(s) discussed in them. The accurate TC of Microposts, however, is a challenging task since the limited number of tokens in a post often implies a lack of sufficient contextual information. In order to provide contextual information to Microposts, we present and evaluate several graph structures surrounding concepts present in linked knowledge sources (KSs). Traditional TC techniques enrich the content of Microposts with features extracted only from the Microposts' content. In contrast, our approach relies on the generation of different weighted semantic meta-graphs extracted from linked KSs. We introduce a new semantic graph, called the category meta-graph. This novel meta-graph offers a finer-grained categorisation of concepts, providing a set of novel semantic features. Our findings show that such category meta-graph features effectively improve the performance of a topic classifier of Microposts. Furthermore, our goal is also to understand which semantic features contribute to the performance of a topic classifier. For this reason we propose an approach for automatic estimation of the accuracy loss of a topic classifier on new, unseen Microposts. We introduce and evaluate novel topic similarity measures, which capture the similarity between the KS documents and Microposts at a conceptual level, considering the enriched representation of these documents. Extensive evaluation in the context of Emergency Response (ER) and Violence Detection (VD) revealed that our approach outperforms previous approaches that use a single KS without linked data, or Twitter data only, by up to 31.4% in terms of F1 measure. Our main findings indicate that the new category graph contains useful information for TC and achieves comparable results to previously used semantic graphs. Furthermore, our results indicate that the accuracy of a topic classifier can be accurately predicted using the enhanced text representation, outperforming previous approaches that consider content-based similarity measures.
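As a rough illustration of the enrichment step only, the toy sketch below appends knowledge-source categories of spotted concepts to a micropost before classification; the hand-written lookup table merely stands in for the paper's linked-KS category meta-graph (posts, labels and categories are invented; scikit-learn assumed):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

categories = {"sandy": "hurricane disaster", "quake": "earthquake disaster",
              "riot": "civil unrest violence", "gunfire": "weapons violence"}

def enrich(post):
    # append knowledge-source categories of spotted concepts as extra tokens
    extra = [categories[w] for w in post.lower().split() if w in categories]
    return post + " " + " ".join(extra)

posts = ["sandy hits the coast", "riot breaks out downtown"]
labels = ["emergency", "violence"]

vec = TfidfVectorizer().fit([enrich(p) for p in posts])
clf = MultinomialNB().fit(vec.transform([enrich(p) for p in posts]), labels)

# "quake" shares no surface token with the training posts, but its category
# overlaps with the enriched disaster post.
print(clf.predict(vec.transform([enrich("quake shakes the city")])))
```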
Abstract:
Intrusion detection is a critical component of security information systems. The intrusion detection process attempts to detect malicious attacks by examining various data collected during processes on the protected system. This paper examines anomaly-based intrusion detection based on sequences of system calls. The aim is to construct a model that describes normal or acceptable system activity using the classification trees approach. The created database is then used as a basis for distinguishing intrusive activity from legitimate activity using string metric algorithms. The major results of the implemented simulation experiments are presented and discussed.
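The classification-tree model is not detailed in the abstract; the sketch below illustrates only the string-metric side of such an approach, flagging a trace when one of its call windows is far, by edit distance, from every window in a database of normal activity (call names, window size and threshold are illustrative):

```python
def levenshtein(a, b):
    # classic dynamic-programming edit distance between two call windows
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

WIN = 4
normal = ["open", "read", "mmap", "close", "open", "read", "close"]
normal_db = {tuple(normal[i:i + WIN]) for i in range(len(normal) - WIN + 1)}

def is_anomalous(trace, threshold=1):
    windows = (tuple(trace[i:i + WIN]) for i in range(len(trace) - WIN + 1))
    # a trace is flagged if some window is far from every normal window
    return any(min(levenshtein(w, n) for n in normal_db) > threshold
               for w in windows)

print(is_anomalous(["open", "read", "mmap", "close"]))        # False
print(is_anomalous(["open", "execve", "socket", "connect"]))  # True
```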