823 results for legacy information systems
Abstract:
Purpose
– Information science has been conceptualized as a partly unreflexive response to developments in information and computer technology, and, most powerfully, as part of the gestalt of the computer. The computer was viewed as an historical accident in the original formulation of the gestalt. An alternative, and timely, approach to understanding, and then dissolving, the gestalt would be to address the motivating technology directly, fully recognizing it as a radical human construction. This paper aims to address these issues.
Design/methodology/approach
– The paper adopts a social epistemological perspective and is concerned with collective, rather than primarily individual, ways of knowing.
Findings
– In the language of discussions in information science, information technology tends to be received as objectively given, autonomously developing, and causing but not itself caused. It has also been characterized as artificial, in the sense of unnatural, and sometimes as threatening. Attitudes to technology are implied rather than explicit, and can appear weak when articulated, corresponding to collective repression.
Research limitations/implications
– Receiving technology as objectively given has an analogy with the Platonist view of mathematical propositions as discovered, in its exclusion of human activity, opening up the possibility of a comparable critique which insists on human agency.
Originality/value
– Apprehensions of information technology have been raised to consciousness, exposing their limitations.
Abstract:
This article draws on qualitative research exploring the concept of public value in the delivery of sport services by the organization Sport England. The research took place against a backdrop of shifting priorities following the award of the 2012 Olympic Games to London. It highlights the difficulties that exist in measuring the qualitative nature of the public value of sport and suggests a need to understand the idea better. Research with organizations involved alongside Sport England in the delivery of sport is described, exploring the potential to create a public value vision, how to measure it, and how to focus public value on delivery beyond the aim of 'sport for sport's sake' and more towards 'sport for the greater good'. The article argues that this represents a game of 'two halves', in which the first half focuses on 2012 and the second half is concerned with its legacy.
Abstract:
Voice over IP (VoIP) has experienced tremendous growth over the last few years and is now widely used by the general population and for business purposes. The security of such VoIP systems is often assumed, creating a false sense of privacy. This paper investigates in detail the leakage of information from Skype, a widely used and protected VoIP application. Experiments have shown that isolated phonemes can be classified and given sentences identified. Using the dynamic time warping (DTW) algorithm, frequently used in speech processing, an accuracy of 60% can be reached. The results can be further improved by choosing specific training data, reaching an accuracy of 83% under specific conditions. Since the initial results are speaker-dependent, an approach involving the Kalman filter is proposed to extract the kernel of all training signals.
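The DTW step the abstract refers to can be sketched as follows. This is a minimal textbook implementation over generic 1-D feature sequences, not the authors' Skype-specific pipeline; the local distance and the feature representation are illustrative assumptions.

```python
from math import inf

def dtw_distance(a, b):
    """Classic dynamic time warping (DTW) between two 1-D feature
    sequences; a smaller cumulative cost means a better alignment."""
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])              # local distance
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```

In a classification setting like the one described, an unknown phoneme recording would be assigned the label of the training sequence with the smallest DTW distance.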
Abstract:
Learning or writing regular expressions to identify instances of a specific concept within text documents with high precision and recall is challenging. It is relatively easy to improve the precision of an initial regular expression by identifying the false positives it covers and tweaking the expression to avoid them. However, modifying the expression to improve recall is difficult, since false negatives can only be identified by manually analyzing all documents, in the absence of any tool to identify the missing instances. We focus on partially automating the discovery of missing instances by soliciting minimal user feedback. We present a technique to identify good generalizations of a regular expression that improve recall while retaining high precision. We empirically demonstrate the effectiveness of the proposed technique compared to existing methods, and show results for a variety of tasks, such as identification of dates, phone numbers, product names, and course numbers, on real-world datasets.
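The precision/recall trade-off that motivates this work can be illustrated with a toy example; the patterns, documents, and ground-truth set below are invented for illustration and are not from the paper.

```python
import re

# Tiny corpus with two true date instances in different formats.
docs = ["Due 2021-03-15", "Shipped 1999/07/04", "Order #A123"]
true_dates = {"2021-03-15", "1999/07/04"}

strict = re.compile(r"\d{4}-\d{2}-\d{2}")         # initial expression: misses 1999/07/04
general = re.compile(r"\d{4}[-/]\d{2}[-/]\d{2}")  # generalized separator class

def matches(pattern):
    """All substrings of the corpus matched by the pattern."""
    return {m.group() for d in docs for m in pattern.finditer(d)}

def recall(found):
    return len(found & true_dates) / len(true_dates)
```

Here the strict pattern reaches only 50% recall, while the generalization recovers both instances without introducing false positives on this corpus; the paper's contribution is finding such generalizations automatically, with minimal user feedback instead of a hand-picked character class.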
Abstract:
There has been much interest in the belief–desire–intention (BDI) agent-based model for developing scalable intelligent systems, e.g. using the AgentSpeak framework. However, reasoning from sensor information in these large-scale systems remains a significant challenge. For example, agents may be faced with information from heterogeneous sources which is uncertain and incomplete, while the sources themselves may be unreliable or conflicting. In order to derive meaningful conclusions, it is important that such information be correctly modelled and combined. In this paper, we choose to model uncertain sensor information in Dempster–Shafer (DS) theory. Unfortunately, as in other uncertainty theories, simple combination strategies in DS theory are often too restrictive (losing valuable information) or too permissive (resulting in ignorance). For this reason, we investigate how a context-dependent strategy originally defined for possibility theory can be adapted to DS theory. In particular, we use the notion of largely partially maximal consistent subsets (LPMCSes) to characterise the context for when to use Dempster’s original rule of combination and for when to resort to an alternative. To guide this process, we identify existing measures of similarity and conflict for finding LPMCSes along with quality of information heuristics to ensure that LPMCSes are formed around high-quality information. We then propose an intelligent sensor model for integrating this information into the AgentSpeak framework which is responsible for applying evidence propagation to construct compatible information, for performing context-dependent combination and for deriving beliefs for revising an agent’s belief base. Finally, we present a power grid scenario inspired by a real-world case study to demonstrate our work.
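Dempster's original rule of combination, which the context-dependent strategy above decides when to apply, can be sketched in a few lines; mass functions are represented here as dictionaries from `frozenset` focal elements to masses, an encoding chosen for illustration.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions over the
    same frame of discernment; normalizes away conflicting mass."""
    combined = {}
    conflict = 0.0
    for A, p in m1.items():
        for B, q in m2.items():
            inter = A & B
            if inter:  # compatible evidence supports the intersection
                combined[inter] = combined.get(inter, 0.0) + p * q
            else:      # empty intersection: conflicting evidence
                conflict += p * q
    if conflict >= 1.0:
        raise ValueError("total conflict: sources fully disagree")
    return {A: v / (1.0 - conflict) for A, v in combined.items()}
```

For example, two sensor reports over the frame {a, b} that both lean towards a reinforce each other after combination. As the abstract notes, this rule behaves poorly under high conflict, which is exactly when the LPMCS-based strategy would resort to an alternative.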
Abstract:
Purpose
– The aim of this article is to present some results from research undertaken into the information behaviour of European Documentation Centre (EDC) users. It reflects on the practices of a group of 234 users of 55 EDCs, covering 21 Member States of the European Union (EU), who use them to access European information.
Design/methodology/approach
– In order to collect the data presented here, five questionnaires were sent to users in all the EDCs in Finland, Ireland, Hungary and Portugal. In the remaining EU countries, five questionnaires were sent to two EDCs chosen at random. The questionnaires were sent by post, following telephone contact with the EDC managers.
Findings
– Factors determining access to information on the European Union, and the frequency of this access, are identified. The information providers most commonly used to access European information, and the information sources considered the most reliable by respondents, are also analysed. Another area of analysis concerns the factors cited by respondents as facilitating access to information on Europe or, conversely, making it more difficult to access. In parallel, the aspects of accessing information on the EU that users value most are also assessed.
Research limitations/implications
– Questionnaires had to be used, as the intention was to cover a very extensive geographical area. However, in opting for closed questions, it is acknowledged that standard responses were obtained with no scope for capturing the individual circumstances of each respondent, making a qualitative approach difficult.
Practical implications
– The results provide an overall picture of certain aspects of the information behaviour of EDC users. They may serve as a starting point for planning training sessions designed to develop the skills required to search for, access, evaluate and apply European information within an academic context. From a broader perspective, they also constitute factors which the European Commission should take into consideration when formulating its information and communication policy.
Originality/value
– This is the first piece of academic research into the EDCs and their users that aimed to cover all Member States of the EU.
Abstract:
The processes of mobilizing land for infrastructure in the public and private domains follow their own legal frameworks and are systematically confronted with the country's poor situation regarding cadastral identification and regularization, which leads to major inefficiencies, sometimes with a very negative impact on overall effectiveness. This project report describes the Ferbritas Cadastre Information System (FBSIC) project and tools, which, in conjunction with other applications, allow the entire life-cycle of land acquisition and cadastre to be managed, including support for field activities with the integration of information collected in the field, the development of multi-criteria analysis information, the monitoring of all information in the exploration stage, and the automated generation of outputs. The benefits are evident at the level of operational efficiency, including tools that enable process integration and standardization of procedures, facilitate analysis and quality control, and maximize performance in the acquisition, maintenance and management of registration and expropriation information (expropriation projects). The implemented system therefore achieves levels of robustness, comprehensiveness, openness, scalability and reliability suitable for a structural platform. The resulting solution, FBSIC, is a fit-for-purpose cadastre information system rooted in the field of railway infrastructure. The integrating nature of FBSIC makes it possible to meet present needs and scale to future services; to collect, maintain, manage and share all information in one common platform, and transform it into knowledge; to interoperate with other platforms; and to increase the accuracy and productivity of business processes related to land property management.
Abstract:
Until recently, hardly anyone could have predicted this course of GIS development. GIS is moving from the desktop to the cloud. Web 2.0 enabled people to input data into the web, and these data are increasingly geolocated. The resulting large volumes of data form what is called "Big Data", which science does not yet fully know how to handle. Various data mining tools are used to try to extract useful information from this Big Data. In our study, we deal with one part of these data: User Generated Geographic Content (UGGC). The Panoramio initiative allows people to upload photos and describe them with tags. These photos are geolocated, meaning they have an exact location on the Earth's surface according to a certain spatial reference system. Using data mining tools, we try to answer whether it is possible to extract land use information from Panoramio photo tags, and to what extent this information is accurate. Finally, we compared different data mining methods in order to distinguish which performs best for this kind of data, which is text. Our answers are quite encouraging: with more than 70% accuracy, we showed that extracting land use information is possible to some extent, and we found the Memory Based Reasoning (MBR) method to be the most suitable for this kind of data in all cases.
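Memory Based Reasoning is essentially nearest-neighbour classification over stored examples. A minimal sketch of the idea on photo-tag data follows; the tag sets, land use labels, and Jaccard similarity measure are illustrative assumptions, not the study's actual features.

```python
from collections import Counter

# Hypothetical labelled examples: (photo tag set, land use label).
training = [
    ({"beach", "sand", "sea"}, "coastal"),
    ({"forest", "trees", "hiking"}, "woodland"),
    ({"church", "square", "cafe"}, "urban"),
]

def classify(tags, k=1):
    """MBR in its simplest form: label a photo's tag set by a majority
    vote among its k most similar stored examples."""
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0
    ranked = sorted(training, key=lambda ex: jaccard(tags, ex[0]), reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]
```

Accuracy in the study's sense would then be the fraction of held-out photos whose predicted land use class matches ground truth.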
Abstract:
Information systems are widespread and used by anyone with a computing device, as well as by corporations and governments. It is often the case that security leaks are introduced during the development of an application. The reasons for these security bugs are multiple, but among them one can easily identify that it is very hard to define and enforce relevant security policies in modern software. This is because modern applications often rely on container sharing and multi-tenancy where, for instance, data can be stored in the same physical space but is logically mapped into different security compartments or data structures. In turn, these security compartments, into which data is classified by security policies, can also be dynamic and depend on runtime data. In this thesis we introduce and develop the novel notion of dependent information flow types, and focus on the problem of ensuring data confidentiality in data-centric software. Dependent information flow types fit within the standard framework of dependent type theory but, unlike usual dependent types, crucially allow the security level of a type, rather than just the structural data type itself, to depend on runtime values. Our dependent function and dependent sum information flow types provide a direct, natural and elegant way to express and enforce fine-grained security policies on programs, namely programs that manipulate structured data types in which the security level of a structure field may depend on values dynamically stored in other fields. The main contribution of this work is an efficient analysis that allows programmers to verify, during the development phase, whether programs have information leaks, that is, whether they protect the confidentiality of the information they manipulate. We also implemented a prototype typechecker, which can be found at http://ctp.di.fct.unl.pt/DIFTprototype/.
Abstract:
[Table of contents] 1. Introduction. 2. Structure (introduction, hierarchy). 3. Processes (general aspects, client flows, activity flows, resource flows, temporal aspects, accounting aspects). 4. Descriptors (qualification, quantification). 5. Indicators (definitions, productivity, relevance, adequacy, efficacy, effectiveness, efficiency, standards). 6. Bibliography.
Abstract:
This thesis examines the coordination of the systems development process in a contemporary software-producing organization. The thesis consists of a series of empirical studies in which the actions, conceptions and artifacts of practitioners are analyzed using a theory-building case study research approach. The three phases of the thesis provide empirical observations on different aspects of systems development. The first phase examines the role of architecture in coordination and cost estimation in a multi-site environment. The second phase involves two studies on the evolving requirement understanding process and how to measure it. The third phase summarizes the first two and concentrates on the role of methods and how practitioners work with them. All phases provide evidence that current systems development method approaches are too naïve in their view of the complexity of the real world. In practice, development is influenced by opportunity and other contingent factors. The systems development process is not coordinated using the phases and tasks defined in methods as a universal mechanism for managing this process, as most method approaches assume. Instead, the studies suggest that managing the systems development process happens through coordinating development activities, using methods as tools. These studies contribute to systems development methods by emphasizing support for communication and collaboration between systems development participants. Methods should not describe development activities and phases at a detailed level, but should include higher-level guidance for practitioners on how to act in different systems development environments.