24 results for Computer and Information Sciences
in Repositório Institucional UNESP - Universidade Estadual Paulista "Julio de Mesquita Filho"
Abstract:
Medical images are private to the doctor and patient, and digital medical images should be protected against unauthorized viewers. One way to protect them is to use cryptography to encrypt the images. This paper proposes a method for encrypting medical images with a traditional symmetric cryptosystem, using biometrics to protect the cryptographic key. Both the encrypted image and the cryptographic key can be transmitted securely over public networks, and only the person whose biometric information was used to protect the key can decrypt the medical image. © Springer Science+Business Media B.V. 2008.
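The scheme described (symmetric encryption of the image, with the key itself protected by biometrics) can be illustrated with a minimal Python sketch using the `cryptography` package. The biometric binding below is a toy stand-in that hashes a template into a key-encryption key; `image_bytes` and `biometric_template` are assumed names, and a real system would need a fuzzy extractor, since two readings of the same biometric never match exactly.

```python
# Minimal sketch: encrypt a medical image with a symmetric cipher and
# wrap the image key with a key derived from biometric data.
# Toy illustration only -- real biometric key binding needs a fuzzy
# extractor, because repeated biometric readings are noisy.
import base64
import hashlib
from cryptography.fernet import Fernet

def encrypt_image(image_bytes: bytes, biometric_template: bytes):
    image_key = Fernet.generate_key()                # random symmetric key
    ciphertext = Fernet(image_key).encrypt(image_bytes)

    # Derive a key-encryption key from the biometric template (toy step).
    kek = base64.urlsafe_b64encode(hashlib.sha256(biometric_template).digest())
    wrapped_key = Fernet(kek).encrypt(image_key)

    # Both values can travel over a public network; only the owner of the
    # biometric template can unwrap the image key.
    return ciphertext, wrapped_key

def decrypt_image(ciphertext: bytes, wrapped_key: bytes,
                  biometric_template: bytes) -> bytes:
    kek = base64.urlsafe_b64encode(hashlib.sha256(biometric_template).digest())
    image_key = Fernet(kek).decrypt(wrapped_key)
    return Fernet(image_key).decrypt(ciphertext)
```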
Abstract:
This paper reports research that evaluates the potential and the effects of using annotated paraconsistent logic in automatic indexing. This logic attempts to deal with contradictions and is concerned with studying and developing inconsistency-tolerant systems of logic. Because it is flexible and contains logical states that go beyond the dichotomy of yes and no, it permits the hypothesis that indexing results could be better than those obtained by traditional methods. Interactions between different disciplines, such as information retrieval, automatic indexing, information visualization, and nonclassical logics, were considered in this research. From the methodological point of view, an algorithm for the treatment of uncertainty and imprecision, developed under paraconsistent logic, was used to modify the values of the weights assigned to the indexing terms of the text collections. The tests were performed on an information visualization system named Projection Explorer (PEx), created at the Institute of Mathematics and Computer Science (ICMC - USP Sao Carlos), whose source code is available. PEx uses the traditional vector space model to represent the documents of a collection. The results were evaluated by criteria built into the visualization system itself and demonstrated measurable gains in the quality of the displays, confirming the hypothesis that the use of the para-analyser, under the conditions of the experiment, can generate more effective clusters of similar documents. This point merits attention, since the constitution of more significant clusters can be used to enhance information indexing and retrieval. It can be argued that the adoption of non-dichotomous (non-exclusive) parameters provides new possibilities for relating similar information.
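For context, the para-analyser of annotated paraconsistent logic maps a pair of evidence values, favourable (μ) and unfavourable (λ), to a degree of certainty μ − λ and a degree of contradiction μ + λ − 1. The sketch below shows how such an analyser could adjust indexing-term weights; the adjustment rule itself is an assumption for illustration, not the paper's algorithm.

```python
# Minimal sketch of a para-analyser adjusting an indexing-term weight.
# mu  = favourable evidence, lam = unfavourable evidence, both in [0, 1].

def para_analyse(mu: float, lam: float):
    certainty = mu - lam          # degree of certainty, in [-1, 1]
    contradiction = mu + lam - 1  # degree of contradiction, in [-1, 1]
    return certainty, contradiction

def adjust_weight(weight: float, mu: float, lam: float) -> float:
    certainty, contradiction = para_analyse(mu, lam)
    # Strengthen terms supported by consistent evidence, damp terms whose
    # evidence is contradictory (hypothetical rule, for illustration only).
    return weight * max(0.0, certainty) * (1.0 - abs(contradiction))

# Example: a term with strong favourable and weak unfavourable evidence.
print(adjust_weight(0.5, mu=0.9, lam=0.2))   # 0.315
```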
Abstract:
MODSI is a multi-model tool for information systems modeling. A modeling process in MODSI can be driven according to three different approaches: informal, semi-formal, and formal. The MODSI tool is therefore based on the combined use of these three modeling approaches. It can be employed at two different levels: the meta-modeling of a method and the modeling of an information system. In this paper we begin by presenting the different types of modeling and analyzing their particular features. We then introduce the meta-model defined in our tool, as well as the tool's functional architecture. Finally, we describe and illustrate the various levels at which the tool can be used.
Abstract:
The aggregation theory of mathematical programming is used to study decentralization in convex programming models. A two-level organization is considered, and an aggregation-disaggregation scheme is applied to such a divisionally organized enterprise. In contrast to the known aggregation techniques, in which the decision variables/production plans are aggregated, it is proposed to aggregate the resources allocated by the central planning department among the divisions. This approach results in a decomposition procedure in which the central unit has no optimization problem to solve and need only average local information provided by the divisions.
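In outline, and under assumed notation rather than the paper's own, the scheme can be sketched as follows: each division i solves its local convex program for a given resource allocation b_i and reports the dual price u_i of its resource constraint, and the center merely averages these prices to update the allocation.

```latex
% Sketch under assumed notation (not the paper's own symbols).
% Division i receives allocation b_i and solves a local convex program,
% reporting the optimal dual price u_i of its resource constraint:
\[
  \max_{x_i} \; f_i(x_i) \quad \text{s.t.} \quad g_i(x_i) \le b_i,
  \qquad \sum_{i=1}^{n} b_i = b .
\]
% The center has no optimization problem of its own: it averages the
% reported prices and shifts resources toward divisions whose price
% exceeds the average (the total allocation is preserved):
\[
  \bar{u} = \frac{1}{n} \sum_{i=1}^{n} u_i, \qquad
  b_i^{\text{new}} = b_i + \alpha \,(u_i - \bar{u}), \quad \alpha > 0 .
\]
```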
Abstract:
Managing the great complexity of an enterprise system, given the number of entities and the variety of decisions and processes to be controlled, is a very hard task, because it involves integrating the enterprise's operations and its information systems. Moreover, enterprises find themselves in a constant process of change, reacting to a dynamic and competitive environment in which their business processes are constantly altered. Transforming business processes into models allows them to be analyzed and redefined, and the use of computing tools makes it possible to minimize the cost and risks of an enterprise integration design. This article argues for the necessity of modeling processes in order to define enterprise business requirements more precisely, and for the adequate use of modeling methodologies. Following these guidelines, the paper addresses process modeling in the domain of demand forecasting as a practical example. The demand-forecasting domain model was built from a theoretical review. The resulting models, taken as a reference model, are transformed into information systems; they aim to provide a generic solution and to serve as a starting point for better forecasting practice. The proposal is to fit the information system to the real needs of an enterprise, enabling it to obtain and track better results while minimizing design errors, time, money, and effort. The enterprise process models are expressed in the CIMOSA language, and the supporting information system is modeled in UML.
Abstract:
This paper presents work in progress on an on-demand software deployment system based on application virtualization concepts, which eliminates the need for software installation and configuration on each computer. Several mechanisms were created: a mapping of the application's resource utilization to improve software distribution and startup; a virtualization middleware that provides all the resources needed for software execution; an asynchronous P2P transport used to optimize distribution on the network; and off-line support, whereby the user can execute the application even when the server is unavailable or the machine is off the network. © Springer-Verlag Berlin Heidelberg 2010.
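The off-line support can be illustrated with a minimal Python sketch of a cache-first package fetch; the server URL, path layout, and `fetch_package` helper are assumptions for illustration, not the system's actual interface.

```python
# Minimal sketch of the off-line support: try the deployment server first,
# fall back to a locally cached copy of the virtualized application package.
# URL layout and file names are assumptions for illustration.
import pathlib
import urllib.error
import urllib.request

CACHE_DIR = pathlib.Path.home() / ".appcache"

def fetch_package(name: str, server: str = "http://deploy.example.org") -> bytes:
    cached = CACHE_DIR / name
    try:
        with urllib.request.urlopen(f"{server}/packages/{name}", timeout=5) as resp:
            data = resp.read()
        CACHE_DIR.mkdir(exist_ok=True)
        cached.write_bytes(data)          # refresh the local cache
        return data
    except (urllib.error.URLError, OSError):
        if cached.exists():               # server unreachable: run off-line
            return cached.read_bytes()
        raise RuntimeError(f"{name}: not cached and server unavailable")
```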
Abstract:
In non-extensive statistical mechanics [14], it is meaningless to say that the entropy of a system is extensive (or not) without mentioning a law of composition of its elements. In this theory, quantum correlations may be perceived through quantum information processes. This article, which extends recent work [4], is a comparative study of the entropies of Von Neumann and Tsallis, with some implementations of the effect of entropy on quantum entanglement, an important process for the transmission of quantum information. We consider two factorized (Fock number) states that interact through a bilinear beam-splitter Hamiltonian with two input ports. This comparison showed us that the Tsallis and Von Neumann entropies behave differently depending on the reflectance of the beam splitter. © 2011 Academic Publications.
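For reference, the two entropies being compared are the standard ones; the Tsallis entropy recovers the Von Neumann entropy in the limit q → 1:

```latex
% Von Neumann entropy of a density operator rho:
\[
  S_{\mathrm{vN}}(\rho) = -\operatorname{Tr}(\rho \ln \rho)
\]
% Tsallis entropy with entropic index q (q -> 1 recovers S_vN):
\[
  S_q(\rho) = \frac{1 - \operatorname{Tr}(\rho^{q})}{q - 1},
  \qquad \lim_{q \to 1} S_q(\rho) = S_{\mathrm{vN}}(\rho).
\]
```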
Abstract:
The term human factor is used by professionals in various fields to understand the behavior of human beings at work. A human being, while carrying out a cooperative activity with a computer system, may cause an undesirable situation in his/her task. This paper starts from the principle that human errors may be considered a cause of, or a contributing factor to, a series of accidents and incidents in the many diverse fields in which human beings interact with automated systems. We propose a human-error performance simulator with the potential to assist the Human-Computer Interaction (HCI) project manager in the construction of critical systems. © 2011 Springer-Verlag.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Technologies are developing rapidly, but some aspects of computers, such as their processing capacity, are reaching their physical limits. It falls to quantum computation to offer solutions to these limitations and to the issues that may arise. In the field of information security, encryption is of paramount importance, hence the development of quantum methods in place of classical ones, given the computational power offered by quantum computing. In the quantum world, physical states can be interrelated, a phenomenon called entanglement. This study presents both a theoretical essay on quantum mechanics, computing, information, cryptography, and quantum entropy, and some simulations, implemented in the C language, of the effects of the entropy of entanglement of photons in a data transmission, using the Von Neumann and Tsallis entropies.
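The paper's simulations were written in C; as a rough Python sketch of the same quantities, consider the simplest case of a single photon entering one port of a beam splitter with reflectance R, where the reduced state of one output mode has eigenvalues {R, 1 − R}. This single-photon case is our illustrative assumption, not the paper's exact two-state setup.

```python
# Entanglement entropy of one output mode of a beam splitter with
# reflectance R, for a single-photon input |1,0>.  The reduced state has
# eigenvalues {R, 1 - R}; illustrative assumption, not the paper's setup.
import math

def von_neumann(R: float) -> float:
    return -sum(p * math.log(p) for p in (R, 1.0 - R) if p > 0.0)

def tsallis(R: float, q: float) -> float:
    return (1.0 - R**q - (1.0 - R)**q) / (q - 1.0)

for R in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(f"R={R:.2f}  S_vN={von_neumann(R):.4f}  S_q(q=2)={tsallis(R, 2.0):.4f}")
```

In this simple case both entropies vanish at R = 0 and R = 1 (product states) and peak at the balanced beam splitter, R = 0.5.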
Abstract:
Recognizing the importance of developing information and media literacy in contemporary society, this article discusses the scenario of hybrid languages born in cyberspace, hypertext, and the new types of readers who interact with information now disseminated by digital media. Taking authors such as Lucia Santaella as theoretical references, we develop arguments that present the matrices of language and thought within a semiotic framework. These matrices help in understanding the phenomenon of hybrid language in hypertext. We therefore emphasize that developing the skills associated with the new reading environments afforded by the virtual environment requires an understanding of language, one of the concepts used within UNESCO's media and information literacy (MIL) proposal, which combines the concepts and skills of information literacy and media literacy. It is hoped that this article will motivate scholars and practitioners of education and information to take responsibility for educating for the information environments that arise with new media and technologies, emphasizing the issues of reading and language.
Abstract:
We consider the branch management model in which the random resources of the subsystems follow exponential distributions. The deterministic equivalent is a block-structured quadratic programming problem. It is solved effectively by means of a decomposition method based on iterative aggregation. The upper-level aggregation problem is solved analytically. This overcomes the difficulties arising from the large dimension of the main problem.
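Under assumed notation (illustrating the structure only, not the paper's exact model), such a block-structured quadratic program has independent blocks coupled by a single shared resource constraint:

```latex
% Generic block-structured quadratic program (assumed notation):
\[
  \min_{x_1,\dots,x_n} \; \sum_{i=1}^{n}
     \left( \tfrac{1}{2}\, x_i^{\top} Q_i x_i + c_i^{\top} x_i \right)
  \quad \text{s.t.} \quad
  A_i x_i \le b_i \;\; (i = 1,\dots,n), \qquad
  \sum_{i=1}^{n} B_i x_i \le d .
\]
```

Iterative aggregation exploits exactly this structure: the coupling constraint is handled through a small aggregate problem, while each block is solved separately at its own level.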