981 results for "Visualização da informação" (information visualization)


Abstract:

Information is one of an organization's most valuable assets, especially in a global and highly competitive world. In this scenario, two antagonistic forces are at play: on one side, organizations struggle to keep their information protected, especially information considered strategic; on the other, invaders, driven by countless motives (hobby, challenge, or simple protest), try to capture and corrupt other organizations' information. This thesis presents the descriptive results of a study whose main objective was to identify which variables influence the perceptions of executives and CIOs toward information security. The research also identified the profile of organizations in Rio Grande do Norte and of their executives/CIOs concerning information security, and measured the respondents' level of agreement with NBR ISO/IEC 17799 (Information technology: Code of practice for information security management) in its Access Control dimension. The research was based on a model that took into account the following variables: origin of the organization's capital, production sector, number of networked PCs, number of employees with network access rights, number of attacks suffered by the organization, respondent's position, education level, literacy in information technology, and specific training in networks. From the standpoint of its goals, the research was classified as exploratory and descriptive and, in its approach, quantitative. A questionnaire was administered to 33 executives and CIOs of the 50 organizations in Rio Grande do Norte that collected the highest amounts of ICMS (Imposto sobre Circulação de Mercadorias) in 2000. After data collection, cluster analysis and the chi-square test were used for data analysis. The research made clear that the executives and CIOs of Rio Grande do Norte organizations show a low level of agreement with the rules of NBR ISO/IEC 17799. It also made evident that their perception of information security is influenced by the number of networked PCs and by the number of attacks suffered by the organization.
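A minimal sketch of the kind of chi-square independence test the study applies, assuming a hypothetical 2x2 contingency table; the counts, Python, and scipy are illustrative choices, not material from the thesis:

```python
# Hypothetical contingency table: rows = attacks suffered,
# columns = agreement with the NBR ISO/IEC 17799 access-control rules.
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([
    [12, 5],   # few attacks:  low agreement, high agreement
    [4, 12],   # many attacks: low agreement, high agreement
])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, p = {p_value:.3f}, dof = {dof}")
# A small p-value suggests the perception of security depends on the
# number of attacks suffered, in line with what the study reports.
```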

Abstract:

Electronic mail is one of the fastest-growing Internet services in the corporate environment. This growth brings several problems for organizations, especially regarding the information that circulates inside the corporate network. The lack of proper guidance for people about the use of these resources and the importance of securing them leaves breaches and causes, for example, misuse and overuse of the service. Recent literature offers several ideas to help organizations plan and implement an information security system for electronic mail in computing environments. However, these ideas have still not been put into practice in many companies, public or private. This dissertation presents the results of a study whose goal was to identify the importance of user training for an information security policy, through a case study in a private higher education institution in the state. The work was guided by the ISO/IEC 17799 standard, in particular its section on personnel security. The study was developed from a model proposed for this research, designed to offer the institution under study guidance on how to better plan an information security policy for electronic mail. The research is exploratory and descriptive in nature and qualitative in type. First, a questionnaire was administered to the information technology manager as the best way to obtain general data and deepen the contact that until then had been maintained by e-mail. After this first contact, eleven interviews were conducted with this manager, in addition to interviews with twenty-four users, among employees and students. After the interviews were collected and transcribed, all the information given was reviewed with the manager to correct any mistakes and bring it up to date before the data analysis began. The research suggests that the institution has a proactive attitude toward the information security policy and the use of electronic mail. However, it became clear that the respondents' perception of information security is still very inexperienced, a result of the lack of planning of a training program capable of addressing the problem.

Abstract:

This study aims to identify, through the application of webometric indicators, which graduate programs in engineering recommended by the Coordination for the Improvement of Higher Education Personnel (CAPES) in Brazil stand out on the web with respect to the communication and dissemination of scientific information in the academic environment. To this end, we analyzed the content structure of the sites, their use (through the conduct of queries and searches), the quality of the information available, and the structure of the hypertexts present in the sites of this study universe. The tools and methodologies adopted are: search engines (Google, Yahoo), a link-mapping tool (Xenu Link Sleuth), and network analysis and visualization software (Ucinet6 and NetDraw). Webometric indicators are also used, such as website size, visibility, web impact factor, luminosity, and network density. These instruments provide the analysis and evaluation framework for this webometric study. From the literature reviewed, it appears that this type of metric study offers many advantages in the so-called Information Society. The results obtained made it possible to identify which graduate programs in engineering make their information most available on the Web, which of these programs stand out in the use of their information, which stand out with respect to impact factor, and which offer the greatest number of links serving as information sources for their users, contributing in turn to the navigability of the network. In summary, the webometric study presents promising results that achieve the proposed objectives and identify the factors that contribute significantly to the good visibility of these sites on the network, thus helping the dissemination of information and scientific communication through the use of the Web.
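As an illustration of one of the indicators cited, the sketch below computes the web impact factor, commonly defined as inlinks divided by number of pages, for hypothetical sites; the URLs and counts are invented for the example and do not come from the study:

```python
# Illustrative computation of two webometric indicators: site size
# (page count) and the Web Impact Factor (WIF = inlinks / pages).
# In practice the counts would come from search-engine queries
# (Google, Yahoo) and a crawler such as Xenu Link Sleuth.

def web_impact_factor(inlinks: int, pages: int) -> float:
    """Links received by a site divided by its number of pages."""
    return inlinks / pages if pages else 0.0

sites = {
    "course-a.example.edu": {"pages": 320, "inlinks": 1440},
    "course-b.example.edu": {"pages": 85, "inlinks": 612},
}

for url, counts in sites.items():
    wif = web_impact_factor(counts["inlinks"], counts["pages"])
    print(f"{url}: size={counts['pages']} pages, WIF={wif:.2f}")
```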

Abstract:

Since the advent of information technology and its adoption by companies, the cost-benefit relationship has not always been well understood, either by those responsible for the technology area or by top management. Despite this, organizations increasingly invest heavily in technology, expecting it to solve a variety of problems. This question has therefore become crucial to the decision-making process, since investments in this area tend to be expensive and, in the current context, these analyses must be extremely rigorous to minimize the chances of project failure, especially in a stabilized economy with fierce competition. One of the alternatives companies have pursued to achieve success and reduce risk is outsourcing the IT area. From this perspective, this dissertation investigates the outsourcing of IT services in all its aspects: motivation, services effectively outsourced, advantages, disadvantages and possible obstacles, the view of IT strategic alignment, contract management processes and forms of control, and, finally, future trends. It is a multiple-case study involving franchises of the Coca-Cola System in Brazil. The study presents a literature review on business decision making, investment analysis, IT management, and IT outsourcing, which defines the dimensions of analysis of the research. In the field research, IT managers were interviewed in the cities of Brasília-DF, Goiânia-GO, and Ribeirão Preto-SP. The field research made it possible to identify how these organizations evaluate their IT investments, how the IT area is managed, what led them to choose outsourcing, and how the outsourced processes affect the organization. As this is qualitative research, the three organizations were analyzed comparatively. The main findings are that the organizations use IT outsourcing to focus on their core business and, even facing several disadvantages, including cost-related ones, believe the benefits justify it. Some internal obstacles to outsourcing were also identified, mainly the fear of losing business intelligence. The outsourced activities are monitored by the internal team using structured criteria through which service levels are verified.

Abstract:

This research aims to help demonstrate the consolidation of Information Systems (IS) as an area of knowledge within Production Engineering. To this end, it presents an overview of IS publications in the field of Production Engineering in Brazil with respect to the number of articles, authorship profile, methodologies, citations, research themes, and the continuity of those themes. The study was based on the papers published in the Information Systems track of the National Meeting of Production Engineering (ENEGEP) in the years 2000 through 2004. The research is classified as bibliographical, of an applied nature, and quantitative in approach; from the standpoint of its objectives it is descriptive-exploratory, and data collection was systematic, based on a bibliographical survey. For the field research, data collection consisted of the elaboration of an analysis protocol and, to reach the final diagnosis, the data were processed with statistical methods through descriptive analyses. The study reviews IS concepts and research areas, and examines related research in Production Engineering, Information Systems, Information Science, and other fields of knowledge. Regarding the results, it was concluded that national and international contents are compatible and that the IS area is in constant evolution. Concerning the continuity of research lines, it was observed that the majority of authors remained faithful to the Information Systems area. Among the other findings, some institutions should try to increase their volume of publications and research, while others should seek to maintain the level already reached in recent years.

Abstract:

Given industry's current need to integrate production data from several sources and transform it into information useful for decision making, there is ever-growing demand for information visualization systems that support this functionality. At the same time, because of today's highly competitive market, it is common practice to develop industrial systems with characteristics of modularity, distribution, flexibility, scalability, adaptability, interoperability, reusability, and web access. These characteristics provide extra agility and make it easier to adapt to frequent changes in market demand. Based on these considerations, this work specifies a component-based architecture, with the corresponding development of a system based on that architecture, for the visualization of industrial data. The system was conceived to supply on-line information and, optionally, historical information on variables originating from the production process. This work shows that the component-based architecture developed has the requirements needed to obtain a robust, reliable, and easily maintained system, thus meeting industrial needs. The architecture also allows components to be added, removed, or updated at run time, through a web-based component manager, further streamlining the process of adapting and updating the system.
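A minimal sketch, not the author's implementation, of the run-time component management the architecture calls for: components can be registered, hot-swapped, and removed while data keeps flowing. All names here are hypothetical:

```python
from typing import Callable, Dict

class ComponentManager:
    """Toy registry of visualization components, mutable at run time."""

    def __init__(self) -> None:
        self._components: Dict[str, Callable[[dict], None]] = {}

    def register(self, name: str, component: Callable[[dict], None]) -> None:
        # Adding an existing name hot-swaps the component in place.
        self._components[name] = component

    def unregister(self, name: str) -> None:
        self._components.pop(name, None)

    def dispatch(self, sample: dict) -> None:
        # Push an on-line production sample to every active component.
        for component in self._components.values():
            component(sample)

manager = ComponentManager()
manager.register("trend-chart", lambda s: print("plot", s))
manager.dispatch({"sensor": "temp-01", "value": 78.4})
manager.unregister("trend-chart")  # removed without stopping the system
```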

Abstract:

This work presents a cooperative navigation system for a humanoid robot and a wheeled robot using visual information, aiming to navigate the non-instrumented humanoid robot using information obtained from the instrumented wheeled robot. Although the humanoid has no sensors for its own navigation, it can be remotely controlled by infrared signals. Thus, the wheeled robot can control the humanoid by positioning itself behind the humanoid and, through visual information, locating and steering it. The location of the wheeled robot is obtained by merging information from odometry and landmark detection using the Extended Kalman Filter. The landmarks are detected visually, and their features are extracted by image processing; the parameters obtained are used directly in the Extended Kalman Filter. Thus, while the wheeled robot locates and navigates the humanoid, it simultaneously computes its own location and maps the environment (SLAM). Navigation is performed by heuristic algorithms based on the errors between the actual and desired pose of each robot. The main contribution of this work is the implementation of a cooperative navigation system for two robots based on visual information, which can be extended to other robotic applications, such as controlling robots without interfering with their hardware or attaching communication devices.
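A highly simplified sketch of the localization scheme described: odometry drives the EKF prediction and a landmark-derived position fix drives the correction. The motion model, matrices, and noise values are illustrative, not those of the dissertation:

```python
import numpy as np

x = np.array([0.0, 0.0, 0.0])        # robot pose: x, y, heading
P = np.eye(3) * 0.1                  # pose covariance
Q = np.diag([0.02, 0.02, 0.01])      # odometry (process) noise
R = np.diag([0.05, 0.05])            # landmark (measurement) noise

def predict(x, P, v, w, dt):
    """Propagate the pose with a unicycle odometry model."""
    theta = x[2]
    x = x + np.array([v * np.cos(theta) * dt, v * np.sin(theta) * dt, w * dt])
    F = np.array([[1, 0, -v * np.sin(theta) * dt],
                  [0, 1,  v * np.cos(theta) * dt],
                  [0, 0, 1]])
    return x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the pose with a position fix derived from a landmark."""
    H = np.array([[1.0, 0, 0], [0, 1.0, 0]])  # observe (x, y) directly
    y = z - H @ x                             # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P

x, P = predict(x, P, v=0.5, w=0.1, dt=0.1)
x, P = update(x, P, z=np.array([0.06, 0.01]))
```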

Abstract:

Currently, one of the biggest challenges in data mining is performing cluster analysis on complex data. Several techniques have been proposed, but in general they achieve good results only in specific domains, with no consensus on the best way to group this kind of data. These techniques typically fail because of unrealistic assumptions about the true probability distribution of the data. Based on this, this thesis proposes a new measure based on the Cross Information Potential that uses representative points of the dataset and statistics extracted directly from the data to measure the interaction between groups. The proposed approach retains all the advantages of this information-theoretic descriptor while overcoming the limitations imposed by its own nature. From this measure, two cost functions and three algorithms were proposed to perform cluster analysis. Because the use of Information Theory captures the relationship between different patterns regardless of assumptions about the nature of that relationship, the proposed approach achieved better performance than the main algorithms in the literature. These results hold both for synthetic data designed to test the algorithms in specific situations and for real data drawn from problems in different fields.
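For reference, the cross information potential between two sets X and Y is commonly estimated with a Gaussian kernel as CIP(X, Y) = (1 / (Nx * Ny)) * sum_i sum_j G_sigma(x_i - y_j). The sketch below, with invented data and kernel width, shows the quantity falling for well-separated clusters; it illustrates the descriptor itself, not the cost functions or algorithms developed in the thesis:

```python
import numpy as np

def cross_information_potential(X, Y, sigma=1.0):
    """Average Gaussian-kernel interaction between samples of X and Y
    (the kernel's normalization constant is omitted for simplicity)."""
    diff = X[:, None, :] - Y[None, :, :]          # pairwise differences
    sq_dist = np.sum(diff ** 2, axis=-1)
    gauss = np.exp(-sq_dist / (2 * sigma ** 2))
    return gauss.mean()                           # (1/(Nx*Ny)) * sum

rng = np.random.default_rng(0)
A = rng.normal(0.0, 0.5, size=(50, 2))   # one synthetic cluster
B = rng.normal(3.0, 0.5, size=(50, 2))   # a well-separated cluster
print(cross_information_potential(A, B))  # low: clusters barely interact
print(cross_information_potential(A, A))  # high: strong self-interaction
```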

Abstract:

This work surveys the problems associated with the influence of observability and radial visualization on the design of monitoring systems for networks of great magnitude and complexity, and proposes solutions to some of these problems. Using Complex Network Theory, two questions are addressed: (i) the location and number of nodes required to guarantee data acquisition capable of effectively representing the state of the network, and (ii) the elaboration of a model for visualizing network information capable of increasing the ability to infer and understand its properties. The thesis establishes theoretical limits on these questions and presents a study on the complexity of effective, efficient, and scalable network monitoring.
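As a loosely related illustration of the node-placement question (i), the sketch below ranks nodes of a synthetic scale-free topology by betweenness centrality and measures how many nodes the top-k monitors reach. This is a common heuristic, not the placement method or theoretical bounds developed in the thesis:

```python
import networkx as nx

G = nx.barabasi_albert_graph(n=200, m=2, seed=42)  # synthetic topology
k = 10
centrality = nx.betweenness_centrality(G)
monitors = sorted(centrality, key=centrality.get, reverse=True)[:k]

# One-hop coverage: nodes directly observable from the chosen monitors.
covered = set(monitors)
for m in monitors:
    covered.update(G.neighbors(m))
print(f"{k} monitors reach {len(covered)}/{G.number_of_nodes()} nodes")
```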

Abstract:

We revisit the visibility problem, which is to determine the set of primitives potentially visible in geometry data represented by a data structure such as a mesh of polygons or triangles, and we propose a solution for speeding up three-dimensional visualization in applications. We introduce a lean structure, in the sense of data abstraction and reduction, that can be used in online and interactive applications. The visibility problem is especially important in the 3D visualization of scenes represented by large volumes of data, when keeping all the polygons of the scene in memory is not worthwhile, as it increases rendering time, or is simply impossible for huge volumes of data. In these cases, given a viewing position and direction, the main objective is to determine and load the minimum number of primitives (polygons) in the scene, so as to accelerate the rendering step. To this end, our algorithm culls primitives using a hybrid paradigm based on three known techniques: the scene is divided into a grid of cells, the primitives belonging to each cell are associated with it, and finally the set of potentially visible primitives is determined. The novelty is the use of the Ja1 triangulation to create the subdivision grid. We chose this structure for its adaptivity and its algebraic nature (ease of calculation). The results show a substantial improvement over the traditional methods applied separately. The method introduced in this work can be used on devices with little or no dedicated CPU processing power, and can also be used to view data over the Internet, as in virtual museum applications.
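A simplified sketch of the grid-based culling step described, using a uniform cell grid in place of the Ja1 triangulation and an invented distance and view-direction test rather than the full hybrid paradigm:

```python
import numpy as np

CELL = 10.0                                   # cell edge length

def cell_of(point):
    """Map a 3D point to its integer grid-cell coordinates."""
    return tuple((np.asarray(point) // CELL).astype(int))

def build_grid(triangles):
    """Bin each triangle (3x3 array of vertices) by centroid cell."""
    grid = {}
    for tri in triangles:
        grid.setdefault(cell_of(tri.mean(axis=0)), []).append(tri)
    return grid

def potentially_visible(grid, eye, view_dir, max_dist=50.0):
    """Keep cells within range and roughly in front of the viewer."""
    visible = []
    for cell, tris in grid.items():
        center = (np.array(cell) + 0.5) * CELL
        to_cell = center - eye
        if (np.linalg.norm(to_cell) < max_dist
                and np.dot(to_cell, view_dir) > 0):
            visible.extend(tris)
    return visible
```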

Abstract:

Since equipment maintenance is the major cost factor in industrial plants, the development of fault prediction techniques is very important. Three-phase induction motors are key electrical equipment in industrial applications, mainly because of their low cost and great robustness; nevertheless, they are not immune to faults such as shorted windings and broken rotor bars. Several acquisition, processing, and signal analysis approaches are applied to improve their diagnosis, and the most efficient techniques use current sensors and current signature analysis. In this dissertation, starting from these sensors, the signals are analyzed through Park's vector, which provides good visualization capability. Because acquiring fault data is an arduous task, a methodology for building a database is developed: the Park transform in the stationary reference frame is applied to model the machine by solving its differential equations. Fault detection requires a detailed analysis of the variables and their influences, which makes diagnosis more complex. Pattern recognition allows systems to be generated automatically, based on patterns and concepts in the data that in most cases are undetectable by specialists, supporting decision tasks. Classification algorithms with diverse learning paradigms (k-Nearest Neighbors, Neural Networks, Decision Trees, and Naïve Bayes) are used for the recognition of machine fault patterns, and multi-classifier systems are used to reduce classification errors: the homogeneous algorithms Bagging and Boosting, and the heterogeneous algorithms Vote, Stacking, and StackingC. The results show the effectiveness of the constructed model for fault modeling, as well as the feasibility of using multi-classifier algorithms for fault classification.
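For illustration, the usual power-invariant Park's vector components are id = sqrt(2/3)*ia - ib/sqrt(6) - ic/sqrt(6) and iq = (ib - ic)/sqrt(2); a healthy, balanced machine traces a circle in the id x iq plane, and faults distort it. The simulated currents below are illustrative, not data from the dissertation:

```python
import numpy as np

t = np.linspace(0, 0.1, 2000)
w = 2 * np.pi * 60                     # 60 Hz supply
ia = np.cos(w * t)                     # balanced three-phase currents
ib = np.cos(w * t - 2 * np.pi / 3)
ic = np.cos(w * t + 2 * np.pi / 3)

# Power-invariant Park transform (stationary reference frame).
i_d = np.sqrt(2 / 3) * ia - ib / np.sqrt(6) - ic / np.sqrt(6)
i_q = (ib - ic) / np.sqrt(2)

radius = np.hypot(i_d, i_q)
print(f"pattern radius: mean={radius.mean():.3f}, std={radius.std():.3e}")
# Near-zero std means the circular (healthy) pattern; asymmetric faults
# such as shorted turns make the radius oscillate.
```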

Abstract:

The sharing of knowledge and the integration of data are among the biggest challenges in health care and an essential contribution to improving its quality. Since the same person receives care in various health facilities throughout his or her life, this information is distributed across different information systems running on heterogeneous hardware and software platforms. This work proposes an Ontology-Based Health Information System (SISOnt) for knowledge sharing and data integration in health, which makes it possible to infer new information from the heterogeneous databases and the knowledge base. For this purpose, three ontologies were created, represented with the patterns and concepts proposed by the Semantic Web. The first ontology provides a representation of the disease concepts of the Secretariat of Health Surveillance (SVS), and the others represent the concepts of the databases of the Health Information Systems (SIS), specifically the Information System for Notifiable Diseases (SINAN) and the Mortality Information System (SIM).
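A toy sketch of the integration idea, using a hypothetical vocabulary rather than the actual SISOnt schema: a disease concept shared by both source systems lets one query span SINAN and SIM records.

```python
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/sisont#")   # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# The same disease concept is referenced by records from both systems.
g.add((EX.Dengue, RDF.type, EX.Disease))
g.add((EX.sinanRecord42, RDF.type, EX.NotificationRecord))
g.add((EX.sinanRecord42, EX.reportsDisease, EX.Dengue))
g.add((EX.simRecord7, RDF.type, EX.MortalityRecord))
g.add((EX.simRecord7, EX.reportsDisease, EX.Dengue))

# One SPARQL query now reaches both sources through the shared concept.
q = "SELECT ?rec WHERE { ?rec ex:reportsDisease ex:Dengue . }"
for row in g.query(q, initNs={"ex": EX}):
    print(row.rec)
```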

Abstract:

Recent years have seen increasing acceptance and adoption of parallel processing, both for high-performance scientific computing and for general-purpose applications. This acceptance has been favored mainly by the development of massively parallel processing (MPP) environments and of distributed computing. A point in common between distributed systems and MPP architectures is the notion of message exchange, which allows communication between processes. A message-passing environment consists basically of a communication library that, acting as an extension of programming languages such as C, C++, and Fortran, allows the development of parallel applications. A basic aspect of developing parallel applications is performance analysis, and several metrics can be used: execution time, efficiency in the use of processing elements, and scalability of the application with respect to an increase in the number of processors or in the size of the problem instance. Establishing models or mechanisms for this analysis can be quite complicated, considering the parameters and degrees of freedom involved in implementing a parallel application. One alternative has been the use of tools for collecting and visualizing performance data, which allow the user to identify bottlenecks and sources of inefficiency in an application. Efficient visualization requires identifying and collecting data about the execution of the application, a stage called instrumentation. This work first presents a study of the main techniques used to collect performance data, followed by a detailed analysis of the main tools available for parallel architectures of the Beowulf cluster type, running Linux on the x86 platform and using communication libraries based on MPI (Message Passing Interface), such as LAM and MPICH. This analysis is validated on parallel applications that train perceptron-type neural networks using back-propagation. The conclusions show the potential and ease of use of the analyzed tools.
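A minimal instrumentation sketch in the spirit of the surveyed tools: timing computation versus communication around a message exchange. It uses mpi4py for brevity, whereas the original work targets C/Fortran codes with LAM and MPICH:

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

t0 = MPI.Wtime()
local = np.random.rand(1_000_000).sum()    # stand-in for the workload
t_compute = MPI.Wtime() - t0

t0 = MPI.Wtime()
total = comm.allreduce(local, op=MPI.SUM)  # message-exchange step
t_comm = MPI.Wtime() - t0

# Per-process timings reveal load imbalance and communication overhead.
print(f"rank {rank}: compute={t_compute:.4f}s, comm={t_comm:.4f}s")
# Run with, e.g.: mpirun -np 4 python instrument.py
```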

Abstract:

Self-organizing maps (SOM) are artificial neural networks widely used in data mining, mainly because they constitute a dimensionality reduction technique, given the fixed grid of neurons associated with the network. In order to properly partition and visualize the SOM network, the various methods available in the literature must be applied in a post-processing stage, which consists of inferring, through the neurons, relevant characteristics of the data set. In general, applying such processing to the network neurons, instead of to the entire database, reduces the computational cost thanks to vector quantization. This work proposes a post-processing of the SOM neurons in the input and output spaces, combining visualization techniques with algorithms based on gravitational forces and on the search for the shortest path with the greatest reward. These methods take into account the connection strength between neighbouring neurons and characteristics of pattern density and distance among neurons, both associated with the position the neurons occupy in the data space after training the network. The goal is thus to define more clearly the arrangement of the clusters present in the data. Experiments were carried out to evaluate the proposed methods using various artificially generated data sets, as well as real-world data sets. The results obtained were compared with those of a number of well-known methods from the literature.
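As one example of post-processing over the trained neurons (a classical U-matrix rather than the gravitational method proposed here), the sketch below measures the distance between each neuron and its grid neighbours, so that ridges of large distances reveal cluster borders. MiniSom, the grid size, and the random data are illustrative choices:

```python
import numpy as np
from minisom import MiniSom  # pip install minisom

data = np.random.rand(500, 4)
som = MiniSom(10, 10, 4, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(data, 1000)

w = som.get_weights()                     # shape: (10, 10, 4)
umatrix = np.zeros((10, 10))
for i in range(10):
    for j in range(10):
        dists = []
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < 10 and 0 <= nj < 10:
                dists.append(np.linalg.norm(w[i, j] - w[ni, nj]))
        umatrix[i, j] = np.mean(dists)    # high values = cluster border
print(umatrix.round(2))
```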

Abstract:

Despite the recognized importance of geographic knowledge and of spatial analysis tools in collective health studies, this field is still little explored by Brazilian researchers. A survey of the main scientific journals that publish occupational health research found a strong predominance of tables and graphs as the means of organizing and presenting results, and a small number of maps. All articles published in four journals (Revista de Saúde Pública, Cadernos de Saúde Pública, Revista Saúde e Sociedade, and Revista Brasileira de Epidemiologia) from 1967 to 2009 were examined. Once the set of selected articles had been analyzed, those using cartographic representations received special attention. Although still little used, geoprocessing and geostatistics tools supported by GIS open a field of new possibilities for the use of thematic cartography in occupational health in Brazil. It is recommended, however, that the editors of scientific journals provide detailed technical norms for the publication of cartographic figures, as well as specific review guidelines that can help authors make the changes needed to improve the quality of the visual communication of maps and of spatial correlation through cartographic treatment.