17 results for 350202 Business Information Systems (incl. Data Processing)
in Repositório Institucional UNESP - Universidade Estadual Paulista "Julio de Mesquita Filho"
Abstract:
Managing the great complexity of an enterprise system, given the number of entities and the variety of decisions and processes to be controlled, is a very hard task, because it involves integrating the enterprise's operations with its information systems. Moreover, enterprises are in a constant process of change, reacting to a dynamic and competitive environment in which their business processes are continually altered. Transforming business processes into models makes it possible to analyze and redefine them, and the use of computing tools can minimize the cost and risks of an enterprise integration design. This article argues for the need to model processes in order to define enterprise business requirements more precisely and to use modeling methodologies adequately. Following these guidelines, the paper addresses process modeling in the domain of demand forecasting as a practical example. The demand forecasting domain was built from a theoretical review. The resulting models, regarded as reference models, are transformed into information systems and aim to provide a generic solution and a starting point for better forecasting practice. The proposal is to promote the adequacy of the information system to the real needs of an enterprise, enabling it to obtain and track better results while minimizing design errors, time, money and effort. The enterprise processes were modeled with the CIMOSA language, and the supporting information system was modeled with UML.
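The abstract does not reproduce the reference model itself. As a rough illustration of how a forecasting reference model might be turned into an information-system component, the sketch below implements simple exponential smoothing, a common baseline in demand forecasting; the class and parameter names are hypothetical and are not taken from the paper.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DemandForecaster:
    """Hypothetical forecasting component; simple exponential smoothing."""
    alpha: float = 0.3                      # smoothing factor, 0 < alpha <= 1
    history: List[float] = field(default_factory=list)
    level: Optional[float] = None           # current smoothed demand level

    def observe(self, demand: float) -> None:
        """Record a new demand observation and update the smoothed level."""
        self.history.append(demand)
        self.level = demand if self.level is None else (
            self.alpha * demand + (1 - self.alpha) * self.level)

    def forecast(self, horizon: int = 1) -> List[float]:
        """Exponential smoothing projects a flat level over the horizon."""
        return [self.level] * horizon

f = DemandForecaster(alpha=0.4)
for d in [120, 135, 128, 150, 142]:         # made-up monthly demand figures
    f.observe(d)
print(f.forecast(horizon=3))
```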
Abstract:
This paper presents an Advanced Traveler Information System (ATIS) developed on the Android platform, which is open source and free. The main objective of the developed application is the free use of Vehicle-to-Infrastructure (V2I) communication through the wireless network access points available in urban centers. In addition to providing the information needed by an Intelligent Transportation System (ITS) to a central server, the application also receives traffic data for the area close to the vehicle. Once this traffic information is obtained, the application displays it to the driver in a clear and efficient way, allowing the user to make decisions about his route in real time. The application was tested in a real environment and the results are presented in the article. In conclusion, we present the benefits of this application. © 2012 IEEE.
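The paper does not describe the server interface. A minimal sketch of the V2I exchange it outlines (report the vehicle's state, fetch nearby traffic) could look like the following; the server URL, endpoints and field names are assumptions for illustration only, and the third-party requests library is used for the HTTP calls.

```python
import requests  # third-party HTTP client, assumed available

# Hypothetical V2I endpoints; the paper's actual server interface is not described.
SERVER = "http://example.org/atis"

def report_position(vehicle_id: str, lat: float, lon: float, speed_kmh: float) -> None:
    """Send the vehicle's current state to the central ITS server (V2I uplink)."""
    payload = {"vehicle": vehicle_id, "lat": lat, "lon": lon, "speed": speed_kmh}
    requests.post(f"{SERVER}/report", json=payload, timeout=5)

def nearby_traffic(lat: float, lon: float, radius_m: int = 500) -> list:
    """Fetch traffic events close to the vehicle (V2I downlink) for the driver display."""
    r = requests.get(f"{SERVER}/traffic",
                     params={"lat": lat, "lon": lon, "radius": radius_m},
                     timeout=5)
    return r.json()

if __name__ == "__main__":
    report_position("car-42", -22.885, -48.445, 35.0)
    for event in nearby_traffic(-22.885, -48.445):
        print(event)
```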
Abstract:
It can be said that technological evolution (the development of new measurement instruments such as software, satellites and computers, as well as the falling cost of storage media) allows organizations to produce and acquire large amounts of data in a short time. Because of this data volume, research organizations become potentially vulnerable to the impacts of the information explosion. One solution adopted by some organizations is the use of information system tools to help document, retrieve and analyze the data. In the scientific context, these tools are developed to store different metadata (data about data) standards. During the development of these tools, the adoption of standards such as the Unified Modeling Language (UML) stands out, whose diagrams support the modeling of different aspects of the software. The aim of this study is to present an information system tool to help document organizations' data through metadata and to highlight the software modeling process using UML. The Digital Geospatial Metadata Standard, widely used for data cataloging by scientific organizations around the world, is addressed, together with UML's dynamic and static diagrams, such as use case, sequence and class diagrams. The development of information system tools can be a way to promote the organization and dissemination of scientific data. However, the modeling process requires special attention to the development of interfaces that will encourage the use of the information system tools.
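As an illustration of the kind of metadata record such a tool manages, the sketch below models a small, hypothetical subset of fields commonly found in digital geospatial metadata standards (title, originator, bounding box); it is not the full standard nor the tool's actual data model.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class GeospatialMetadata:
    """Illustrative subset of geospatial metadata fields; not the full standard."""
    title: str
    abstract: str
    originator: str
    publication_date: str         # ISO 8601, e.g. "2012-05-01"
    west: float                   # bounding box, decimal degrees
    east: float
    north: float
    south: float
    keywords: tuple = ()

record = GeospatialMetadata(
    title="Stream gauge network - example dataset",
    abstract="Hypothetical dataset used only to illustrate metadata capture.",
    originator="Example Research Organization",
    publication_date="2012-05-01",
    west=-48.60, east=-48.30, north=-22.70, south=-23.00,
    keywords=("hydrology", "monitoring"),
)
print(json.dumps(asdict(record), indent=2))  # serialized record ready for cataloging
```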
Abstract:
The purpose of this work was to study the fragmentation of forest formations (mesophytic forest, riparian woodland and savannah vegetation (cerrado)) in a 15,774-ha study area located in the Municipal District of Botucatu in Southeastern Brazil (São Paulo State). A land use and land cover map was made from a color composition of a Landsat-5 thematic mapper (TM) image. The edge effect caused by habitat fragmentation was assessed by overlaying, on a geographic information system (GIS), the land use and land cover data with the spectral ratio. The degree of habitat fragmentation was analyzed by deriving: 1. mean patch area and perimeter; 2. patch number and density; 3. perimeter-area ratio, fractal dimension (D), and shape diversity index (SI); and 4. distance between patches and dispersion index (R). In addition, the following relationships were modeled: 1. distribution of natural vegetation patch sizes; 2. the perimeter-area relationship and the number and area of natural vegetation patches; and 3. the edge effect caused by habitat fragmentation. The values of R indicated that savannah patches (R = 0.86) were aggregated while patches of natural vegetation as a whole (R = 1.02) were randomly dispersed in the landscape. There was a high frequency of small patches in the landscape whereas large patches were rare. In the perimeter-area relationship, there was no sign of scale distinction in the patch shapes. In the patch number-landscape area relationship, D, though apparently scale-dependent, tends to be constant as area increases. This phenomenon was correlated with the tendency to reach a constant density as the working scale was increased. In the edge effect analysis, the edge-center distance was properly estimated by a model in which the edge-center distance was considered a function of the total patch area and the SI. (C) 1997 Elsevier B.V.
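The metrics mentioned above have standard formulations that help read the reported values (R = 0.86 for aggregated savannah patches, R = 1.02 for randomly dispersed natural vegetation). The sketch below shows one common shape index and the Clark-Evans-style dispersion index; the paper's exact formulations may differ, and the numbers in the example are invented.

```python
import math

def shape_index(perimeter, area):
    """One common shape index: patch perimeter relative to a circle of equal
    area (SI = 1 for a circle, larger for more convoluted shapes)."""
    return perimeter / (2.0 * math.sqrt(math.pi * area))

def dispersion_R(mean_nn_distance, n_patches, landscape_area):
    """Dispersion index R: observed mean nearest-neighbour distance divided by
    the value expected for a random (Poisson) pattern. R < 1 suggests
    aggregation, R near 1 randomness, R > 1 regular spacing."""
    r_exp = 1.0 / (2.0 * math.sqrt(n_patches / landscape_area))
    return mean_nn_distance / r_exp

# Toy example: 500 patches with a mean nearest-neighbour spacing of 281 m over
# a 15,774 ha (1.5774e8 m^2) landscape gives R close to 1, i.e. a random pattern.
print(round(dispersion_R(281.0, 500, 1.5774e8), 2))
```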
Abstract:
The CMS Collaboration conducted a month-long data taking exercise, the Cosmic Run At Four Tesla, during October-November 2008, with the goal of commissioning the experiment for extended operation. With all installed detector systems participating, CMS recorded 270 million cosmic ray events with the solenoid at a magnetic field strength of 3.8 T. This paper describes the data flow from the detector through the various online and offline computing systems, as well as the workflows used for recording the data, for aligning and calibrating the detector, and for analysis of the data. © 2010 IOP Publishing Ltd and SISSA.
Abstract:
Background: The genome-wide identification of both morbid genes, i.e., those genes whose mutations cause hereditary human diseases, and druggable genes, i.e., genes coding for proteins whose modulation by small molecules elicits phenotypic effects, requires experimental approaches that are time-consuming and laborious. Thus, a computational approach which could accurately predict such genes on a genome-wide scale would be invaluable for accelerating the pace of discovery of causal relationships between genes and diseases as well as the determination of druggability of gene products. Results: In this paper we propose a machine learning-based computational approach to predict morbid and druggable genes on a genome-wide scale. For this purpose, we constructed a decision tree-based meta-classifier and trained it on datasets containing, for each morbid and druggable gene, network topological features, tissue expression profile and subcellular localization data as learning attributes. This meta-classifier correctly recovered 65% of known morbid genes with a precision of 66% and correctly recovered 78% of known druggable genes with a precision of 75%. It was then used to assign morbidity and druggability scores to genes not known to be morbid and druggable, and we showed a good match between these scores and literature data. Finally, we generated decision trees by training the J48 algorithm on the morbidity and druggability datasets to discover cellular rules for morbidity and druggability; among the rules, we found that the number of regulating transcription factors and plasma membrane localization are the most important factors for morbidity and druggability, respectively. Conclusions: We were able to demonstrate that network topological features along with tissue expression profile and subcellular localization can reliably predict human morbid and druggable genes on a genome-wide scale. Moreover, by constructing decision trees based on these data, we could discover cellular rules governing morbidity and druggability.
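A minimal sketch of the train-and-evaluate loop described above is shown below, with scikit-learn's decision tree standing in for Weka's J48 and synthetic features standing in for the real network-topology, expression and localization attributes; labels, feature meanings and parameters are all assumptions for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)
# Hypothetical attributes: [n regulating TFs, n tissues expressed, plasma-membrane flag]
X = rng.random((500, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.2, 500) > 0.8).astype(int)  # toy labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("precision:", round(precision_score(y_te, pred), 2),
      "recall:", round(recall_score(y_te, pred), 2))
```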
Abstract:
MODSI is a multi-model tool for information systems modeling. A modeling process in MODSI can be driven according to three different approaches: informal, semi-formal and formal. The MODSI tool is therefore based on the linked usage of these three modeling approaches. It can be employed at two different levels: the meta-modeling of a method and the modeling of an information system. In this paper we begin by presenting different types of modeling and analyzing their particular features. Then we introduce the meta-model defined in our tool, as well as the tool's functional architecture. Finally, we describe and illustrate the various usage levels of this tool.
Abstract:
Nowadays, L1 SBAS signals can be used in combined GPS+SBAS data processing. However, such a situation restricts studies over short baselines. Besides increasing satellite availability, the SBAS satellite orbit configuration is different from that of GPS. In order to analyze how these characteristics can impact GPS positioning in the southeast of Brazil, experiments involving GPS-only and combined GPS+SBAS data were performed. Solutions using single point and relative positioning were computed to show the impact on satellite geometry, positioning accuracy and short-baseline ambiguity resolution. Results showed that the inclusion of SBAS satellites can improve positioning accuracy. Nevertheless, the poor quality of the data broadcast by these satellites limits their usage. © Springer-Verlag Berlin Heidelberg 2012.
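The effect of adding satellites on geometry is usually summarized by dilution-of-precision values. The sketch below computes GDOP from receiver-to-satellite line-of-sight unit vectors and compares a GPS-only constellation with one extra SBAS-like satellite; the azimuths and elevations are invented for illustration, not taken from the paper's experiments.

```python
import numpy as np

def gdop(unit_vectors: np.ndarray) -> float:
    """Geometric DOP from receiver-to-satellite unit line-of-sight vectors (n x 3)."""
    A = np.hstack([-unit_vectors, np.ones((unit_vectors.shape[0], 1))])  # clock column
    Q = np.linalg.inv(A.T @ A)
    return float(np.sqrt(np.trace(Q)))

def unit(az_deg: float, el_deg: float) -> list:
    """Unit vector (east, north, up) toward a satellite at the given azimuth/elevation."""
    az, el = np.radians(az_deg), np.radians(el_deg)
    return [np.cos(el) * np.sin(az), np.cos(el) * np.cos(az), np.sin(el)]

gps_only = np.array([unit(a, e) for a, e in [(30, 60), (120, 40), (210, 55), (300, 35)]])
with_sbas = np.vstack([gps_only, unit(95, 45)])   # one extra geostationary-like satellite
print("GDOP GPS-only:", round(gdop(gps_only), 2), "GPS+SBAS:", round(gdop(with_sbas), 2))
```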
Abstract:
The contemporary world is characterized, among other factors, by the influence of new computer information systems on the behavior of individuals. However, traditional information systems still have interaction problems with users. The aim of this study was to determine whether the interaction between users and traditional information systems (particularly graphic ones) has been fully studied. To do so, the ergonomic aspects and usability of such systems were reviewed, with emphasis on problems of visibility, legibility and readability. Based on these criteria, the evolution of ergonomic studies of information systems was reviewed (bibliometric technique), and examples of ergonomic and usability problems in packaging were demonstrated (case study). The results confirm that traditional information systems still have human-system interaction problems, hindering the effective perception of information.
Abstract:
Background: Leptospirosis is an important zoonotic disease associated with poor areas of urban settings in developing countries, and early diagnosis and prompt treatment may prevent disease. Although rodents are reportedly considered the main reservoirs of leptospirosis, dogs may develop the disease, may become asymptomatic carriers and may be used as sentinels for disease epidemiology. The use of Geographical Information Systems (GIS) combined with spatial analysis techniques allows the mapping of the disease and the identification and assessment of health risk factors. In addition to GIS and spatial analysis, the data mining technique of decision trees has great potential to find patterns in the behavior of the variables that determine the occurrence of leptospirosis. The objective of the present study was to apply Geographical Information Systems and data mining (decision tree) to evaluate the risk factors for canine leptospirosis in an area of Curitiba, PR. Materials, Methods & Results: The present study was performed in Vila Pantanal, a poor urban community in the city of Curitiba. A total of 287 dog blood samples were randomly obtained house-by-house in a two-day sampling in January 2010. In addition, a questionnaire was applied to owners at the time of sampling. Geographical coordinates of each tested dog's household were obtained using a Global Positioning System (GPS) receiver for mapping the spatial distribution of dogs reagent and non-reagent to leptospirosis. For the decision tree, risk factors included the results of the microagglutination test (MAT) on dog serum, previous disease in the household, contact with rats or other dogs, dog breed, outdoor access, feeding, trash around the house or backyard, open sewer proximity and flooding. A total of 189 samples (about 2/3 of all samples) were randomly selected for the training file and the consequent decision rules. The remaining 98 samples were used for the testing file. The seroprevalence showed a spatial distribution pattern involving the entire Pantanal area, without agglomeration of reagent animals. Regarding the data mining, of the 189 samples used in the decision tree, a total of 165 (87.3%) animal samples were correctly classified, generating a Kappa index of 0.413. A total of 154 out of 159 (96.8%) non-reagent samples were correctly classified and only 5/159 (3.2%) were wrongly identified. On the other hand, only 11 (36.7%) reagent samples were correctly classified, with 19 (63.3%) samples misclassified. Discussion: The spatial distribution involving the entire Pantanal area showed that all the animals in the area are at risk of contamination by Leptospira spp. Although most samples were classified correctly by the decision tree, a degree of difficulty in separability related to seropositive animals was observed, with only 36.7% of those samples classified correctly. This can occur because the number of seronegative animals is much larger than the number of seropositive ones, which drives the differences in the pattern of variable behavior. The data mining helped to evaluate the most important risk factors for leptospirosis in a poor urban community of Curitiba. The variables selected by the decision tree reflected important factors in the occurrence of the disease (lack of sewerage, presence of rats and rubbish, and dogs with free access to the street). The analyses showed the multifactorial character of the epidemiology of canine leptospirosis.
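Reading the confusion counts quoted above as 154/5 for non-reagent and 11/19 for reagent samples, the reported Kappa index can be recomputed directly, as in the sketch below; the only assumption is that mapping of the counts onto a 2x2 confusion matrix.

```python
def cohen_kappa(tn: int, fp: int, fn: int, tp: int) -> float:
    """Cohen's kappa for a 2x2 confusion matrix (rows = actual, cols = predicted)."""
    n = tn + fp + fn + tp
    po = (tn + tp) / n                                              # observed agreement
    pe = ((tn + fp) * (tn + fn) + (fn + tp) * (fp + tp)) / n ** 2   # chance agreement
    return (po - pe) / (1 - pe)

# Non-reagent: 154 correct, 5 misclassified; reagent: 11 correct, 19 misclassified.
print(round(cohen_kappa(tn=154, fp=5, fn=19, tp=11), 3))  # ~0.413, as reported
```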
Abstract:
We observed the metacognitive trajectory of three users of a public university library in the state of São Paulo to identify the metacognitive strategies they used while searching for information in an online collective catalog. The study was based on the participants' oral reports, obtained through the individual verbal protocol technique, in order to provide an accurate picture of the task and a closer examination of their natural use of information retrieval systems. The data collection sessions were recorded, and the recordings were transcribed and categorized into four units of analysis: 1- expertise, 2- regulation, 3- assessment of one's own knowledge and 4- organization of the cognitive process. The results showed that knowledge of metacognitive strategies affects the quality of the results obtained in the information search process and interferes with how retrieval is carried out in the online catalog, a fact that emphasizes the need for studies to move forward on issues of metacognition of information users.
Abstract:
In geophysics and seismology, raw data need to be processed to generate useful information that can be turned into knowledge by researchers. The number of sensors that are acquiring raw data is increasing rapidly. Without good data management systems, more time can be spent in querying and preparing datasets for analyses than in acquiring raw data. Also, a lot of good quality data acquired at great effort can be lost forever if they are not correctly stored. Local and international cooperation will probably be reduced, and a lot of data will never become scientific knowledge. For this reason, the Seismological Laboratory of the Institute of Astronomy, Geophysics and Atmospheric Sciences at the University of São Paulo (IAG-USP) has concentrated fully on its data management system. This report describes the efforts of the IAG-USP to set up a seismology data management system to facilitate local and international cooperation. © 2011 by the Istituto Nazionale di Geofisica e Vulcanologia. All rights reserved.
Abstract:
Non-conventional database management systems are used to achieve better performance when dealing with complex data. One fundamental concept of these systems is object identity (OID): each object in the database has a unique identifier that is used to access it and to reference it in relationships to other objects. Two approaches can be used for the implementation of OIDs: physical or logical OIDs. In order to manage complex data, the Multimedia Data Manager Kernel (NuGeM) was proposed, which uses a logical technique named Indirect Mapping. This paper proposes an improvement to the technique used by NuGeM, whose original contribution is the management of OIDs with fewer disc accesses and less processing, thus reducing page management time and eliminating the problem of OID exhaustion. The technique presented here can also be applied to other OODBMSs. © 2011 IEEE.
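The general idea behind logical OIDs with indirection is that applications hold a stable logical identifier while the physical location is looked up through a mapping table, so relocating an object only updates that table. The sketch below is a generic, in-memory illustration of this idea; it is not NuGeM's actual Indirect Mapping structure, and all names are invented.

```python
class ObjectStore:
    """Toy store resolving logical OIDs through an indirection table."""
    def __init__(self):
        self._next_oid = 0
        self._mapping = {}        # logical OID -> (page, slot)
        self._pages = {}          # (page, slot) -> object payload

    def insert(self, obj, page: int, slot: int) -> int:
        oid = self._next_oid
        self._next_oid += 1
        self._mapping[oid] = (page, slot)
        self._pages[(page, slot)] = obj
        return oid                # logical OID handed out to applications

    def fetch(self, oid: int):
        return self._pages[self._mapping[oid]]

    def relocate(self, oid: int, page: int, slot: int) -> None:
        """Move the object physically; only the mapping entry changes,
        so references held by other objects remain valid."""
        old = self._mapping[oid]
        self._pages[(page, slot)] = self._pages.pop(old)
        self._mapping[oid] = (page, slot)

store = ObjectStore()
oid = store.insert({"title": "frame 1"}, page=3, slot=0)
store.relocate(oid, page=7, slot=2)       # e.g. after a page reorganisation
print(store.fetch(oid))
```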
Abstract:
The post-processing of association rules is a difficult task, since a huge number of the rules that are generated are of no interest to the user. To overcome this problem many approaches have been developed, such as objective measures and clustering. However, objective measures neither reduce nor organize the collection of rules, which makes understanding the domain difficult. On the other hand, clustering does not reduce the exploration space nor direct the user to interesting knowledge, which makes the search for relevant knowledge harder. In this context, this paper presents the PAR-COM methodology which, by combining clustering and objective measures, reduces the association rule exploration space and directs the user to what is potentially interesting. An experimental study demonstrates the potential of PAR-COM to minimize the user's effort during the post-processing process. © 2012 Springer-Verlag.
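The abstract does not detail PAR-COM's steps, so the sketch below only illustrates the general idea of combining a grouping of rules with an objective measure to prune the exploration space: rules are crudely grouped by consequent and the rule with the highest lift is kept per group. All rules and support values are invented.

```python
from collections import defaultdict

# Each rule: (antecedent, consequent, support, confidence) - synthetic examples
rules = [
    ({"bread"}, "butter", 0.20, 0.80),
    ({"bread", "milk"}, "butter", 0.10, 0.85),
    ({"beer"}, "chips", 0.15, 0.60),
    ({"soda"}, "chips", 0.05, 0.40),
]
item_support = {"butter": 0.25, "chips": 0.30}

def lift(confidence: float, consequent_support: float) -> float:
    """Objective measure: confidence relative to the consequent's baseline support."""
    return confidence / consequent_support

# "Cluster" rules by consequent (a crude grouping), then keep the best rule per cluster.
clusters = defaultdict(list)
for ant, cons, sup, conf in rules:
    clusters[cons].append((lift(conf, item_support[cons]), ant))

for cons, members in clusters.items():
    best_lift, best_ant = max(members, key=lambda m: m[0])
    print(f"cluster '{cons}': best rule {sorted(best_ant)} -> {cons} (lift={best_lift:.2f})")
```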
Abstract:
Digital data sets constitute rich sources of information, which can be extracted and evaluated with computational tools, for example those for Information Visualization. Web-based applications, such as social network environments, forums and virtual environments for Distance Learning, are good examples of such sources. The large amount of data has a direct impact on processing and analysis tasks. This paper presents the computational tool Mapper, defined and implemented to use visual representations - maps, graphics and diagrams - to support the decision-making process by analyzing data stored in the Virtual Learning Environment TelEduc-Unesp. © 2012 IEEE.
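As a rough illustration of the kind of visual summary such a tool can produce for decision making, the sketch below aggregates synthetic access-log entries and plots accesses per environment tool with matplotlib; the log structure is invented and is not TelEduc-Unesp's real schema.

```python
from collections import Counter
import matplotlib.pyplot as plt

access_log = [                      # (student, tool) pairs, synthetic data
    ("ana", "forum"), ("ana", "mail"), ("bruno", "forum"),
    ("carla", "portfolio"), ("bruno", "forum"), ("ana", "forum"),
]
per_tool = Counter(tool for _, tool in access_log)

plt.bar(list(per_tool.keys()), list(per_tool.values()))
plt.ylabel("accesses")
plt.title("Accesses per environment tool (synthetic data)")
plt.savefig("accesses_per_tool.png")   # or plt.show() in an interactive session
```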