862 results for Illinois. Data Information Systems Commission.
Abstract:
Conceptual Information Systems provide a multi-dimensional, conceptually structured view of data stored in relational databases. By restricting the expressiveness of the retrieval language, they allow sets of related queries to be visualized as conceptual hierarchies, thus supporting the search for something of which one has no precise description, only a vague idea. Information Retrieval is understood as the process of finding specific objects (documents, etc.) within a large set of objects that fit some description. In some data analysis and knowledge discovery applications, the dual task is of interest: the analyst needs to determine, for a given subset of objects, a description of that subset. In this paper we discuss how Conceptual Information Systems can be extended to support this second task as well.
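The dual task described here — finding a description for a given subset of objects — corresponds, in the Formal Concept Analysis setting that underlies Conceptual Information Systems, to computing the intent of an object set, while retrieval computes the extent of an attribute set. A minimal Python sketch of the two directions, using a made-up toy context (the paper's actual construction is not shown):

```python
# Toy formal context: objects mapped to their attribute sets (hypothetical data).
context = {
    "doc1": {"database", "retrieval", "visualization"},
    "doc2": {"database", "retrieval"},
    "doc3": {"database", "analysis"},
}

def intent(objects, context):
    """Attributes shared by all given objects, i.e. the most
    specific common description of that object subset."""
    attribute_sets = [context[o] for o in objects]
    return set.intersection(*attribute_sets) if attribute_sets else set()

def extent(attributes, context):
    """Dual operation: all objects that fit the given description."""
    return {o for o, attrs in context.items() if attributes <= attrs}

# Retrieval direction: description -> objects.
print(extent({"database", "retrieval"}, context))  # {'doc1', 'doc2'}
# Dual direction: objects -> description.
print(intent({"doc1", "doc2"}, context))           # {'database', 'retrieval'}
```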
Abstract:
Conceptual Information Systems are based on a formalization of the concept of "concept" as it is discussed in traditional philosophical logic. This formalization supports a human-centered approach to the development of Information Systems. We discuss this approach by means of an implemented Conceptual Information System for supporting IT security management in companies and organizations.
Abstract:
In database marketing, customer behavior is analyzed by studying the transactions customers have performed. To obtain a global picture of a customer's behavior, their individual transactions have to be composed together. In On-Line Analytical Processing, this operation is known as reverse pivoting. As the data analysis process proceeds, reverse pivoting has to be repeated several times, usually requiring a new implementation in SQL each time. In this paper, we present a construction of conceptual scales for reverse pivoting in Conceptual Information Systems and discuss its visualization. The construction allows previously created queries to be reused without reprogramming and offers a visualization of the results as line diagrams.
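Reverse pivoting composes the many transaction rows of a customer into a single row describing that customer. As a rough illustration of the operation itself (not of the paper's conceptual-scale construction), here is a sketch using pandas with invented column names:

```python
import pandas as pd

# Hypothetical transaction table: one row per transaction.
transactions = pd.DataFrame({
    "customer": ["A", "A", "B", "B", "B"],
    "product":  ["wine", "cheese", "wine", "bread", "wine"],
    "amount":   [20.0, 5.0, 12.0, 3.0, 9.0],
})

# Reverse pivoting: aggregate each customer's transactions into a single
# row, with one column per product (here: total amount spent).
profile = transactions.pivot_table(
    index="customer", columns="product", values="amount",
    aggfunc="sum", fill_value=0.0,
)
print(profile)
```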
Abstract:
Our ability to identify, acquire, store, enquire on and analyse data is increasing as never before, especially in the GIS field. Technologies are becoming available to manage a wider variety of data and to make intelligent inferences on that data. The mainstream arrival of large-scale database engines is not far away. The experience of using the first such products tells us that they will radically change data management in the GIS field.
Abstract:
Studies on learning management systems (LMS) have largely been technical in nature, with an emphasis on evaluating the human-computer interaction (HCI) processes involved in using the LMS. This paper reports a study that evaluates the information interaction processes of an eLearning course used to teach applied Statistics; the eLearning course stands in here for information systems in general. The study explores the issue of context that goes missing when information is stored in information systems. Using the semiotic framework as a guide, the researchers evaluated an existing eLearning course with a view to proposing a model for designing improved eLearning courses for future eLearning programmes. In this exploratory study, a survey questionnaire was used to collect data from 160 participants in an eLearning course on Statistics in Applied Climatology. The participants' views were analysed with a focus solely on human information interaction issues. Guided by the semiotic framework, syntactic, semantic, pragmatic, and social context gaps were identified. The information interaction problems identified include ambiguous instructions, inadequate information, lack of sound, and interface design problems, among others. These problems affected the quality of the new knowledge created by the participants. The researchers thus highlight the challenges of missing information context when data is stored in an information system. The study concludes by proposing a human information interaction model for improving information interaction quality in the design of eLearning courses on learning management platforms and in other information systems.
Abstract:
Developing software is still a risky business. After 60 years of experience, the community is still not able to consistently build Information Systems (IS) for organizations with predictable quality, within previously agreed budget and time constraints. Although software is changeable, we are still unable to cope with the amount and complexity of change that organizations demand of their IS. To improve results, developers have followed two alternatives: frameworks, which increase productivity but constrain the flexibility of possible solutions; and agile ways of developing software, which keep flexibility with fewer upfront commitments. With strict frameworks, specific hacks have to be put in place to get around the framework's construction options; over time this leads to inconsistent architectures that are harder to maintain due to incomplete documentation and human resource turnover. The main goal of this work is to create a new way to develop flexible IS for organizations, using web technologies, in a faster, better, and cheaper way that is more suited to handling organizational change. To do so we propose an adaptive object model that uses a new ontology for data and action with strict normalizing rules. These rules bound the effects of changes, which can then be better tested and corrected. Interfaces are built with templates of resources that can be reused and extended in a flexible way. The "state of the world" for each IS is determined by all production and coordination acts that agents have performed over time, including those performed by external systems. When bugs are found during maintenance, their past cascading effects can be checked through simulation: re-running the log of transaction acts over time and checking the results against previous records. This work implements a prototype of part of the proposed system in order to make a preliminary assessment of its feasibility and limitations.
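The replay idea — recomputing the "state of the world" by re-running the log of acts and comparing against previous records — can be sketched roughly as follows; all names and the act vocabulary are invented for illustration and are not the ontology proposed in the work:

```python
from dataclasses import dataclass

@dataclass
class Act:
    """One production or coordination act performed by an agent."""
    agent: str
    action: str   # illustrative vocabulary: "create", "update", "delete"
    entity: str
    value: object

def replay(log):
    """Rebuild the current state by re-running all logged acts in order.
    After a bug fix, replaying the same log and diffing the resulting
    state against recorded states exposes past cascading effects."""
    state = {}
    for act in log:
        if act.action in ("create", "update"):
            state[act.entity] = act.value
        elif act.action == "delete":
            state.pop(act.entity, None)
    return state

log = [
    Act("alice", "create", "order-1", {"total": 100}),
    Act("bob",   "update", "order-1", {"total": 120}),
]
print(replay(log))  # {'order-1': {'total': 120}}
```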
Abstract:
It can be said that technological evolution (the development of new measurement instruments such as software, satellites, and computers, as well as the falling cost of storage media) allows organizations to produce and acquire large amounts of data in a short time. Because of this data volume, research organizations become potentially vulnerable to the impacts of the information explosion. One solution adopted by some organizations is the use of information system tools to support the documentation, retrieval, and analysis of data. In the scientific domain, these tools are developed to store different metadata standards (data about data). In the development of such tools, the adoption of standards such as the Unified Modeling Language (UML), whose diagrams support the modeling of different aspects of the software, stands out. The goal of this study is to present an information system tool that supports the documentation of organizational data through metadata, and to highlight the software modeling process using UML. It covers the Digital Geospatial Metadata Standard, widely used for data cataloging by scientific organizations around the world, and UML's dynamic and static diagrams, such as use case, sequence, and class diagrams. The development of information system tools can be a way to promote the organization and dissemination of scientific data. However, the modeling process requires special attention to the development of interfaces that will encourage the use of these tools.
Abstract:
The purpose of this work was to study the fragmentation of forest formations (mesophytic forest, riparian woodland, and savannah vegetation (cerrado)) in a 15,774-ha study area located in the Municipal District of Botucatu in Southeastern Brazil (São Paulo State). A land use and land cover map was made from a color composite of a Landsat-5 Thematic Mapper (TM) image. The edge effect caused by habitat fragmentation was assessed by overlaying, on a geographic information system (GIS), the land use and land cover data with the spectral ratio. The degree of habitat fragmentation was analyzed by deriving: 1. mean patch area and perimeter; 2. patch number and density; 3. perimeter-area ratio, fractal dimension (D), and shape diversity index (SI); and 4. distance between patches and dispersion index (R). In addition, the following relationships were modeled: 1. the distribution of natural vegetation patch sizes; 2. the perimeter-area relationship and the number and area of natural vegetation patches; and 3. the edge effect caused by habitat fragmentation. The values of R indicated that savannah patches (R = 0.86) were aggregated, while patches of natural vegetation as a whole (R = 1.02) were randomly dispersed in the landscape. There was a high frequency of small patches in the landscape, whereas large patches were rare. In the perimeter-area relationship, there was no sign of scale distinction in the patch shapes. In the patch number-landscape area relationship, D, though apparently scale-dependent, tends to be constant as area increases. This phenomenon was correlated with the tendency to reach a constant density as the working scale was increased. In the edge effect analysis, the edge-center distance was properly estimated by a model in which the edge-center distance was considered a function of the total patch area and the SI. (C) 1997 Elsevier B.V.
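For orientation, the patch metrics named above have widely used textbook forms, which this study may define slightly differently: a shape index SI = P / (2 * sqrt(pi * A)), a fractal dimension D estimated from the slope of log-perimeter against log-area (P ~ A^(D/2)), and the Clark-Evans dispersion index R, the ratio of the observed mean nearest-neighbour distance to the value expected under complete spatial randomness (R < 1 indicates aggregation, R near 1 randomness). A small Python sketch under these standard definitions:

```python
import math

def shape_index(perimeter, area):
    """SI = P / (2*sqrt(pi*A)); 1 for a circle, larger for complex shapes."""
    return perimeter / (2.0 * math.sqrt(math.pi * area))

def fractal_dimension(perimeters, areas):
    """Estimate D from the perimeter-area relation P ~ A^(D/2),
    i.e. D = 2 * slope of the regression of log P on log A."""
    xs = [math.log(a) for a in areas]
    ys = [math.log(p) for p in perimeters]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 2.0 * slope

def clark_evans_r(points, landscape_area):
    """R = observed mean nearest-neighbour distance / distance expected
    under complete spatial randomness (0.5 / sqrt(point density))."""
    def nnd(p):
        return min(math.dist(p, q) for q in points if q is not p)
    observed = sum(nnd(p) for p in points) / len(points)
    expected = 0.5 / math.sqrt(len(points) / landscape_area)
    return observed / expected

# Hypothetical (perimeter, area) pairs for three patches.
patches = [(60.0, 100.0), (140.0, 400.0), (260.0, 1200.0)]
print(fractal_dimension([p for p, a in patches], [a for p, a in patches]))
```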
Abstract:
Background: Recent advances in medical and biological technology have stimulated the development of new testing systems that provide huge, varied amounts of molecular and clinical data. Growing data volumes pose significant challenges for the information processing systems of research centers. In addition, the routines of a genomics laboratory are typically characterized by high parallelism in testing and constant procedure changes. Results: This paper describes a formal approach to address this challenge through the implementation of a genetic testing management system for a human genome laboratory. We introduce the Human Genome Research Center Information System (CEGH) in Brazil, a system that is able to accommodate constant changes in human genome testing and can provide patients with updated results based on the most recent validated genetic knowledge. Our approach uses a common repository for process planning to ensure reusability, specification, instantiation, monitoring, and execution of processes, which are defined using a relational database and rigorous control-flow specifications based on process algebra (ACP). The main difference between our approach and related work is that we join two important aspects: 1) process scalability, achieved through the relational database implementation, and 2) process correctness, ensured by the process algebra. Furthermore, the software allows end users to define genetic tests without requiring any knowledge of business process notation or process algebra. Conclusions: This paper presents the CEGH information system, a Laboratory Information Management System (LIMS) based on a formal framework to support genetic testing management for Mendelian disorder studies. We demonstrate the feasibility and usability benefits of a rigorous approach that is able to specify, validate, and perform genetic testing through simple end-user interfaces.
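To give a flavour of how ACP-style control-flow specifications constrain a testing workflow, here is a toy interpreter for two of the algebra's operators, sequential and alternative composition; it is only a schematic illustration, not the CEGH implementation:

```python
# Toy encoding of a fragment of ACP-style process terms:
#   ("seq", p, q)  - sequential composition p . q
#   ("alt", p, q)  - alternative composition p + q
#   "a"            - an atomic action (a single lab step)

def traces(term):
    """Enumerate all action sequences a process term allows."""
    if isinstance(term, str):                 # atomic action
        return [[term]]
    op, p, q = term
    if op == "seq":
        return [tp + tq for tp in traces(p) for tq in traces(q)]
    if op == "alt":
        return traces(p) + traces(q)
    raise ValueError(f"unknown operator: {op}")

# A hypothetical genetic test: extract DNA, then either sequence or
# genotype, then report.
test = ("seq", "extract_dna",
        ("seq", ("alt", "sequence", "genotype"), "report"))
for t in traces(test):
    print(" -> ".join(t))
```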
Abstract:
Over the last three decades, remote sensing and GIS have become increasingly important in the geosciences as a complement to conventional methods of data collection and map production. The present work deals with the application of remote sensing and geographic information systems (GIS) to geomorphological investigations. Combining both techniques has made it possible, above all, to capture geomorphological forms both in overview and in detail. Topographic and geological maps, satellite images, and climate data serve as the basis for this work. The thesis consists of six chapters. The first chapter gives a general overview of the study area, describing its morphological units, the climatic conditions (in particular the aridity indices of the coastal and mountain landscapes), and the settlement pattern. Chapter 2 deals with the regional geology and stratigraphy of the study area; an attempt is made to identify the main formations using ETM satellite images, applying colour band composites, image ratioing, and supervised classification. Chapter 3 describes the structurally controlled surface forms in order to clarify the interaction between tectonics and geomorphological processes. Various image processing methods are used to reliably interpret the lineaments present in the mountain body, and special filtering methods are applied to map the most important lineaments. Chapter 4 attempts an automated extraction of the drainage network from preprocessed SRTM data. It discusses in detail the extent to which the quality of small-scale SRTM data is comparable to large-scale topographic maps in these processing steps. Furthermore, hydrological parameters are derived through a qualitative and quantitative analysis of the discharge regime of individual wadis, and the origin of the drainage systems is interpreted on the basis of geomorphological and geological evidence. Chapter 5 deals with the hazard assessment of episodic wadi floods. The probability of their annual occurrence, and of strong floods at intervals of several years, is traced back historically to 1921. The significance of rain-bearing low-pressure systems that develop over the Red Sea and can generate runoff is investigated using the IDW (Inverse Distance Weighting) method, and further rain-bringing weather situations are examined using Meteosat infrared images. The period 1990-1997, during which heavy rainfall triggered wadi floods, is examined in detail. Flood events and flood levels are determined from hydrographic data (gauge measurements), and land use and settlement structure in a wadi's catchment area are also taken into account. Chapter 6 deals with the different coastal forms on the western side of the Red Sea, for example erosional forms, depositional forms, and submerged forms. The final part addresses the stratigraphy and dating of submarine terraces on coral reefs, as well as a comparison with other such terraces on the Egyptian Red Sea coast west and east of the Sinai Peninsula.
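The IDW method used in Chapter 5 estimates a value at an unsampled location as a distance-weighted average of station values, with weights 1/d^p. A minimal Python sketch with the common default power p = 2 (the thesis's exact parameters are not given here):

```python
import math

def idw(x, y, stations, power=2.0):
    """Interpolate a value at (x, y) from (xi, yi, value) stations
    using inverse distance weighting."""
    num, den = 0.0, 0.0
    for sx, sy, value in stations:
        d = math.hypot(x - sx, y - sy)
        if d == 0.0:
            return value          # exactly at a station
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

# Hypothetical rain gauge readings: (x, y, mm of rain).
stations = [(0.0, 0.0, 12.0), (10.0, 0.0, 30.0), (5.0, 8.0, 18.0)]
print(idw(4.0, 3.0, stations))
```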
Abstract:
A post-classification change detection technique based on a hybrid classification approach (unsupervised and supervised) was applied to Landsat Thematic Mapper (TM), Landsat Enhanced Thematic Mapper Plus (ETM+), and ASTER images acquired in 1987, 2000, and 2004, respectively, to map land use/cover changes in the Pic Macaya National Park in the southern region of Haiti. Each image was classified individually into six land use/cover classes: built-up, agriculture, herbaceous, open pine forest, mixed forest, and barren land, using the unsupervised ISODATA and maximum likelihood supervised classifiers with the aid of ground truth data collected in the field. Ground truth information collected in the field in December 2007, including equalized stratified random points that were visually interpreted, was used to assess the accuracy of the classification results. The overall accuracy of the land classification for each image was, respectively: 1987 (82%), 2000 (82%), 2004 (87%). A post-classification change detection technique was used to produce change images for 1987 to 2000, 1987 to 2004, and 2000 to 2004. It was found that significant changes in land use/cover occurred over the 17-year period. The results showed increases in built-up (from 10% to 17%) and herbaceous (from 5% to 14%) areas between 1987 and 2004. The increase in herbaceous cover was mostly caused by the abandonment of exhausted agricultural land. Over the same period, open pine forest and mixed forest lost 75% and 83% of their area to other land use/cover types: open pine forest (from 20% to 14%) and mixed forest (from 18% to 12%) were transformed into agricultural areas or barren land. This study illustrates the continuing deforestation, land degradation, and soil erosion in the region, which in turn are leading to a decrease in vegetative cover. The study also shows the importance of Remote Sensing (RS) and Geographic Information System (GIS) technologies for estimating timely changes in land use/cover and evaluating their causes, in order to design an ecologically based management plan for the park.
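Post-classification change detection compares two independently classified maps pixel by pixel and cross-tabulates the class transitions; overall accuracy is the fraction of reference pixels classified correctly. A small numpy sketch of both steps, on toy data rather than the study's imagery:

```python
import numpy as np

classes = ["built-up", "agriculture", "herbaceous",
           "open pine forest", "mixed forest", "barren land"]

def change_matrix(map_t1, map_t2, n_classes):
    """Cross-tabulate per-pixel class transitions between two dates.
    Entry [i, j] counts pixels that moved from class i to class j."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    np.add.at(m, (map_t1.ravel(), map_t2.ravel()), 1)
    return m

def overall_accuracy(classified, reference):
    """Fraction of ground-truth reference pixels classified correctly."""
    return np.mean(classified == reference)

# Toy 3x3 classified maps for two dates (class codes 0..5).
t1987 = np.array([[3, 3, 1], [3, 4, 1], [1, 1, 0]])
t2004 = np.array([[1, 3, 1], [5, 4, 2], [1, 0, 0]])
print(change_matrix(t1987, t2004, len(classes)))
```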
Abstract:
Cloud Computing is an enabler for delivering large-scale, distributed enterprise applications with strict performance requirements. Such applications often have complex scaling and Service Level Agreement (SLA) management requirements. In this paper we present a simulation approach for validating and comparing SLA-aware scaling policies in the CloudSim simulator, driven by data from an actual Distributed Enterprise Information System (dEIS). We extend CloudSim with concurrent and multi-tenant task simulation capabilities and then show how different scaling policies can be used to simulate multiple dEIS applications. We present multiple experiments depicting the impact of VM scaling on both datacenter energy consumption and dEIS performance indicators.
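CloudSim itself is a Java toolkit, so the following Python fragment is only a schematic illustration of what a threshold-based SLA-aware scaling policy looks like; the thresholds and names are assumptions, not the policies evaluated in the paper:

```python
def scale_decision(response_time_ms, sla_target_ms, n_vms,
                   min_vms=1, max_vms=10, headroom=0.7):
    """Threshold-based SLA-aware scaling: add a VM when the measured
    response time exceeds the SLA target, remove one when there is
    ample headroom. All thresholds here are illustrative assumptions."""
    if response_time_ms > sla_target_ms and n_vms < max_vms:
        return n_vms + 1                    # scale out: SLA violated
    if response_time_ms < headroom * sla_target_ms and n_vms > min_vms:
        return n_vms - 1                    # scale in: save energy
    return n_vms                            # steady state

# Replay a workload trace of measured response times (ms).
n_vms = 2
for rt in [120, 180, 260, 310, 240, 150, 90]:
    n_vms = scale_decision(rt, sla_target_ms=250, n_vms=n_vms)
    print(f"response={rt}ms -> vms={n_vms}")
```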
Abstract:
Purpose. To examine the association between living in proximity to Toxics Release Inventory (TRI) facilities and the incidence of childhood cancer in the State of Texas. Design. This is a secondary data analysis utilizing the publicly available Toxics Release Inventory (TRI), maintained by the U.S. Environmental Protection Agency, which lists the facilities that release any of the 650 TRI chemicals. Total childhood cancer cases and childhood cancer rates (ages 0-14 years) by county for the years 1995-2003 were taken from the Texas Cancer Registry, available on the Texas Department of State Health Services website. Setting. This study was limited to the child population of the State of Texas. Method. Analysis was done using Stata version 9 and SPSS version 15.0. SaTScan was used for geographical spatial clustering of childhood cancer cases based on county centroids, using the Poisson clustering algorithm, which adjusts for population density. Maps were created using MapInfo Professional version 8.0. Results. One hundred and twenty-five counties had no TRI facilities in their region, while 129 counties had at least one TRI facility. An increasing trend in the number of facilities and total disposal was observed, except for the highest category based on cancer rate quartiles. Linear regression using log transformations of the number of facilities and total disposal to predict cancer rates was computed; however, neither variable was found to be a significant predictor. Seven significant geographical spatial clusters of counties with high childhood cancer rates (p<0.05) were indicated. Binomial logistic regression, dichotomizing the cancer rate into two groups (<=150 and >150), indicated an odds ratio of 1.58 (CI 1.127, 2.222) for the natural log of the number of facilities. Conclusion. We have used a unique methodology, combining GIS and spatial clustering techniques with existing statistical approaches, in examining the association between living in proximity to TRI facilities and the incidence of childhood cancer in the State of Texas. Although a concrete association was not indicated, further studies examining specific TRI chemicals are required. This information can enable researchers and the public to identify potential concerns, gain a better understanding of potential risks, and work with industry and government to reduce toxic chemical use, disposal, or other releases and the risks associated with them. TRI data, in conjunction with other information, can be used as a starting point in evaluating exposures and risks.
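The binomial logistic regression step — regressing the dichotomized county cancer rate on the natural log of the facility count and exponentiating the coefficient to obtain an odds ratio — can be sketched as follows on synthetic data (the reported OR of 1.58 comes from the actual Texas data, not from this toy example):

```python
import numpy as np
import statsmodels.api as sm

# Synthetic county-level data (illustrative only, not the Texas registry).
rng = np.random.default_rng(0)
n_facilities = rng.integers(1, 60, size=200)
high_rate = (rng.random(200) < 0.3 + 0.05 * np.log(n_facilities)).astype(int)

# Dichotomized outcome (e.g. rate > 150 -> 1) regressed on the natural
# log of the number of TRI facilities in the county.
X = sm.add_constant(np.log(n_facilities))
fit = sm.Logit(high_rate, X).fit(disp=0)

# The exponentiated slope is the odds ratio per unit increase in log(count).
print("odds ratio:", np.exp(fit.params[1]))
```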