911 results for Web-Centric Expert System
Abstract:
eHabitat is a Web Processing Service (WPS) designed to compute the likelihood of finding ecosystems with equal properties. Inputs to the WPS, typically thematic geospatial "layers", can be discovered using standardised catalogues, and the outputs can be tailored to specific end-user needs. Because these layers can range from geophysical data captured through remote sensing to socio-economic indicators, eHabitat is exposed to a broad range of different types and levels of uncertainty. Potentially chained to other services to perform ecological forecasting, for example, eHabitat would be an additional component further propagating uncertainties from a potentially long chain of model services. This integration of complex resources increases the challenges of dealing with uncertainty. For such a system, as envisaged by initiatives such as the "Model Web" from the Group on Earth Observations, to be used for policy or decision making, users must be provided with information on the quality of the outputs, since all system components will be subject to uncertainty. UncertWeb will create the Uncertainty-Enabled Model Web by promoting interoperability between data and models with quantified uncertainty, building on existing open, international standards. The objective of this paper is to illustrate a few key ideas behind UncertWeb using eHabitat, to discuss the main types of uncertainty the WPS has to deal with, and to present the benefits of using the UncertWeb framework.
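The abstract does not describe eHabitat's internals, but the core idea of propagating quantified uncertainty through a chain of model services can be sketched with a toy Monte Carlo example; the "services" and parameter values below are hypothetical stand-ins, not the real eHabitat or UncertWeb components.

```python
# Minimal sketch: Monte Carlo propagation of input uncertainty through a
# chained model service, in the spirit of the Uncertainty-Enabled Model Web.
# Both "services" below are toy stand-ins, not the real eHabitat WPS.
import numpy as np

rng = np.random.default_rng(42)

def climate_service(temperature_mean, temperature_sd):
    """Stand-in upstream service: returns one realisation of a climate layer."""
    return rng.normal(temperature_mean, temperature_sd, size=(50, 50))

def habitat_similarity(layer, reference=15.0, scale=5.0):
    """Stand-in for eHabitat: likelihood of finding ecosystems with similar properties."""
    return np.exp(-((layer - reference) ** 2) / (2 * scale ** 2))

# Propagate uncertainty: rerun the whole chain for many input realisations.
runs = np.stack([habitat_similarity(climate_service(14.0, 2.0)) for _ in range(200)])
mean_map, sd_map = runs.mean(axis=0), runs.std(axis=0)

print(f"mean similarity {mean_map.mean():.3f}, "
      f"average output sd {sd_map.mean():.3f}")  # the sd map quantifies propagated uncertainty
```

The spread of the output maps (their standard deviation) is the kind of output-quality information the Model Web vision asks services to expose alongside their results.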
Abstract:
A strategically important element of responsible corporate governance is enterprise-level risk management, which is one of the most challenging areas facing corporate management today. Effective enterprise risk management cannot be achieved merely by following the risk management principles laid down in the general international and domestic literature; when designing a risk management system, both industry-specific and company-specific characteristics must be taken into account. This is particularly important for a company with such a specialised activity as an electricity transmission system operator (TSO). In this article the authors present a complex theoretical and practical framework, produced in the course of research conducted in cooperation with the Hungarian electricity transmission system operator, on the basis of which a new risk management methodology, uniform across business areas, was developed for the TSO (with a focus on the methodological steps of risk identification and quantification), which is suitable for determining the company-level risk exposure. _______ This study handles one of today's most challenging areas of enterprise management: the development and introduction of an integrated and efficient risk management system. For companies operating in specific network industries with a dominant market share and a key role in the national economy, such as electricity TSOs, risk management is of particular importance. The study introduces an innovative, mathematically and statistically grounded as well as economically reasoned management approach for the identification, individual effect calculation, and summation of risk factors. Every building block is customized for the organizational structure and operating environment of the TSO. While the identification phase guarantees completeness, the calculation phase incorporates expert techniques and Monte Carlo simulation, and the summation phase presents the expected combined distribution and value effect of the risks on the company's profit lines, based on previously undiscovered correlations between individual risk factors.
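As an illustration of the calculation and summation phases described above, the sketch below (with invented figures) draws correlated risk-factor impacts jointly and sums them into a combined distribution; it is a generic Monte Carlo aggregation, not the TSO's actual methodology or data.

```python
# Toy aggregation of correlated risk factors: simulate them jointly and sum the
# impacts into one combined distribution, rather than adding them independently.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical individual risk factors: mean impact and standard deviation (mEUR).
means = np.array([-2.0, -1.0, -0.5])
sds = np.array([1.0, 0.8, 0.3])

# Assumed correlation between factors (e.g. energy price vs. regulatory risk).
corr = np.array([[1.0, 0.4, 0.1],
                 [0.4, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])
cov = np.outer(sds, sds) * corr

samples = rng.multivariate_normal(means, cov, size=100_000)
combined = samples.sum(axis=1)          # company-level impact per scenario

print(f"expected combined impact: {combined.mean():.2f} mEUR")
print(f"5th percentile (adverse scenario): {np.percentile(combined, 5):.2f} mEUR")
```

Simulating the factors jointly, rather than summing independent distributions, is what lets the correlations between factors show up in the combined effect on the profit lines.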
Abstract:
In this paper five different models, serving as five modules of a complex agro-ecosystem, are investigated. The water and nutrient flow in soil is simulated by the nutrient-in-soil model, while the biomass change according to seasonal weather, the nutrient content of the soil, and the biotic interactions among the other members of the food web is simulated by the food web population dynamical model, which is constructed for a homogeneous piece of field. The food web model is based on the nutrient-in-soil model and on the activity function evaluator model, which expresses the effect of temperature. The numbers of individuals in all phenological phases of the different populations are given by the phenology model. The food web model is extended to an inhomogeneous piece of field by the spatial extension model. Finally, as an additional module, an application of the above models to multivariate state-planes is given. The modules built into the system are closely connected to each other, as they utilize each other's outputs; nevertheless, they can also work separately. Some case studies are analysed and a summarized outlook is given.
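As a rough illustration of how such modules can feed one another, the toy sketch below chains a soil-nutrient step, a temperature activity function, and a biomass update; all functions and constants are invented for illustration and are not the paper's actual models.

```python
# Toy module chain: a soil-nutrient update feeds a population-dynamics step
# whose growth is scaled by a temperature activity function.
import math

def activity(temperature_c, optimum=25.0, width=8.0):
    """Toy activity function: growth efficiency as a function of temperature."""
    return math.exp(-((temperature_c - optimum) / width) ** 2)

def soil_nutrient_step(nutrient, uptake, replenishment=0.05):
    """Toy nutrient-in-soil module: nutrients removed by uptake, partly replenished."""
    return max(nutrient - uptake + replenishment, 0.0)

def biomass_step(biomass, nutrient, temperature_c, growth=0.4, mortality=0.1):
    """Toy food-web module: biomass growth limited by nutrients and temperature."""
    uptake = growth * activity(temperature_c) * nutrient * biomass
    return biomass + uptake - mortality * biomass, uptake

nutrient, biomass = 1.0, 0.2
for day, temp in enumerate([12, 18, 24, 29, 26, 20]):   # toy weather series
    biomass, uptake = biomass_step(biomass, nutrient, temp)
    nutrient = soil_nutrient_step(nutrient, uptake)
    print(f"day {day}: T={temp}°C biomass={biomass:.3f} nutrient={nutrient:.3f}")
```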
Abstract:
Database design is a difficult problem for non-expert designers. It is desirable to assist such designers during the problem-solving process by means of a knowledge-based (KB) system. A number of prototype KB systems have been proposed; however, they have many shortcomings. Few have incorporated sufficient expertise in modeling relationships, particularly higher-order relationships. There has been no empirical study that experimentally tested the effectiveness of any of these KB tools, and the problem-solving behavior of the non-experts whom the systems were intended to assist has not been one of the bases for system design. In this project a consulting system for conceptual database design that addresses the above shortcomings was developed and empirically validated. The system incorporates (a) findings on why non-experts commit errors and (b) heuristics for modeling relationships. Two approaches to knowledge-base implementation, system restrictiveness and decisional guidance, were used and compared in this project. The Restrictive approach is proscriptive and limits the designer's choices at various design phases by forcing him/her to follow a specific design path. The Guidance approach, which is less restrictive, provides context-specific, informative, and suggestive guidance throughout the design process. The main objectives of the study are to evaluate (1) whether the knowledge-based system is more effective than a system without the knowledge base and (2) which knowledge-implementation strategy, restrictive or guidance, is more effective. To evaluate the effectiveness of the knowledge base itself, the two systems were compared with a system that does not incorporate the expertise (Control). The experimental procedure involved student subjects solving a task without using the system (pre-treatment task) and another task using one of the three systems (experimental task). The experimental task scores of those subjects who performed satisfactorily in the pre-treatment task were analyzed. The results are: (1) the knowledge-based approach to database design support led to more accurate solutions than the control system; (2) there was no significant difference between the two KB approaches; (3) the Guidance approach led to the best performance; and (4) the subjects perceived the Restrictive system to be easier to use than the Guidance system.
Abstract:
Because some Web users are able to design a template to visualize information from scratch, while other users need to visualize information automatically by changing some parameters, providing different levels of customization of the information is a desirable goal. Our system allows the automatic generation of visualizations given the semantics of the data, as well as static or pre-specified visualizations created through an interface language. We address information visualization with the Web in mind, where the presentation of the retrieved information is a challenge. We provide a model that narrows the gap between the user's way of expressing queries and database manipulation languages (SQL) without changing the system itself, thus improving the query specification process. We develop a Web interface model that is integrated with the HTML language to create a powerful language that facilitates the construction of Web-based database reports. As opposed to other work, this model offers a new way of exploring databases, focusing on providing Web connectivity to databases with minimal or no result buffering, formatting, or extra programming. We describe how to easily connect the database to the Web. In addition, we offer an enhanced way of viewing and exploring the contents of a database, allowing users to customize their views depending on the contents and the structure of the data. Current database front-ends typically attempt to display the database objects in a flat view, making it difficult for users to grasp the contents and the structure of their result. Our model narrows the gap between databases and the Web. The overall objective of this research is to construct a model that accesses different databases easily across the net and generates SQL, forms, and reports across all platforms without requiring the developer to code a complex application, which increases the speed of development. In addition, using only a Web browser, the end user can retrieve data from databases remotely and make the necessary modifications and manipulations of the data using Web-formatted forms and reports, independent of the platform, without having to open different applications or learn to use anything but their Web browser. We introduce a strategic method to generate and construct SQL queries, enabling inexperienced users who are not well versed in SQL to build a syntactically and semantically valid SQL query and to understand the retrieved data. The generated SQL query is validated against the database schema to ensure safe and efficient SQL execution. (Abstract shortened by UMI.)
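A minimal sketch of the query-generation idea described above: form-style selections are turned into a parameterized SQL statement only after the requested table and columns have been checked against the schema. The schema, table, and column names below are hypothetical.

```python
# Toy SQL generation with schema validation: identifiers are checked against a
# known schema and values are passed as parameters, never interpolated.
SCHEMA = {"employees": {"name", "department", "salary"}}

def build_query(table, columns, filters):
    """Return (sql, params) or raise if the request does not match the schema."""
    if table not in SCHEMA:
        raise ValueError(f"unknown table: {table}")
    bad = [c for c in list(columns) + list(filters) if c not in SCHEMA[table]]
    if bad:
        raise ValueError(f"unknown columns: {bad}")
    where = " AND ".join(f"{col} = ?" for col in filters)        # placeholders, not values
    sql = f"SELECT {', '.join(columns)} FROM {table}"
    if where:
        sql += f" WHERE {where}"
    return sql, list(filters.values())

sql, params = build_query("employees", ["name", "salary"], {"department": "Sales"})
print(sql, params)   # SELECT name, salary FROM employees WHERE department = ? ['Sales']
```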
Abstract:
Methods for accessing data on the Web have been the focus of active research over the past few years. In this thesis we propose a method for representing Web sites as data sources. We designed Data Extractor, a data retrieval solution that allows us to define queries to Web sites and process the resulting data sets. Data Extractor is being integrated into the MSemODB heterogeneous database management system; with its help, database queries can be distributed over both local and Web data sources within the MSemODB framework. Data Extractor treats Web sites as data sources, controlling query execution and data retrieval, and works as an intermediary between the applications and the sites. Data Extractor utilizes a twofold "custom wrapper" approach to information retrieval: wrappers for the majority of sites are easily built using a powerful and expressive scripting language, while complex cases are processed using Java-based wrappers that utilize a specially designed library of data retrieval, parsing, and Web access routines. In addition to wrapper development, we thoroughly investigate issues associated with Web site selection, analysis, and processing. Data Extractor is designed to act as a data retrieval server as well as an embedded data retrieval solution. We also use it to create mobile agents that are shipped over the Internet to the client's computer to perform data retrieval on behalf of the user. This approach allows Data Extractor to distribute and scale well. This study confirms the feasibility of building custom wrappers for Web sites; the approach provides accuracy of data retrieval, as well as power and flexibility in handling complex cases.
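The scripting language and wrapper library themselves are not shown in the abstract; as a rough illustration of the custom-wrapper idea, the sketch below turns a small inline HTML fragment (standing in for a fetched page) into tabular records using only the Python standard library.

```python
# Toy "custom wrapper": turns a site's HTML into tabular records that a mediator
# could query. The page below is an inline stand-in for a downloaded document.
from html.parser import HTMLParser

PAGE = """<ul>
  <li><span class="title">Widget A</span> <span class="price">9.99</span></li>
  <li><span class="title">Widget B</span> <span class="price">14.50</span></li>
</ul>"""

class ProductWrapper(HTMLParser):
    """Collects (title, price) rows from spans with known class names."""
    def __init__(self):
        super().__init__()
        self.rows, self._field, self._current = [], None, {}

    def handle_starttag(self, tag, attrs):
        self._field = dict(attrs).get("class") if tag == "span" else None

    def handle_endtag(self, tag):
        if tag == "span":
            self._field = None

    def handle_data(self, data):
        if self._field in ("title", "price") and data.strip():
            self._current[self._field] = data.strip()
            if len(self._current) == 2:
                self.rows.append((self._current["title"], float(self._current["price"])))
                self._current = {}

wrapper = ProductWrapper()
wrapper.feed(PAGE)
print(wrapper.rows)   # [('Widget A', 9.99), ('Widget B', 14.5)]
```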
Defining the role of floating periphyton mats in shaping food-web dynamics in the Florida Everglades
Abstract:
Expansive periphyton mats are a striking characteristic of the Florida Everglades. Floating periphyton mats are home to a diverse macroinvertebrate community dominated by chironomid and ceratopogonid larvae and amphipods, which use the mat as both a food resource and a refuge from predation. While this periphyton complex functions as a self-organizing system, it also serves as a base for trophic interactions with larger organisms. The purpose of my research was to quantify variation in the macroinvertebrate community inhabiting floating periphyton mats, describe the role of mats in shaping food-web dynamics, and describe how these trophic interactions change with eutrophication. I characterized the macroinvertebrate community inhabiting periphyton through a wet season by describing spatial variation on scales from 0.2 m to 3 km. Floating periphyton mats contained a diverse macroinvertebrate community, with greater taxonomic richness and higher densities of many taxa than adjacent microhabitats. Macroinvertebrate density increased through the wet season as periphyton mats developed. While some variation was noted among sites, spatial patterns were not observed on smaller scales. I also sampled ten sites representing gradients of hydroperiod and nutrient (P) levels. The density of macroinvertebrates inhabiting periphyton mats increased with increasing P availability; however, short-hydroperiod P-enriched sites had the highest macroinvertebrate density. This pattern suggests a synergistic interaction of top-down and bottom-up effects. In contrast, macroinvertebrate density was lower in benthic floc, where it was negatively correlated with hydroperiod. I used two types of mesocosms (field cages and tanks) to manipulate large consumers (fish and grass shrimp) with inclusion/exclusion cages over an experimental P gradient. In most cases, periphyton mats served as an effective predation refuge. Macroinvertebrates were consumed more frequently in P-enriched treatments, where mats were also heavily grazed. Macroinvertebrate densities decreased with increasing P in benthic floc, but increased with enrichment in periphyton mats until levels were reached that caused disassociation of the mat. This research documents several indirect trophic interactions that can occur in complex habitats, and emphasizes the need to characterize the dynamics of all microhabitats to fully describe the dynamics of an ecosystem.
Abstract:
Security remains a top priority for organizations as their information systems continue to be plagued by security breaches. This dissertation developed a unique approach to assessing the security risks associated with information systems, based on a dynamic neural network architecture. The risks considered encompass the production computing environment and the client machine environment, and are expressed as metrics that define how susceptible each of the computing environments is to security breaches. The merit of the approach developed in this dissertation lies in the design and implementation of Artificial Neural Networks to assess the risks in the computing and client machine environments. The datasets utilized in the implementation and validation of the model were obtained from business organizations using a web survey tool hosted by Microsoft. This site was designed to host anonymous surveys devised specifically as part of this dissertation; Microsoft customers could log in to the website and submit their responses to the questionnaire. This work asserted that security in information systems depends not exclusively on technology but rather on the triumvirate of people, process, and technology. The questionnaire, and consequently the developed neural network architecture, accounted for all three key factors that affect information systems security. As part of the study, a methodology for developing, training, and validating such a predictive model was devised and successfully deployed. This methodology prescribed how to determine the optimal topology, activation function, and associated parameters for this security-based scenario. The assessment of the effects of security breaches on information systems has traditionally been post-mortem, whereas this dissertation provides a predictive solution with which organizations can proactively determine how susceptible their environments are to security breaches.
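The dissertation's network topology and survey data are not given in the abstract; the sketch below trains a toy one-hidden-layer network on synthetic "people/process/technology" features to produce a susceptibility score, purely to illustrate the kind of predictive model described.

```python
# Toy feed-forward network (synthetic data): survey-style features are mapped to
# a breach-susceptibility score. Not the dissertation's architecture or dataset.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "survey responses": 3 features in [0, 1] (e.g. patching discipline,
# user training, process maturity); label 1 = breach observed.
X = rng.random((200, 3))
y = (X @ np.array([-1.5, -1.0, -0.8]) + 1.8 + rng.normal(0, 0.2, 200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 5 units, trained with plain gradient descent.
W1, b1 = rng.normal(0, 0.5, (3, 5)), np.zeros(5)
W2, b2 = rng.normal(0, 0.5, 5), 0.0
lr = 0.5
for _ in range(2000):
    H = sigmoid(X @ W1 + b1)                    # hidden activations
    p = sigmoid(H @ W2 + b2)                    # predicted susceptibility
    grad_out = (p - y) / len(y)                 # cross-entropy gradient at the output
    W2 -= lr * H.T @ grad_out
    b2 -= lr * grad_out.sum()
    grad_hidden = np.outer(grad_out, W2) * H * (1 - H)
    W1 -= lr * X.T @ grad_hidden
    b1 -= lr * grad_hidden.sum(axis=0)

print("training accuracy:", ((p > 0.5) == y).mean())
```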
Abstract:
This research presents several components encompassing the scope of Data Partitioning and Replication Management in a Distributed GIS Database. Modern Geographic Information System (GIS) databases are often large and complicated, so data partitioning and replication management problems need to be addressed in the development of an efficient and scalable solution. Part of the research is to study the patterns of geographic raster data processing and to propose algorithms to improve the availability of such data. These algorithms and approaches target the granularity of geographic data objects as well as data partitioning in geographic databases, in order to achieve high data availability and Quality of Service (QoS) for distributed data delivery and processing. To achieve this goal, a dynamic, real-time approach for mosaicking digital images of different temporal and spatial characteristics into tiles is proposed. This dynamic approach reuses digital images on demand and generates mosaicked tiles only for the required region according to the user's requirements, such as resolution, temporal range, and target bands, to reduce redundancy in storage and to utilize available computing and storage resources more efficiently. Another part of the research pursued methods for efficiently acquiring GIS data from external heterogeneous databases and Web services, as well as end-user GIS data delivery enhancements, automation, and 3D virtual reality presentation. Vast numbers of computing, network, and storage resources on the Internet are idle or not fully utilized. The proposed "Crawling Distributed Operating System" (CDOS) approach employs such resources and creates benefits for the hosts that lend their CPU, network, and storage resources to be used in a GIS database context. The results of this dissertation demonstrate effective ways to develop a highly scalable GIS database. The approach developed in this dissertation has resulted in the creation of the TerraFly GIS database, which is used by the US government, researchers, and the general public to facilitate Web access to remotely sensed imagery and GIS vector information.
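A minimal sketch of the on-demand mosaicking idea: given a requested region, temporal range, and resolution, only the catalogue images that satisfy the request are selected for mosaicking. The catalogue entries and selection criteria below are invented examples, not TerraFly's actual data or algorithm.

```python
# Toy on-demand source selection for mosaicking a requested tile.
from dataclasses import dataclass

@dataclass
class SourceImage:
    name: str
    bbox: tuple      # (min_lon, min_lat, max_lon, max_lat)
    year: int
    resolution_m: float

CATALOGUE = [
    SourceImage("landsat_2004_a", (-81.0, 25.0, -80.0, 26.0), 2004, 30),
    SourceImage("landsat_2006_b", (-80.5, 25.5, -79.5, 26.5), 2006, 30),
    SourceImage("aerial_2006_hi", (-80.4, 25.6, -80.2, 25.8), 2006, 1),
]

def intersects(a, b):
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def select_sources(region, years, max_resolution_m):
    """Pick catalogue images that overlap the request; finest resolution first."""
    hits = [img for img in CATALOGUE
            if intersects(img.bbox, region)
            and years[0] <= img.year <= years[1]
            and img.resolution_m <= max_resolution_m]
    return sorted(hits, key=lambda img: img.resolution_m)

# A user asks for a tile over Miami, 2005-2007, at 30 m resolution or better.
for img in select_sources((-80.35, 25.65, -80.25, 25.75), (2005, 2007), 30):
    print(img.name, img.resolution_m, "m")
```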
Abstract:
Today, the development of domain-specific communication applications is both time-consuming and error-prone, because the low-level communication services provided by existing systems and networks are primitive and often heterogeneous. Multimedia communication applications are typically built on top of low-level network abstractions such as TCP/UDP sockets and the SIP (Session Initiation Protocol) and RTP (Real-time Transport Protocol) APIs. The User-centric Communication Middleware (UCM) is proposed to encapsulate the networking complexity and heterogeneity of basic multimedia and multi-party communication for upper-layer communication applications. UCM provides a unified user-centric communication service to diverse communication applications, ranging from a simple phone call and video conferencing to specialized communication applications such as disaster management and telemedicine, which makes the development of domain-specific communication applications easier. The UCM abstraction and API are proposed to achieve these goals. The dissertation also integrates formal methods into the UCM development process. A formal model of UCM is created using the SAM methodology; some design errors were found during model creation because the formal method forces a precise description of UCM. Using the SAM tool, the formal UCM model is translated into a Promela model. In the dissertation, several system properties are defined as temporal logic formulas. These temporal logic formulas are manually translated into Promela, individually integrated with the Promela model of UCM, and verified using the SPIN tool. The formal analysis helps verify the system properties (for example, the multi-party multimedia protocol) and uncover bugs in the system.
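The abstract does not quote the concrete formulas; purely as an illustration, a liveness property of the kind one might state for a multi-party session could be written in temporal logic as

\[ \square\,\bigl(\mathit{inviteSent} \rightarrow \lozenge\,(\mathit{partyJoined} \lor \mathit{sessionClosed})\bigr) \]

i.e., whenever an invitation is sent, eventually the invited party joins or the session is closed. A formula of this shape would then be translated into Promela and checked against the UCM model with SPIN, as the abstract describes.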
Abstract:
Purpose. The Internet has provided an unprecedented opportunity for psychotropic medication consumers, a traditionally silenced group in clinical trial research, to have a voice by contributing to the construction of drug knowledge in an immediate, direct manner. Currently, there are no systematic appraisals of the potential of online consumer drug reviews to contribute to drug knowledge. The purpose of this research was to explore the content of drug information on various websites representing themselves as consumer- and expert-constructed and, as a practical consideration, to examine how each source may help and hinder treatment decision-making. Methodology. A mixed-methods research strategy utilizing a grounded theory approach was used to analyze drug information on 5 exemplar websites (3 consumer- and 2 expert-constructed) for 2 popularly prescribed psychotropic drugs (escitalopram and quetiapine). A stratified simple random sample was used to select 1,080 consumer reviews from the websites (N=7,114) through February 2009. Text was coded using QDA Miner 3.2 software by Provalis Research. A combination of frequency tables, descriptive excerpts from text, and chi-square tests for association was used throughout the analyses. Findings. The effects most frequently mentioned by consumers taking either drug were related to psychological/behavioral symptoms and sleep. Consumers reported many of the same effects as found on expert health sites, but provided more descriptive language and situational examples. Expert labels of "less serious" for certain effects were not congruent with the sometimes tremendous burden described by consumers. Consumers mentioned more than double the themes mentioned in expert text, and demonstrated a diversity and range of discourses around those themes. Conclusions. Drug effects from each source were complete relative to the information provided in the other, but each also offered distinct advantages. Expert health sites provided concise summaries of medications' effects, while consumer reviews had the added advantage of concrete descriptions and greater context; in short, consumer reviews better prepared potential consumers for what it is like to take psychotropic drugs. Both sources of information can benefit clinicians and consumers in making informed treatment-related decisions. Social work practitioners are encouraged to thoughtfully utilize online consumer drug reviews as a legitimate additional source for assisting clients in learning about treatment options.
Abstract:
Enterprise Resource Planning (ERP) systems are software programs designed to integrate the functional requirements and operational information needs of a business. Pressures of competition and entry standards for participation in major manufacturing supply chains are creating greater demand for small-business ERP systems. The proliferation of new ERP system offerings adds complexity to the process of identifying the right ERP business software for a small or medium-sized enterprise (SME). The selection of an ERP system is a process in which a faulty conclusion poses a significant risk of failure to SMEs. The literature reveals that failure rates in ERP implementation are still very high and that faulty selection processes contribute to this failure rate; however, the literature lacks a systematic methodology for the ERP system selection process for SMEs. This study provides a methodological approach to selecting the right ERP system for a small or medium-sized enterprise. The study employs Thomann's meta-methodology for methodology development; a survey of SMEs is conducted to inform the development of the methodology, and a case study is employed to test and revise the new methodology. The study shows that a rigorously developed, effective methodology that includes benchmarking experiences has been developed and successfully employed. It is verified that the methodology may be applied to the domain of users it was developed to serve, and that the test results are validated by expert users and stakeholders. Future research should investigate in greater detail the application of meta-methodologies to supplier selection and evaluation processes for services and software; additional research into the purchasing practices of small firms is also clearly needed.
Abstract:
With the exponentially increasing demands on and uses of GIS data visualization systems, in areas such as urban planning, environment and climate change monitoring, weather simulation, and hydrographic gauging, research on, applications of, and technology for geospatial vector and raster data visualization have become prevalent. However, we observe that current web GIS techniques are suitable only for static vector and raster data with no dynamically overlaid layers. While it is desirable to enable visual exploration of large-scale dynamic vector and raster geospatial data in a web environment, improving the performance between backend datasets and the vector and raster applications remains a challenging technical issue. This dissertation addresses these challenging and previously unimplemented areas: how to provide a large-scale dynamic vector and raster data visualization service with dynamically overlaid layers, accessible from various client devices through a standard web browser, and how to make this dynamic visualization service as rapid as the static one. To accomplish this, a large-scale dynamic vector and raster data visualization geographic information system based on parallel map tiling, together with a comprehensive performance improvement solution, is proposed, designed, and implemented. The solution includes: quadtree-based indexing and parallel map tiling, the Legend String, vector data visualization with dynamic layer overlaying, vector data time-series visualization, an algorithm for vector data rendering, an algorithm for raster data re-projection, an algorithm for eliminating superfluous levels of detail, an algorithm for vector data gridding and re-grouping, and server-side vector and raster data caching on cluster servers.
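The dissertation's exact indexing scheme is not spelled out in the abstract; the sketch below shows a standard quadtree tile key (Bing-style quadkey) computation, which illustrates the general idea behind quadtree-based map tiling: each additional digit of the key subdivides the parent tile into four children.

```python
# Standard Web Mercator tile addressing and quadtree keys (generic technique,
# not necessarily the dissertation's implementation).
import math

def latlon_to_tile(lat, lon, zoom):
    """Convert a WGS84 coordinate to (x, y) tile indices at the given zoom level."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return x, y

def quadkey(x, y, zoom):
    """Interleave the bits of x and y into a base-4 quadtree key."""
    key = []
    for i in range(zoom, 0, -1):
        digit = 0
        mask = 1 << (i - 1)
        if x & mask:
            digit += 1
        if y & mask:
            digit += 2
        key.append(str(digit))
    return "".join(key)

x, y = latlon_to_tile(25.76, -80.19, zoom=10)   # a point in Miami
print(x, y, quadkey(x, y, 10))
```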
Abstract:
Effective interaction with personal computers is a basic requirement for many of the functions that are performed in our daily lives. With the rapid emergence of the Internet and the World Wide Web, computers have become one of the premier means of communication in our society. Unfortunately, these advances have not become equally accessible to physically handicapped individuals. In reality, a significant number of individuals with severe motor disabilities, due to a variety of causes such as Spinal Cord Injury (SCI) and Amyotrophic Lateral Sclerosis (ALS), may not be able to use the computer mouse as a vital input device for computer interaction. The purpose of this research was to further develop and improve an existing alternative input device for computer cursor control to be used by individuals with severe motor disabilities. This thesis describes the development of, and the underlying principle for, a practical hands-off human-computer interface based on Electromyogram (EMG) signals and Eye Gaze Tracking (EGT) technology, compatible with the Microsoft Windows operating system (OS). Results with the software developed in this thesis show a significant improvement in the performance and usability of the EMG/EGT cursor control HCI.
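As a rough illustration of how EMG and EGT signals can be combined (with synthetic signals and an arbitrary threshold, not the thesis's actual processing pipeline): gaze samples drive the cursor position, and a burst of EMG activity above a threshold is taken as a click.

```python
# Toy EMG/EGT fusion: gaze track -> cursor position, EMG RMS burst -> click.
import numpy as np

rng = np.random.default_rng(3)

gaze = np.cumsum(rng.normal(0, 2, size=(100, 2)), axis=0) + 400   # noisy gaze track (pixels)
emg = rng.normal(0, 0.05, 100)
emg[60:65] += 0.8                                                  # a deliberate muscle burst

def smooth(track, window=5):
    kernel = np.ones(window) / window
    return np.column_stack([np.convolve(track[:, i], kernel, mode="same") for i in range(2)])

cursor = smooth(gaze)                       # EGT: where the cursor goes
rms = np.sqrt(np.convolve(emg ** 2, np.ones(5) / 5, mode="same"))
clicks = np.flatnonzero(rms > 0.3)          # EMG: when to click

for t in clicks[:1]:
    print(f"click at sample {t}, cursor position ({cursor[t, 0]:.0f}, {cursor[t, 1]:.0f})")
```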
Abstract:
The outcome of this research is an Intelligent Retrieval System for Conditions of Contract Documents. The objective of the research is to improve the method of retrieving data from a computer version of a construction Conditions of Contract document. SmartDoc, a prototype computer system, has been developed for this purpose. The system provides recommendations to aid the user in the process of retrieving clauses from the construction Conditions of Contract document. The prototype system integrates two computer technologies: hypermedia and expert systems. Hypermedia is utilized to provide a dynamic way of retrieving data from the document, while expert systems technology is utilized to build a set of rules that produce recommendations to aid the user during the retrieval of clauses. The rules are based on expert knowledge. The prototype system helps the user retrieve related clauses that are not explicitly cross-referenced but, according to expert experience, are relevant to the topic that the user is interested in.
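As an illustration of the kind of rule described, the sketch below maps the topic of the clause a user is reading to related clauses that an expert would consult even though they are not explicitly cross-referenced; the topics and clause numbers are invented examples, not the actual Conditions of Contract content.

```python
# Toy expert rules: recommend related clauses for the topic the user is reading.
RULES = [
    # (condition on the retrieved clause topic, recommended related topics)
    (lambda topic: topic == "liquidated damages",
     ["extension of time", "notice of delay"]),
    (lambda topic: topic in ("variations", "valuation of variations"),
     ["measurement", "interim payment"]),
]

CLAUSE_INDEX = {
    "extension of time": "Clause 44",
    "notice of delay": "Clause 46",
    "measurement": "Clause 56",
    "interim payment": "Clause 60",
}

def recommend(topic):
    """Fire every rule whose condition matches and collect the related clauses."""
    related = [t for cond, topics in RULES if cond(topic) for t in topics]
    return [(t, CLAUSE_INDEX.get(t, "unknown")) for t in related]

print(recommend("liquidated damages"))
# [('extension of time', 'Clause 44'), ('notice of delay', 'Clause 46')]
```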