911 results for Web-Centric Expert System


Relevance:

30.00%

Publisher:

Abstract:

Objective: We propose and validate a computer-aided system to measure three different mandibular indexes: cortical width, panoramic mandibular index, and mandibular alveolar bone resorption index. Study Design: Repeatability and reproducibility of the measurements are analyzed and compared to manual estimation of the same indexes. Results: The proposed computerized system exhibits superior repeatability and reproducibility compared to standard manual methods. Moreover, the time required to perform the measurements with the proposed method is negligible compared to the time required to perform them manually. Conclusions: We have proposed a very user-friendly computerized method to measure three different morphometric mandibular indexes. From the results we conclude that the system provides a practical way to perform these measurements: it does not require an expert examiner and takes no more than 16 seconds per analysis. Thus, it may be suitable for diagnosing osteoporosis using dental panoramic radiographs.
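As a rough illustration of the kind of computation involved: the panoramic mandibular index (PMI) is conventionally defined as the ratio of the mandibular cortical width to the distance from the mental foramen to the inferior mandibular border. The following minimal Python sketch is not the authors' implementation; the landmark coordinates and helper names are hypothetical, and the paper's exact landmark protocol may differ.

```python
import math

def distance(p, q):
    """Euclidean distance between two 2-D landmark points (in mm)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def panoramic_mandibular_index(cortex_upper, cortex_lower, mental_foramen):
    """PMI = cortical width / distance from the mental foramen to the
    inferior mandibular border (conventional definition; hedged)."""
    cortical_width = distance(cortex_upper, cortex_lower)
    foramen_to_border = distance(mental_foramen, cortex_lower)
    return cortical_width / foramen_to_border

# Hypothetical landmark coordinates (mm) digitized on a panoramic radiograph
print(panoramic_mandibular_index((10.0, 4.1), (10.0, 0.0), (10.0, 13.2)))
```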

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: To integrate the Radiology Information System (RIS) and the Picture Archiving and Communication System (PACS) at the Radiodiagnosis Service of the Hospital das Clínicas da Faculdade de Medicina de Ribeirão Preto da Universidade de São Paulo, enabling remote consultation of reports and their associated images. MATERIALS AND METHODS: The implemented RIS/PACS integration is performed in real time, at query time, using web technologies and intranet/internet programming techniques. RESULTS: The web application allows exam reports and associated images to be queried over the hospital intranet by patient first name, surname, or hospital registration number, or by modality, within a given period. The viewer lets the user browse the images and perform basic functions such as zooming, brightness and contrast adjustment, and side-by-side image display. CONCLUSION: The RIS/PACS integration reduces the risk of inconsistencies by decreasing the number of interfaces between databases with highly redundant information, providing a fast and secure working environment for consulting radiology reports and viewing the associated images.
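A hedged sketch of the kind of report lookup the web application performs. The schema, column names, and example values below are assumptions for illustration, not taken from the actual RIS/PACS databases.

```python
import sqlite3

# Hypothetical schema standing in for the integrated RIS/PACS view;
# the real systems are queried in real time over the hospital intranet.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE reports (
    patient_name TEXT, registration_no TEXT,
    modality TEXT, exam_date TEXT, report TEXT)""")

def find_reports(conn, name=None, registration_no=None,
                 modality=None, date_from=None, date_to=None):
    """Look up reports by name, registration number or modality
    within a given period, as the web application allows."""
    clauses, params = [], []
    if name:
        clauses.append("patient_name LIKE ?"); params.append(f"%{name}%")
    if registration_no:
        clauses.append("registration_no = ?"); params.append(registration_no)
    if modality:
        clauses.append("modality = ?"); params.append(modality)
    if date_from and date_to:
        clauses.append("exam_date BETWEEN ? AND ?"); params += [date_from, date_to]
    sql = "SELECT * FROM reports"
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return conn.execute(sql, params).fetchall()

print(find_reports(conn, modality="CR", date_from="2005-01-01", date_to="2005-12-31"))
```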

Relevance:

30.00%

Publisher:

Abstract:

Today's commercial web sites are under heavy user load and are expected to be operational and available at all times. Distributed system architectures have been developed to provide a scalable and failure-tolerant high-availability platform for these web-based services. The focus of this thesis is to specify and implement a resilient and scalable, locally distributed high-availability system architecture for a web-based service. The theory part concentrates on the fundamental characteristics of distributed systems and presents common scalable high-availability server architectures used in web-based services. The practical part of the thesis explains the new system architecture that was implemented, and includes two test cases that were run to measure the system's performance capacity.
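As a toy illustration of the load-balancing principle behind such architectures (not the thesis's implementation; the server list and health-check mechanics are assumptions), a round-robin dispatcher that skips failed backends:

```python
import itertools

class RoundRobinBalancer:
    """Distributes requests across replicated web servers and tolerates
    individual server failures, a basic building block of locally
    distributed high-availability architectures."""

    def __init__(self, servers):
        self.servers = servers
        self.healthy = set(servers)
        self._cycle = itertools.cycle(servers)

    def mark_down(self, server):
        self.healthy.discard(server)

    def mark_up(self, server):
        self.healthy.add(server)

    def pick(self):
        """Return the next healthy server, or raise if none are left."""
        for _ in range(len(self.servers)):
            server = next(self._cycle)
            if server in self.healthy:
                return server
        raise RuntimeError("no healthy servers available")

lb = RoundRobinBalancer(["web1", "web2", "web3"])
lb.mark_down("web2")
print([lb.pick() for _ in range(4)])   # web2 is skipped
```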

Relevance:

30.00%

Publisher:

Abstract:

Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web; hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters that a user provides via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces give web users online access to myriads of databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases.

Characterizing the deep Web: Though the term deep Web was coined in 2000, which is a long time ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that existing surveys of the deep Web are predominantly based on the study of deep web sites in English. One can therefore expect the findings of these surveys to be biased, especially given the steady increase in non-English web content. Surveying national segments of the deep Web is thus of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from that national segment of the Web.

Finding deep web resources: The deep Web has been growing at a very fast pace; it has been estimated that there are hundreds of thousands of deep web sites. Because of the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that the search interfaces to the web databases of interest have already been discovered and are known to the query systems. Such assumptions rarely hold, however, precisely because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other existing approaches to the deep Web, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.

Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so because the interfaces of conventional search engines are also web forms. At present, a user needs to provide input values to search interfaces manually and then extract the required data from the result pages. Filling out forms manually is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in e-commerce. The automation of querying and retrieving data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
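To make the form-querying task concrete, here is a minimal sketch, assumed for illustration and unrelated to the I-Crawler or the thesis's form query language, that extracts the input fields of an HTML search form with Python's standard library; this is the first step before an agent can fill a form out programmatically. Note that such a parser cannot see JavaScript-generated forms, which is precisely the limitation the I-Crawler addresses.

```python
from html.parser import HTMLParser

class FormFieldExtractor(HTMLParser):
    """Collects the action URL and named input fields of search forms,
    the minimum an automatic agent needs before it can submit a query."""

    def __init__(self):
        super().__init__()
        self.forms = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form":
            self.forms.append({"action": attrs.get("action", ""), "fields": []})
        elif tag in ("input", "select", "textarea") and self.forms:
            name = attrs.get("name")
            if name:
                self.forms[-1]["fields"].append(name)

page = '<form action="/search"><input name="title"><input name="author"></form>'
parser = FormFieldExtractor()
parser.feed(page)
print(parser.forms)   # [{'action': '/search', 'fields': ['title', 'author']}]
```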

Relevance:

30.00%

Publisher:

Abstract:

This paper presents the current state and development of a prototype web-GIS (Geographic Information System) decision support platform intended for application in natural hazard and risk management, mainly for floods and landslides. The platform uses open-source geospatial software and technologies, particularly the Boundless (formerly OpenGeo) framework and its client-side software development kit (SDK). Its main purpose is to assist experts and stakeholders in the decision-making process for evaluating and selecting risk management strategies through an interactive participation approach, integrating a web-GIS interface with a decision support tool based on compromise programming. Access rights and functionality vary depending on the stakeholders' roles and responsibilities in managing the risk. The application of the prototype platform is demonstrated on an example case study site, the Malborghetto Valbruna municipality of north-eastern Italy, where flash floods and landslides are frequent and major events occurred in 2003. Preliminary feedback collected from stakeholders in the region is discussed to understand their perspectives on the proposed prototype platform.
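For readers unfamiliar with compromise programming: alternatives are ranked by their weighted L_p distance to an ideal point across criteria, and the alternative with the smallest distance is the preferred compromise. A minimal sketch follows; the weights, criteria, scores, and the choice p = 2 are illustrative assumptions, not values from the platform.

```python
def compromise_distance(scores, ideal, anti_ideal, weights, p=2):
    """Weighted L_p distance of one alternative from the ideal point,
    with each criterion normalized by its ideal/anti-ideal range."""
    total = sum(
        (w * (best - s) / (best - worst)) ** p
        for s, best, worst, w in zip(scores, ideal, anti_ideal, weights)
    )
    return total ** (1 / p)

# Two hypothetical risk management strategies scored on two criteria
# (e.g. risk reduction and affordability), where higher is better.
ideal, anti_ideal, weights = [1.0, 1.0], [0.0, 0.0], [0.6, 0.4]
for name, scores in [("dike raising", [0.9, 0.3]), ("early warning", [0.6, 0.8])]:
    print(name, round(compromise_distance(scores, ideal, anti_ideal, weights), 3))
# The smaller distance identifies the preferred compromise strategy.
```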

Relevance:

30.00%

Publisher:

Abstract:

The objective of this thesis is to provide a business model framework that connects customer value to firm resources and explains the change logic of the business model. With strategic supply management, and especially dynamic value network management, as its scope, the dissertation is grounded in basic economic theories, transaction cost economics and the resource-based view. The main research question is how changing customer values should be taken into account when planning business in a networked environment. The main question is divided into sub-questions that form the research problems for the separate case studies presented in the five Publications. The research adopts the case study strategy, and the constructive research approach within it. The material consists of data from several Delphi panels and expert workshops, software pilot documents, company financial statements and investor relations information from the companies' web sites. The cases used in this study are a mobile multi-player game value network, smart phone and "Skype mobile" services, the business models of AOL, eBay, Google, Amazon and a telecom operator, a virtual city portal business system, and a multi-play offering. The main contribution of this dissertation is bridging the gap between firm resources and customer value. This is done by theorizing the business model concept and connecting it to both the resource-based view and customer value. The thesis thereby contributes to the resource-based view, which deals with customer value and the firm resources needed to deliver that value, but leaves a gap in explaining how changes in customer value should be connected to changes in key resources. The dissertation also provides tools and processes for analyzing the customer value preferences of ICT services, constructing and analyzing business models, business concept innovation, and conducting resource analysis.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a prototype of an interactive web-GIS tool for risk analysis of natural hazards, in particular floods and landslides, based on open-source geospatial software and technologies. The aim of the tool is to assist experts (risk managers) in analysing the impacts and consequences of a given hazard event in a considered region, providing an essential input to the decision-making process in which responsible authorities and decision makers select risk management strategies. The tool is based on the Boundless (OpenGeo Suite) framework and its client-side environment for prototype development, and it is one of the main modules of a web-based collaborative decision support platform for risk management. Within this platform, users can import the maps and information needed to analyse areas at risk. Based on the provided information and parameters, loss scenarios (amount of damage and number of fatalities) for a hazard event are generated on the fly and visualized interactively within the web-GIS interface of the platform. The annualized risk is calculated by combining the resulting loss scenarios for different return periods of the hazard event. The application of the developed prototype is demonstrated using a regional data set from one of the case study sites, the Fella River basin in north-eastern Italy, of the Marie Curie ITN CHANGES project.
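The annualized risk computation the abstract refers to is typically an integration of the loss-frequency curve: each scenario loss is paired with the annual exceedance probability of its return period (p = 1/T) and the area under the curve is approximated numerically. A hedged sketch follows; the scenario losses are invented numbers, and the platform's exact integration scheme is not stated in the abstract.

```python
def annualized_risk(scenarios):
    """Trapezoidal integration of loss over annual exceedance
    probability. `scenarios` maps return period (years) -> loss."""
    # Convert return periods to exceedance probabilities p = 1/T,
    # sorted from the most frequent event to the rarest.
    points = sorted(((1.0 / T, loss) for T, loss in scenarios.items()),
                    reverse=True)
    risk = 0.0
    for (p1, l1), (p2, l2) in zip(points, points[1:]):
        risk += (p1 - p2) * (l1 + l2) / 2.0
    return risk

# Hypothetical loss scenarios (euros) for 30-, 100- and 300-year floods
print(annualized_risk({30: 2e6, 100: 8e6, 300: 2e7}))   # ~210,000 per year
```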

Relevance:

30.00%

Publisher:

Abstract:

The aim of this project is to explore the possibilities of geographic information systems in epidemiological surveillance studies.

Relevance:

30.00%

Publisher:

Abstract:

The objective of this project is to improve the current situation of the technical service for the repair and maintenance of medical equipment at the Hospital Verge de la Cinta and the primary care centres of Tortosa, and to create clear, organized, easy and intuitive interfaces for users, whether clients or employees, which will serve as a communication bridge to the organization's central server, where all of its information will reside. The interfaces will be created using the user-centred design (UCD) methodology, and the tasks that users (clients and employees) must perform will be evaluated through a prototype.

Relevance:

30.00%

Publisher:

Abstract:

Given the current economic situation, it can be attractive to sell objects that are no longer used and to buy second-hand ones. From this idea arises the project of creating an online auction site where people can trade the things they no longer need. In keeping with the initial concept, the site owner receives no fee or percentage from any auction; the full amount goes to the seller. The main objective is to offer a place where, after registering, users can view and bid on the items that other people are auctioning, as well as create their own auctions. Each user has a personal area where they can keep track of the auctions they have interacted with and check, at any time, the status of the auctions they have created. The view of an auction updates automatically without reloading the page, and if someone bids during the last minute, the auction is extended by one more minute to prevent last-moment bidding and thus maximize the final price. An administrator is in charge of keeping the site running smoothly, with permission to add, edit, view and delete all available information. The project was implemented in PHP, with MySQL as the database management system.
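The last-minute extension rule described above is a common anti-sniping mechanism and can be stated in a few lines. This is an illustrative Python sketch, not the site's PHP code; the class and field names are invented, while the one-minute window comes from the description.

```python
from datetime import datetime, timedelta

EXTENSION = timedelta(minutes=1)

class Auction:
    def __init__(self, end_time, starting_price):
        self.end_time = end_time
        self.highest_bid = starting_price

    def place_bid(self, amount, now=None):
        """Accept a higher bid; if it arrives within the final minute,
        push the deadline back one minute to discourage sniping."""
        now = now or datetime.utcnow()
        if now >= self.end_time:
            raise ValueError("auction has ended")
        if amount <= self.highest_bid:
            raise ValueError("bid must exceed the current highest bid")
        self.highest_bid = amount
        if self.end_time - now <= EXTENSION:
            self.end_time += EXTENSION   # anti-sniping extension
        return self.end_time
```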

Relevance:

30.00%

Publisher:

Abstract:

In this world we live in, where the economic crisis has forced such hard changes upon us, taking us from times of plenty to watching our day-to-day expenses just to make ends meet, it is time to reinvent oneself. This is the motivation behind this idea, whose objective is to develop a website that becomes a meeting point for users who want to share or broaden their knowledge, offering them the possibility of exchanging their skills and abilities with one another. The site consists of an activity board where registered users can create the activities they want to learn or teach, optionally asking for something in return. Other users who are interested in an activity can then accept the request or make a proposal of their own. From there, the users must agree on how to carry out the activity. The site also has an area for users with administrator permissions so that they can manage the portal. The project was developed with the PHP framework CodeIgniter, which uses the layered MVC pattern, separating the code into three parts: the Model, the View and the Controller. HTML5, CSS3 and jQuery (a JavaScript library) were also used, with MySQL as the database management system.
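The MVC layering the abstract mentions can be illustrated schematically. This is a language-agnostic sketch in Python rather than CodeIgniter's PHP, and the class and function names are invented for illustration.

```python
# Model: data access only
class ActivityModel:
    def __init__(self):
        self._activities = []

    def add(self, title, kind):
        self._activities.append({"title": title, "kind": kind})

    def all(self):
        return list(self._activities)

# View: presentation only
def render_board(activities):
    return "\n".join(f"[{a['kind']}] {a['title']}" for a in activities)

# Controller: mediates between requests, the model and the view
class BoardController:
    def __init__(self, model):
        self.model = model

    def create_activity(self, title, kind):
        self.model.add(title, kind)

    def show_board(self):
        return render_board(self.model.all())

controller = BoardController(ActivityModel())
controller.create_activity("Guitar lessons", "teach")
print(controller.show_board())
```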

Relevance:

30.00%

Publisher:

Abstract:

Web application performance testing is an emerging and important field of software engineering. As web applications become more commonplace and complex, the need for performance testing will only increase. This paper discusses common concepts, practices and tools that lie at the heart of web application performance testing. A pragmatic, hands-on approach is assumed where applicable; real-life examples of test tooling, execution and analysis are presented right next to the underpinning theory. At the client-side, web application performance is primarily driven by the amount of data transmitted over the wire. At the server-side, selection of programming language and platform, implementation complexity and configuration are the primary contributors to web application performance. Web application performance testing is an activity that requires delicate coordination between project stakeholders, developers, system administrators and testers in order to produce reliable and useful results. Proper test definition, execution, reporting and repeatable test results are of utmost importance. Open-source performance analysis tools such as Apache JMeter, Firebug and YSlow can be used to realise effective web application performance tests. A sample case study using these tools is presented in this paper. The sample application was found to perform poorly even under the moderate load incurred by the sample tests.
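A minimal load-test sketch in the spirit of the tooling discussed; this is not JMeter, and the target URL, request count and concurrency level are placeholders, not values from the paper's case study.

```python
import time
import statistics
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET = "http://localhost:8080/"   # placeholder; point at the app under test

def timed_request(_):
    """Issue one GET request and return its wall-clock latency."""
    start = time.perf_counter()
    with urllib.request.urlopen(TARGET) as resp:
        resp.read()
    return time.perf_counter() - start

# 100 requests from 10 concurrent virtual users
with ThreadPoolExecutor(max_workers=10) as pool:
    latencies = list(pool.map(timed_request, range(100)))

print(f"median: {statistics.median(latencies) * 1000:.1f} ms")
print(f"p95:    {sorted(latencies)[94] * 1000:.1f} ms")
```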

Relevance:

30.00%

Publisher:

Abstract:

The research focus of this thesis is to explore options for building systems for business-critical web applications, where business criticality includes requirements for data protection and system availability. The focus is on open-source software. The goals are to identify robust technologies and engineering practices for implementing such systems. The research methods include experiments with sample systems built around chosen software packages representing particular technologies. The main research focused on finding a good method for database data replication, a key functionality for high-availability, database-driven web applications. The research also included gathering engineering best practices from books written by administrators of high-traffic web applications. The database replication experiment showed that the block-level synchronous replication offered by the DRBD replication software provides considerably more robust data protection and high-availability functionality than the leading open-source database product MySQL with its built-in asynchronous replication. For master-master database setups, block-level replication is the more advisable way to build high availability into the system. Based on the research, building high-availability web applications is possible using a combination of open-source software and engineering best practices for data protection, availability planning and scaling.
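The key distinction the experiment turned on can be stated simply: with synchronous (block-level, DRBD-style) replication a write is acknowledged only after the replica has it, so an acknowledged write cannot be lost with the primary; with asynchronous replication the acknowledgement comes first, and a primary crash can lose already-acknowledged writes. A toy simulation of the two acknowledgement protocols follows; it illustrates the semantics only and is not a model of DRBD or MySQL internals.

```python
class Replica:
    def __init__(self):
        self.log = []

    def apply(self, record):
        self.log.append(record)

def synchronous_write(primary_log, replica, record):
    """Acknowledge only after the replica has applied the write;
    an acknowledged write survives a primary crash."""
    primary_log.append(record)
    replica.apply(record)              # wait for the replica before ack
    return "ack"

def asynchronous_write(primary_log, replication_queue, record):
    """Acknowledge immediately; the write reaches the replica later,
    so a crash in between loses an already-acknowledged write."""
    primary_log.append(record)
    replication_queue.append(record)   # shipped in the background
    return "ack"

primary, queue, replica = [], [], Replica()
asynchronous_write(primary, queue, "row-1")
# If the primary crashes here, "row-1" was acknowledged but never
# reached the replica -- the data-loss window that sync replication closes.
print("replica has:", replica.log)     # []
```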