982 results for Sistemi Web, database


Relevance: 100.00%

Abstract:

The aim of this work is to build a database-independent client-server web application, i.e. an application whose operation is not tied to one specific type of database.
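
The abstract does not describe the implementation itself; as a minimal sketch of the database-independence idea, the hypothetical Python snippet below uses SQLAlchemy so that the application code stays identical while the backend is selected solely by the connection URL.

```python
# Minimal sketch of database independence (hypothetical example, not the
# thesis's code): the application talks to SQLAlchemy Core, and the concrete
# backend is chosen only by the connection URL.
from sqlalchemy import create_engine, text

# Swap the URL to change backend; the query code below stays identical.
DB_URL = "sqlite:///demo.db"   # e.g. "postgresql+psycopg2://user:pw@host/db"

engine = create_engine(DB_URL)

with engine.begin() as conn:
    conn.execute(text("CREATE TABLE IF NOT EXISTS users (id INTEGER, name TEXT)"))
    conn.execute(text("INSERT INTO users (id, name) VALUES (:id, :name)"),
                 {"id": 1, "name": "Ada"})
    rows = conn.execute(text("SELECT id, name FROM users")).fetchall()
    print(rows)
```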

Relevance: 100.00%

Abstract:

This thesis is the result of the development of a web application for analysing and presenting data on the Italian real-estate market, carried out at the company responsible for the property portal at www.affitto.it. The company commissioned a software system that builds a historical record and describes the trend of the national real-estate market. The thesis presents the software development process that led to the final product: a web-based application implemented with technologies such as PHP, HTML, MySQL and CSS.

Relevance: 100.00%

Abstract:

Pinus pinaster is an economically and ecologically important species that is becoming a woody gymnosperm model. Its enormous genome size makes whole-genome sequencing approaches hard to apply. Therefore, the expressed portion of the genome has to be characterised, and the results and annotations have to be stored in dedicated databases.

Relevance: 90.00%

Abstract:

Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters that a user provides via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces give web users online access to myriads of databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases.

Characterizing the deep Web: Though the term deep Web was coined in 2000, a long time ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another concern is that existing surveys of the deep Web are predominantly based on studies of deep web sites in English, so their findings may be biased, especially given the steady increase in non-English web content. Surveying national segments of the deep Web is therefore of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from that national segment of the Web.

Finding deep web resources: The deep Web has been growing at a very fast pace, and it has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that search interfaces to the web databases of interest have already been discovered and are known to the query systems. Such assumptions rarely hold, however, mainly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.

Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so as the interfaces of conventional search engines are themselves web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in e-commerce. Automating the querying and retrieval of data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
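
As a purely illustrative sketch of the manual workflow the thesis aims to automate — locating a search form, filling in a query term, submitting it and extracting records from the resulting dynamic page — the following Python snippet uses assumed URLs, field names and result markup; it is not the I-Crawler or the prototype query system described above.

```python
# Generic sketch of querying a web database behind a search form
# (illustration only; the URL, field names and result markup are hypothetical,
# and this is not the thesis's I-Crawler or query system).
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

SEARCH_PAGE = "https://example.org/search"

# 1. Fetch the page and locate the search form and its named input fields.
page = requests.get(SEARCH_PAGE, timeout=10)
form = BeautifulSoup(page.text, "html.parser").find("form")
action = urljoin(SEARCH_PAGE, form.get("action", ""))
fields = {inp["name"]: inp.get("value", "")
          for inp in form.find_all("input") if inp.get("name")}

# 2. Fill in the query term and submit the form.
fields["q"] = "web databases"            # assumed name of the query field
results_page = requests.post(action, data=fields, timeout=10)

# 3. Extract result records from the returned dynamic page.
soup = BeautifulSoup(results_page.text, "html.parser")
for item in soup.select("div.result"):   # assumed markup of a result record
    print(item.get_text(strip=True))
```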

Relevance: 90.00%

Abstract:

Food webs have been used to understand the trophic relationships among organisms within an ecosystem; however, the extent to which sampling effort affects food web responses remains poorly understood. There is also a lack of long-term sampling data for many insect groups, mainly regarding the interactions between herbivores and their host plants. In the first chapter, I describe a source food web based on the plant Senegalia tenuifolia by identifying the associated insect species and the interactions among them and with this host plant. Furthermore, I check the robustness of the data from each trophic level and propose a cost-efficient methodology. The results from this chapter show that the collected dataset and the proposed methodology are a good tool for sampling most of the insect richness of a source food web. In total, the food web comprises 27 species belonging to four trophic levels. In the second chapter, I demonstrate the temporal variation in species richness and abundance at each trophic level, as well as the relationships among distinct trophic levels. Moreover, I investigate the diversity patterns of the second and third trophic levels by assessing the contribution of the alpha- and beta-diversity components over the years. This chapter shows that in our system parasitoid abundance is regulated by herbivore abundance, and that the species richness and abundances of the trophic levels vary over time. It also shows that alpha-diversity was the component that contributed most to herbivore species diversity (second trophic level), while the contributions of alpha- and beta-diversity to parasitoid diversity (third level) changed over the years. Overall, this dissertation describes a source food web and brings insights into food web challenges related to the sampling effort needed to capture enough species from all trophic levels. It also discusses the relationships among communities associated with distinct trophic levels, their temporal variation and their diversity patterns. Finally, it contributes to the world food web database and to understanding the interactions among trophic levels and the patterns of each trophic level over time and space.

Relevance: 80.00%

Abstract:

OBJECTIVE: To identify areas of vulnerability for new cases of HIV/tuberculosis (TB) co-infection. METHODS: Descriptive ecological study based on the georeferencing of new HIV/TB cases reported in Ribeirão Preto, SP, Brazil, in 2006. Data were obtained from the São Paulo state TB notification information system. New HIV/TB co-infection cases were analysed according to sociodemographic and clinical characteristics and then georeferenced on the municipal cartographic base by residential address. Census tracts were grouped into three socioeconomic strata (lower, intermediate and upper) based on a principal component analysis of variables from the 2000 demographic census (income, education and percentage of households with five or more residents). The incidence of HIV/TB co-infection was calculated for each socioeconomic stratum. RESULTS: HIV/TB co-infection mostly affected adult men of economically active age, and the pulmonary form of TB was the most common. The spatial distribution showed that incidence in the intermediate and lower socioeconomic strata (8.3 and 11.5 cases per 100,000 inhabitants, respectively) was higher than in the upper stratum (4.8 cases per 100,000 inhabitants). CONCLUSIONS: The incidence rate of HIV/TB co-infection analysed by socioeconomic strata showed a non-homogeneous spatial distribution, with higher values in areas of greater social vulnerability. The study identified priority geographic areas for co-infection control, and geographic information system technology can be used by municipal managers in planning health actions.

Relevance: 80.00%

Abstract:

QUESTION UNDER STUDY: The aim of this study was to assess the prevalence of chronic kidney disease (CKD) among type 2 diabetic patients in primary care settings in Switzerland, and to analyse the prescription of antidiabetic drugs in CKD according to the prevailing recommendations. METHODS: In this cross-sectional study, each participating physician was asked to enter anonymously into a web database the data of up to 15 consecutive diabetic patients attending her/his office between December 2013 and June 2014. Demographic, clinical and biochemical data were analysed. CKD was classified with the KDIGO nomenclature based on estimated glomerular filtration rate (eGFR) and urinary albumin/creatinine ratio. RESULTS: A total of 1,359 patients (mean age 66.5 ± 12.4 years) were included by 109 primary care physicians. CKD stages 3a, 3b and 4 were present in 13.9%, 6.1% and 2.4% of patients, respectively. Only 30.6% of patients had an entry for urinary albumin/creatinine ratio; among them, 35.6% were in CKD stage A2 and 4.1% in stage A3. Despite the prevailing limitations, metformin and sulfonylureas were prescribed to 53.9% and 16.5%, respectively, of patients with advanced CKD (eGFR <30 ml/min). More than a third of patients were on a dipeptidyl-peptidase-4 inhibitor across all CKD stages. Insulin use increased progressively from 26.8% in CKD stages 1-2 to 50% in stage 4. CONCLUSIONS: CKD is frequent in patients with type 2 diabetes attending Swiss primary care practices, with CKD stages 3 and 4 affecting 22.4% of cases. This emphasizes the importance of routine screening for diabetic nephropathy based on both eGFR and urinary albumin/creatinine ratio, the latter being largely underused by primary care physicians. A careful individual drug risk/benefit assessment is mandatory to avoid the frequently observed inappropriate prescription of antidiabetic drugs in CKD patients.
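
For orientation, the KDIGO classification referred to above combines eGFR (G) categories with albuminuria (A) categories; the rough Python sketch below uses the standard KDIGO cut-offs and is not code from the study.

```python
# Rough sketch of KDIGO CKD categories from eGFR and urinary
# albumin/creatinine ratio (standard cut-offs; not the study's code).
def gfr_category(egfr_ml_min: float) -> str:
    if egfr_ml_min >= 90:
        return "G1"
    if egfr_ml_min >= 60:
        return "G2"
    if egfr_ml_min >= 45:
        return "G3a"
    if egfr_ml_min >= 30:
        return "G3b"
    if egfr_ml_min >= 15:
        return "G4"
    return "G5"

def albuminuria_category(acr_mg_per_g: float) -> str:
    if acr_mg_per_g < 30:
        return "A1"
    if acr_mg_per_g <= 300:
        return "A2"
    return "A3"

print(gfr_category(52), albuminuria_category(80))   # -> G3a A2
```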

Relevance: 80.00%

Abstract:

After-action reports for Hurricanes Isaac and Sandy concluded that WebEOC was the correct choice for FEMA's Crisis Management System: real-time data was easily shared between FEMA Headquarters, Regions and Incident Management Assistance Teams; cloud capability allowed use on any web-connected device (laptop, tablet, iPad, smartphone); the system was intuitive, so off-going personnel were able to train incoming reliefs on new features or changes within minutes; widespread use of WebEOC throughout the country, in 19 other federal departments and agencies, 40 states, hundreds of cities and counties, and in industry, provided a pool of users with prior WebEOC experience and reduced the learning curve usually experienced when new systems are introduced; and focusing on a single shared web database reduced the creation of new single-purpose databases, spreadsheets and SharePoint sites, allowing best practices to be captured, refined, shared and continued.

Relevance: 80.00%

Abstract:

This project focuses on the design of a piece of software for creating and consulting family genealogy. It is a program for individual, non-collective and non-profit use, in which the most important data are those of the family members. Any user can create several families (within the same system) and associate relatives with them. Many kinds of personal data can be entered, and files and events can also be linked to them. An event is an occurrence in the life of one or more people. In this way the user can attach many different types of events to relatives, and there is also the option, which distinguishes this application from many existing ones, of attaching events to other events. It is also worth noting that a place (town, region and country) can be associated with each relative, file and event, making the information that can be generated even more complete. The options for creating new places are quite extensive. Finally, the user can link relatives according to kinship and thereby obtain a genealogical tree of the whole family. One of the most developed aspects of the application is that all of the data mentioned above can be viewed easily and quickly by the end user. Screen navigation and the different options have been designed so that end users have no difficulty, whatever their level of experience. Thus, in addition to the management of data on relatives, files, events and places, the use of those data and the creation of a multitude of different views and searches help make the software quite complete.
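
The abstract describes the data design only in prose; a minimal, hypothetical sketch of the entities it mentions — relatives, events that can carry sub-events, places, and kinship links for the family tree — might look like the following Python, with all names illustrative rather than taken from the project.

```python
# Hypothetical sketch of the data model described above (not the project's code).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Place:
    town: str
    region: str
    country: str

@dataclass
class Event:
    description: str
    place: Optional[Place] = None
    sub_events: list["Event"] = field(default_factory=list)   # events attached to events

@dataclass
class Relative:
    name: str
    place: Optional[Place] = None
    events: list[Event] = field(default_factory=list)
    children: list["Relative"] = field(default_factory=list)  # kinship links for the tree

# Example: a relative with an event that itself carries a sub-event.
wedding = Event("Wedding", Place("Girona", "Catalonia", "Spain"))
wedding.sub_events.append(Event("Reception"))
ada = Relative("Ada", events=[wedding])
print(ada)
```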

Relevance: 80.00%

Abstract:

This work examines the design of a software system to handle some of the problems related to data collection in the medical field. The importance of a particular technique for collecting clinical data, known in the literature as "patient-reported outcome", has long been recognised: it is the patients themselves who provide the information on the progress of a treatment or a clinical trial or, more simply, on their physical or mental health. We show how this is possible and, above all, how computing techniques and technologies can make a major contribution to the problems of this field. We show not only how convenient it is, in the clinical setting, to use automatic techniques for collecting, manipulating, aggregating and sharing data, but also how a modern system that solves all of these problems can be built using existing technologies, structured data modelling techniques, and an approach that, through a process of generalisation, helps simplify the development of the software itself.
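
The thesis does not publish its data model here; purely as a sketch of the generalisation idea — describing a patient-reported questionnaire as data so that new instruments require no new code — one might write something like the following, with all names hypothetical.

```python
# Hypothetical sketch of a generalised patient-reported-outcome model
# (illustration only; not the system described in the thesis).
# A questionnaire is plain data, so adding a new instrument needs no new code.
questionnaire = {
    "id": "daily-wellbeing",
    "items": [
        {"code": "pain",  "text": "Rate your pain today (0-10)", "type": "int"},
        {"code": "sleep", "text": "Hours slept last night",      "type": "float"},
        {"code": "mood",  "text": "Describe your mood",          "type": "str"},
    ],
}

def validate(answers: dict, spec: dict) -> dict:
    """Coerce patient answers to the types declared in the questionnaire spec."""
    casts = {"int": int, "float": float, "str": str}
    return {item["code"]: casts[item["type"]](answers[item["code"]])
            for item in spec["items"]}

print(validate({"pain": "4", "sleep": "7.5", "mood": "fine"}, questionnaire))
```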

Relevance: 80.00%

Abstract:

The Advisory Committee on Immunization Practices (ACIP) develops written recommendations for the routine administration of vaccines to children and adults in the U.S. civilian population. The ACIP is the only entity in the federal government that makes such recommendations. ACIP elaborates on the selection of its members and rules out concerns regarding its integrity, but fails to provide information about the importance of economic analysis in vaccine selection. ACIP recommendations can have large health and economic consequences. Emphasis on economic evaluation in health is a likely response to severe pressures on federal and state health budgets. This study describes the economic aspects considered by the ACIP when sanctioning a vaccine, and reviews the economic evaluations (our economic data) provided for vaccine deliberations. A five-year study period from 2004 to 2009 was adopted, using publicly available data from the ACIP web database. The checklist of Drummond et al. (2005) served as a guide for assessing the quality of the economic evaluations presented. Because this checklist is comprehensive, it is unrealistic to expect every ACIP deliberation to meet all of its criteria; for practical purposes we selected seven criteria from Drummond et al. that we judged to be significant. Twenty-four data points (economic evaluations) were obtained over the five-year period. Of these twenty-four, only five received a score of six, that is, six of the seven items on the list were met; none received a perfect score of seven. Seven of the twenty-four data points received a score of five, and only one economic analysis received a score as low as two. The type-of-economic-evaluation, model and ICER/QALY criteria were each met at a rate of 0.875 (87.5%), the highest rates among the seven criteria studied. The perspective criterion was met at 0.583 (58.3%), followed by the source and sensitivity-analysis criteria, both at 0.541 (54.1%); the discount-factor criterion was met at 0.250 (25.0%).

Economic analysis is not a novel concept to the ACIP; it has been practiced and presented at these meetings on a regular basis for more than five years. ACIP's stated goal is to use good-quality epidemiologic, clinical and economic analyses to help policy makers choose among the alternatives presented and thus reach better-informed decisions. As seen in our study, the economic analyses over the years are inconsistent. The large variability, coupled with the lack of a standardized format, may compromise the utility of the economic information for decision-making. When making recommendations, the ACIP takes into account all available information about a vaccine; it is therefore vital that standardized, high-quality economic information is provided at ACIP meetings. Our study may serve as a call for the ACIP to further investigate deficiencies within the system and thereby improve the economic evaluation data presented.
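
For readers who want the underlying counts, the reported rates are simple proportions of the 24 evaluations; the counts in the sketch below are inferred from the stated percentages rather than given in the abstract, so they should be read as a reconstruction, not as reported data.

```python
# Reconstruction of the reported criterion-fulfilment rates as proportions of
# the 24 evaluations (counts inferred from the percentages, not stated above).
n_evaluations = 24
criteria_met = {
    "type of evaluation":   21,   # 21/24 = 0.875
    "model":                21,
    "ICER/QALY":            21,
    "perspective":          14,   # 14/24 ~ 0.583
    "source":               13,   # 13/24 ~ 0.542
    "sensitivity analysis": 13,
    "discount factor":       6,   #  6/24 = 0.250
}
for name, count in criteria_met.items():
    print(f"{name}: {count}/{n_evaluations} = {count / n_evaluations:.3f}")
```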

Relevance: 40.00%

Abstract:

With the growth of new technologies, using online tools has become part of everyday life. This has had a particular impact on researchers, since data from various experiments need to be analysed and programming knowledge has become almost mandatory even for pure biologists. Hence, VTT developed a new tool, R Executables (REX), a web application that provides a graphical interface for biological data functions such as image analysis, gene-expression data analysis, plotting, and disease/control studies, and that employs R functions to produce the results. REX gives biologists an interactive application in which they can directly enter values and run the required analysis with a single click; the program processes the data in the background and returns results rapidly. As data volumes and server load grew, the interface developed problems: long processing times, a poor GUI, data-storage issues, security concerns, a minimal interactive experience, and crashes with large amounts of data. This thesis describes how these problems were resolved to make REX a better application for the future. The old REX was developed with Python/Django; the new version is implemented with Vaadin, a Java framework for developing web applications whose programming model is essentially Java with rich new components. Vaadin provides better security, better speed and a more interactive interface. A subset of REX functionality, including IST bulk plotting and image segmentation, was selected and reimplemented using Vaadin. I wrote 662 lines of code, with Vaadin handling the front end and the R language used on the back end for data retrieval, computation and plotting. The application is optimised so that further functionality can be migrated with ease from the old REX. Future development focuses on adding high-throughput screening functions along with gene-expression database handling.
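
REX itself pairs a Vaadin (Java) front end with R on the back end; purely to illustrate that front-end/back-end split in a compact, language-neutral way, the hypothetical Python sketch below shells out to Rscript for the plotting step — it is not the REX code, and all names are assumptions.

```python
# Illustrative sketch of the front-end/back-end split described above:
# a web handler collects plot parameters and delegates computation and
# plotting to R via Rscript. Not the REX/Vaadin implementation.
import subprocess
import tempfile

def make_plot(csv_path: str, column: str) -> str:
    """Ask R to plot one column of a CSV file and return the path of the PNG."""
    out_png = tempfile.NamedTemporaryFile(suffix=".png", delete=False).name
    r_code = (
        f'data <- read.csv("{csv_path}"); '
        f'png("{out_png}"); '
        f'plot(data${column}, type="l", main="{column}"); '
        f'dev.off()'
    )
    subprocess.run(["Rscript", "-e", r_code], check=True)
    return out_png

# A front end (Vaadin in REX; anything here) would call make_plot() with the
# user-supplied file and column, then display the returned image.
```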

Relevance: 40.00%

Abstract:

A mapping between chains in the Protein Data Bank and Enzyme Classification numbers is invaluable for research into structure-function relationships. Mapping at the chain level is a non-trivial problem, and we present an automatically updated web server that provides this mapping in queryable form and as a downloadable XML or flat file.
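
The abstract does not specify the flat-file layout; assuming a simple tab-separated format of PDB code, chain identifier and EC number (a hypothetical layout, not the server's documented one), consuming the downloadable file might look like the following Python.

```python
# Hypothetical sketch of reading a PDB-chain -> EC-number mapping from a
# tab-separated flat file (the real file layout is not specified above).
import csv
from collections import defaultdict

def load_mapping(path: str) -> dict[tuple[str, str], list[str]]:
    """Map (pdb_code, chain_id) to the list of EC numbers assigned to that chain."""
    mapping: dict[tuple[str, str], list[str]] = defaultdict(list)
    with open(path, newline="") as handle:
        for pdb_code, chain_id, ec_number in csv.reader(handle, delimiter="\t"):
            mapping[(pdb_code.lower(), chain_id)].append(ec_number)
    return mapping

# Example (assuming lines like "1abc\tA\t3.2.1.1"):
# ec_numbers = load_mapping("pdb_ec_mapping.tsv")[("1abc", "A")]
```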