977 results for GUI legacy Windows Form web-application


Relevance:

40.00%

Publisher:

Abstract:

Winter maintenance, particularly snow removal and the stress that snow removal materials place on public structures, is an enormous budgetary burden on municipalities and nongovernmental maintenance organizations in cold climates. Lately, geospatial technologies such as remote sensing, geographic information systems (GIS), and decision support tools have been providing valuable tools for planning snow removal operations, and a few researchers have recently used them to develop winter maintenance tools. However, most of these tools, while having the potential to address some of these information needs, are not typically placed in the hands of planners and other interested stakeholders: they are not built with a nontechnical user in mind and lack an easy-to-use, easily understood interface. A major goal of this project was to implement a web-based Winter Maintenance Decision Support System (WMDSS) that enhances the capacity of stakeholders (city/county planners, resource managers, transportation personnel, citizens, and policy makers) to evaluate different procedures for managing snow removal assets optimally. This was accomplished by integrating geospatial analytical techniques (GIS and remote sensing), the existing snow removal asset management system, and web-based spatial decision support systems. The web-based system was implemented using the ESRI ArcIMS ActiveX Connector and related web technologies, such as Active Server Pages, JavaScript, HTML, and XML. Expert knowledge on snow removal procedures was gathered and integrated into the system in the form of business rules encoded with Visual Rule Studio. The resulting system not only manages resources but also provides expert advice to assist complex decision making, such as routing, optimal resource allocation, and monitoring live weather information. The system was developed in collaboration with Black Hawk County, IA, the city of Columbia, MO, and the Iowa Department of Transportation, and was demonstrated for these agencies to improve its usability and applicability.
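
As an illustration of the encoded business-rule layer mentioned above, the following Python sketch mimics one plausible dispatch rule. The thresholds, the Route attributes, and the rule itself are hypothetical; the project's actual rules were authored in Visual Rule Studio, whose syntax is not shown here.

```python
# Hypothetical dispatch rule of the kind a WMDSS rule base might encode.
# Thresholds and attributes are invented for illustration.
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    priority: int          # 1 = highest (e.g., arterial streets)
    snowfall_cm: float     # forecast accumulation from a live weather feed

def plows_to_dispatch(route: Route) -> int:
    """Toy rule: high-priority routes with heavy snowfall get more plows."""
    if route.snowfall_cm < 2:
        return 0
    if route.priority == 1:
        return 2 if route.snowfall_cm >= 10 else 1
    return 1 if route.snowfall_cm >= 10 else 0

print(plows_to_dispatch(Route("Main St", priority=1, snowfall_cm=12)))  # 2
```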

Relevance:

40.00%

Publisher:

Abstract:

Report on a review of selected general and application controls over the Iowa Public Employees' Retirement System (IPERS) Legacy and I-Que Pension Administration Systems for the period May 16, 2011 through June 16, 2011.

Relevance:

40.00%

Publisher:

Abstract:

Unless specifically exempted, a certificate of registration is required to operate an aircraft in Iowa (in addition to registration with the FAA). Aircraft registration laws are defined in Iowa Code Chapter 328; a general summary follows. Iowa residents and businesses must register an aircraft unless it is continuously located and operated beyond the boundaries of the state. Nonresident owners of aircraft providing intrastate transportation of persons or property for compensation, furnishing services for compensation, or providing intrastate transportation of merchandise in Iowa must register their aircraft with the Iowa DOT before conducting those operations. Other visitors are exempt from registering aircraft in Iowa as long as their aircraft are not operated or controlled in the state for more than 30 days a year. Annual registration fees are based on the aircraft's age, its original manufactured list price, and its type of use (personal or business). A one-time six percent use tax on the purchase price of the aircraft is collected at the time of registration. Aircraft registration fees (and aviation fuel taxes) are deposited into a State Aviation Fund that helps fund aviation programs in Iowa such as airport development projects, automated weather observing systems (AWOS), runway markings, and windsocks.
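
A small worked example of the registration arithmetic described above. The one-time 6% use tax is stated in the summary; the annual-fee schedule below (a percentage of list price that declines with aircraft age) is purely hypothetical, since the summary names the factors but not the actual rates.

```python
# The 6% use tax is from the summary; the fee schedule is invented.
def use_tax(purchase_price: float) -> float:
    return 0.06 * purchase_price  # one-time 6% use tax at registration

def annual_fee(list_price: float, age_years: int) -> float:
    rate = 0.01 if age_years < 5 else 0.005  # hypothetical rate schedule
    return list_price * rate

print(use_tax(150_000))        # 9000.0
print(annual_fee(200_000, 3))  # 2000.0
```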

Relevance:

40.00%

Publisher:

Abstract:

Present-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web; hence, web users who rely on search engines alone cannot discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters a user provides via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces give web users online access to myriad databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases.

Characterizing the deep Web: Though the term deep Web was coined in 2000, long ago by the standards of any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another concern is that existing surveys of the deep Web are predominantly based on studies of deep web sites in English, so their findings may be biased, especially given the steady growth of non-English web content. Surveying national segments of the deep Web is therefore of interest not only to national communities but to the whole web community. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build, and make publicly available, a dataset describing more than 200 web databases from that national segment.

Finding deep web resources: The deep Web has been growing at a very fast pace; it has been estimated that there are hundreds of thousands of deep web sites. Because of this huge volume of information, there has been significant interest in approaches that allow users and computer applications to leverage it. Most approaches assume that the search interfaces to the web databases of interest have already been discovered and are known to query systems. Such assumptions rarely hold, chiefly because of the scale of the deep Web: for any given domain of interest there are simply too many web databases with relevant content. The ability to locate search interfaces to web databases thus becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all previous approaches to the deep Web, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.

Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so because the interfaces of conventional search engines are themselves web forms. At present, a user must manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome, and infeasible for complex queries, yet such queries are essential for many web searches, especially in e-commerce. Automating the querying and retrieval of data behind search interfaces is therefore desirable and indeed essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts, and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
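
To make the form-discovery step concrete, here is a rough Python sketch of what a search-interface crawler does at its simplest: fetch a page, find its forms, and apply a heuristic to flag likely search interfaces. The heuristic is an assumption for illustration only; the thesis's I-Crawler uses a trained classifier and also handles JavaScript-rich and non-HTML forms.

```python
# Minimal form-discovery sketch; the classification heuristic is invented.
import requests
from bs4 import BeautifulSoup

def find_search_interfaces(url):
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    candidates = []
    for form in soup.find_all("form"):
        inputs = form.find_all("input")
        text_inputs = [i for i in inputs if i.get("type", "text") == "text"]
        # Heuristic: at least one free-text field, no password field, and
        # few fields overall suggests a search interface rather than,
        # say, a login or registration form.
        has_password = any(i.get("type") == "password" for i in inputs)
        if text_inputs and not has_password and len(inputs) <= 5:
            candidates.append(form.get("action", url))
    return candidates

print(find_search_interfaces("https://example.org"))
```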

Relevance:

40.00%

Publisher:

Abstract:

Information systems integration is nowadays an important part of how companies operate and maintain their competitiveness, and service-oriented architecture and Web services are a new, flexible way to integrate information systems. One core component of Web services is UDDI (Universal Description, Discovery and Integration). UDDI acts as a service registry: it defines a way to publish, discover, and take Web services into use. Web services can be searched for in UDDI by various criteria, such as the location of the service, the name of the company, or its line of business. UDDI is itself a Web service, built on the XML markup language and the SOAP protocol. This thesis examines UDDI in detail, including its technical underpinnings. In the view of both publishers and users, an essential shortcoming of UDDI has been its lack of security, which has considerably limited its use and adoption. The thesis therefore looks closely at security-related issues and solutions, as well as at the significance of UDDI for businesses.
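
A minimal sketch of what "UDDI is itself a Web service" means in practice: discovery is just a SOAP request over HTTP. The snippet below sends a UDDI v2 find_business inquiry; the registry endpoint URL is a placeholder, the queried name "Acme" is invented, and error handling is omitted.

```python
# UDDI v2 find_business inquiry as a plain SOAP call; endpoint is a placeholder.
import requests

ENVELOPE = """<?xml version="1.0" encoding="UTF-8"?>
<Envelope xmlns="http://schemas.xmlsoap.org/soap/envelope/">
  <Body>
    <find_business generic="2.0" xmlns="urn:uddi-org:api_v2">
      <name>Acme</name>
    </find_business>
  </Body>
</Envelope>"""

resp = requests.post(
    "https://uddi.example.com/inquiry",  # placeholder registry endpoint
    data=ENVELOPE.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8", "SOAPAction": '""'},
)
print(resp.status_code, resp.text[:200])  # a businessList with any matches
```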

Relevance:

40.00%

Publisher:

Abstract:

The counteranion exchange of quaternary 1,2,3-triazolium salts was examined using a simple method that permitted halide ions to be swapped for a variety of anions using an anion exchange resin (A¯ form). The method was applied to 1,2,3-triazolium-based ionic liquids, and the iodide-to-anion exchange proceeded in excellent to quantitative yields, concomitantly removing halide impurities. Additionally, an anion exchange resin (N₃¯ form) was used to obtain benzyl azide from benzyl halide under mild reaction conditions. Likewise, following a similar protocol, bis(azidomethyl)arenes were also synthesized in excellent yields. The results of a proton NMR spectroscopic study of simple azolium-based ion pairs are discussed, with attention focused on the significance of the charge-assisted (CH)+···anion hydrogen bonds of simple azolium systems such as 1-butyl-3-methylimidazolium and 1-benzyl-3-methyl-1,2,3-triazolium salts.


Relevance:

40.00%

Publisher:

Abstract:

This master's thesis discusses service-oriented architecture and the extension, by means of assistive technology, of a service interface built on top of a legacy system. Assistive technology is used to automate the user-interface functions of the legacy system's graphical application and expose them as a web service. The thesis first presents a definition of service-oriented architecture and its design principles. It then reviews theory, implementations, and approaches for integrating legacy systems into a service-oriented architecture, and surveys the support for assistive technology provided by the Microsoft Windows environment. The service interface was extended using a black-box method in which the legacy system's graphical application is automated with assistive technology. The method proved workable and can be used to integrate legacy systems into a service-oriented architecture.
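
A minimal Python sketch of that black-box idea, assuming a hypothetical legacy application with a search box: pywinauto drives the GUI through the Windows UI Automation (accessibility) APIs, and Flask exposes the automated function as a web endpoint. All window titles, control identifiers, and the route are invented, and the thesis's own implementation is not shown here.

```python
# Wrap a legacy GUI function as a web service via UI Automation.
# Window titles and control IDs below are hypothetical.
from flask import Flask, jsonify, request
from pywinauto.application import Application

app = Flask(__name__)

@app.route("/lookup")
def lookup():
    term = request.args.get("term", "")
    # Attach to the already-running legacy application (UIA backend).
    legacy = Application(backend="uia").connect(title="Legacy Client")
    win = legacy.window(title="Legacy Client")
    # Drive the GUI exactly as a user (or screen reader) would.
    win.child_window(auto_id="SearchBox", control_type="Edit").set_edit_text(term)
    win.child_window(title="Search", control_type="Button").click()
    result = win.child_window(auto_id="ResultBox", control_type="Edit").get_value()
    return jsonify({"term": term, "result": result})

if __name__ == "__main__":
    app.run(port=8080)
```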

Relevance:

40.00%

Publisher:

Abstract:

This thesis describes the development of an application for watching video broadcasts on the Twitch.tv service. The application targets tablet devices running the Windows 8 operating system; the goal is to make the service usable without a browser, directly through a Windows App Store application. The implementation focuses on Microsoft's tools for Windows software development and on the use and possibilities of the API offered by Twitch. The thesis discusses the limitations of these tools and the problems they caused while developing the application described above. The software emphasizes usability, particularly from the tablet-device point of view; the user-interface design aims for a consistent appearance in the Metro UI style.
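
For context on the API usage the thesis explores, here is a rough Python sketch of the kind of call a client makes to list live streams. It follows the Kraken-era Twitch REST API that a Windows 8 app of that period would have used; the endpoint, header, and response fields are an illustration from that era, and the current Twitch (Helix) API differs.

```python
# Poll the (since-retired) Kraken-era Twitch API for live streams.
import requests

def live_streams(game, limit=10):
    resp = requests.get(
        "https://api.twitch.tv/kraken/streams",
        params={"game": game, "limit": limit},
        headers={"Accept": "application/vnd.twitchtv.v3+json"},
    )
    resp.raise_for_status()
    # Each stream record carries its channel name and viewer count.
    return [(s["channel"]["display_name"], s["viewers"])
            for s in resp.json()["streams"]]
```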

Relevance:

40.00%

Publisher:

Abstract:

The recent emergence of a new generation of mobile application marketplaces has changed the business of mobile ecosystems. The marketplaces have gathered over a million applications from hundreds of thousands of application developers and publishers; thus, software ecosystems, consisting of developers, consumers, and the orchestrator, have emerged as part of the mobile ecosystem. This dissertation addresses, through empirical methods, the new challenges faced by mobile application developers in these ecosystems. Using the theories of two-sided markets and business ecosystems as its basis, the thesis assesses monetization and value creation in the market as well as the impact of electronic word-of-mouth (eWOM) and of developer multi-homing, i.e., contributing to more than one platform. The data for the study was collected by web crawling from the three biggest marketplaces: Apple App Store, Google Play, and Windows Phone Store. The dissertation consists of six individual articles. The results show a monetization gap among the studied applications, while a majority of applications are produced by small or micro-enterprises. The study finds only weak support for an impact of eWOM on the sales of an application in the studied ecosystem. Finally, the study reveals a clear difference in multi-homing rates between the top application developers and the rest. As discussed in the thesis, this has an impact on future market analyses: it seems that the smart device market can sustain several parallel application marketplaces.
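
To make the multi-homing notion concrete, here is a small illustrative computation of a multi-homing rate from (developer, marketplace) observations of the kind a crawler would collect. The observations below are invented, not data from the dissertation.

```python
# Compute the share of developers publishing on more than one platform.
from collections import defaultdict

observations = [
    ("dev_a", "App Store"), ("dev_a", "Google Play"),
    ("dev_b", "Google Play"),
    ("dev_c", "App Store"), ("dev_c", "Google Play"),
    ("dev_c", "Windows Phone Store"),
]

platforms = defaultdict(set)
for developer, store in observations:
    platforms[developer].add(store)

# A developer "multi-homes" when it publishes on more than one platform.
multihomers = sum(1 for stores in platforms.values() if len(stores) > 1)
print(f"multi-homing rate: {multihomers / len(platforms):.0%}")  # 67%
```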

Relevance:

40.00%

Publisher:

Abstract:

Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.

Relevance:

40.00%

Publisher:

Abstract:

This report gives a detailed discussion of the system, algorithms, and techniques that we applied to solve the Web Service Challenges (WSC) of 2006 and 2007. These international contests focus on semantic web service composition. In each challenge, a repository of web services is given, and the input and output parameters of the services in the repository are annotated with semantic concepts. A query to a semantic composition engine contains a set of available input concepts and a set of wanted output concepts. For an offered service to fill a requested role, the concepts of the input parameters of its offered operations must be more general than requested (contravariance), whereas the concepts of the output parameters of the offered service must be more specific than requested (covariance). The engine should respond to a query by providing a valid composition as fast as possible. We discuss three different methods for web service composition: an uninformed search in the form of an IDDFS (iterative-deepening depth-first search) algorithm, a greedy informed search based on heuristic functions, and a multi-objective genetic algorithm.
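
A minimal sketch of the matching rule stated above: an offered service fits a request if each of its input concepts can be satisfied by an available concept at least as specific as it (contravariance), and each wanted concept is covered by an offered output at least as specific as it (covariance). The toy taxonomy and service below are made up for illustration.

```python
# Toy concept taxonomy: child -> parent (None marks a root concept).
subclass_of = {
    "Sedan": "Car", "Car": "Vehicle", "Vehicle": None,
    "Price": "Amount", "Amount": None,
}

def is_a(concept, ancestor):
    """True if `concept` equals `ancestor` or is a descendant of it."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = subclass_of.get(concept)
    return False

def matches(offered_inputs, offered_outputs, have, want):
    # Contravariance: each offered (more general) input must be fillable
    # by some available concept that specializes it.
    inputs_ok = all(any(is_a(h, o) for h in have) for o in offered_inputs)
    # Covariance: each wanted concept must be covered by some offered
    # (more specific) output.
    outputs_ok = all(any(is_a(o, w) for o in offered_outputs) for w in want)
    return inputs_ok and outputs_ok

print(matches({"Vehicle"}, {"Price"}, have={"Sedan"}, want={"Amount"}))  # True
```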

Relevance:

40.00%

Publisher:

Abstract:

Co-training is a semi-supervised learning method designed to take advantage of the redundancy that is present when the object to be identified has multiple descriptions. Co-training is known to work well when the multiple descriptions are conditionally independent given the class of the object. The presence of multiple descriptions of objects in the form of text, images, audio, and video in multimedia applications appears to provide redundancy in a form that may be suitable for co-training. In this paper, we investigate the suitability of utilizing text and image data from the Web for co-training. We perform measurements to find indications of conditional independence in texts and images obtained from the Web; our measurements suggest that conditional independence is likely to be present in the data. Our experiments within a relevance feedback framework, run to test whether a method that exploits the conditional independence outperforms methods that do not, also indicate that better performance can indeed be obtained by designing algorithms that exploit this form of redundancy when it is present.
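
A compact sketch of the co-training loop under the stated assumption of two conditionally independent views (here standing in for text and image features): two classifiers are trained on the small labeled pool, and each pseudo-labels its most confident unlabeled example for the other. The data is synthetic, and this is the generic algorithm, not the paper's exact experimental setup.

```python
# Generic co-training loop on two synthetic feature views.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Two views of the same 200 objects; only the first 20 start labeled.
X_text, X_img = rng.normal(size=(200, 5)), rng.normal(size=(200, 5))
y = (X_text[:, 0] + X_img[:, 0] > 0).astype(int)
labeled = list(range(20))
unlabeled = set(range(20, 200))

for _ in range(10):  # a few co-training rounds
    clf_text = LogisticRegression().fit(X_text[labeled], y[labeled])
    clf_img = LogisticRegression().fit(X_img[labeled], y[labeled])
    for clf, X in ((clf_text, X_text), (clf_img, X_img)):
        if not unlabeled:
            break
        idx = list(unlabeled)
        confidence = clf.predict_proba(X[idx]).max(axis=1)
        best = idx[int(np.argmax(confidence))]       # most confident example
        y[best] = clf.predict(X[best:best + 1])[0]   # pseudo-label it
        labeled.append(best)                         # hand it to the other view
        unlabeled.remove(best)
```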

Relevance:

40.00%

Publisher:

Abstract:

ka-Map ("ka" as in ka-boom!) is an open source project that is aimed at providing a javascript API for developing highly interactive web-mapping interfaces using features available in modern web browsers. ka-Map currently has a number of interesting features. It sports the usual array of user interface elements such as: interactive, continuous panning without reloading the page; keyboard navigation options (zooming, panning); zooming to pre-set scales; interactive scalebar, legend and keymap support; optional layer control on client side; server side tile caching