961 results for Client-server distributed databases
Abstract:
A development prototype of TeliaSonera's smart messaging system (SME) is used to pilot services that let customers exchange messages using mobile phones and computers. SME's basic services can be used with SIP-standard client applications as well as with SME's own WAP and WWW user interfaces. Users can see each other's presence information, change their own presence information, and send SIP instant messages, e-mail messages, and SMS text messages. Users can also maintain a list of their contacts, receive instant messages, and browse received messages. The thesis gives a general overview of the structure of the SME system and examines in detail the implementation of the SME WWW client developed in this work. It reviews the standards, recommendations, implementation technologies, and services related to the project. In addition, it examines the programming interfaces used in the work as well as current smartphones and their Internet browsers, which constrain the implementation technology choices for the WWW client service. Finally, the internal structure and operation of the implemented software is presented.
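For illustration, the following is a minimal Python sketch of the kind of SIP instant message (an RFC 3428 MESSAGE request) an SME-compatible client might send. All addresses, hostnames, branch and tag values below are hypothetical; a real client would also handle authentication, retransmissions, and responses.

```python
import socket

def build_sip_message(sender: str, recipient: str, body: str) -> bytes:
    """Assemble a bare-bones SIP MESSAGE request as text."""
    payload = body.encode("utf-8")
    lines = [
        f"MESSAGE sip:{recipient} SIP/2.0",
        "Via: SIP/2.0/UDP client.example.com:5060;branch=z9hG4bK776sgdkse",
        "Max-Forwards: 70",
        f"From: <sip:{sender}>;tag=49583",
        f"To: <sip:{recipient}>",
        "Call-ID: asd88asd77a@client.example.com",
        "CSeq: 1 MESSAGE",
        "Content-Type: text/plain",
        f"Content-Length: {len(payload)}",
        "",
        "",
    ]
    return "\r\n".join(lines).encode("utf-8") + payload

# Send the request over UDP to a (hypothetical) SIP proxy.
request = build_sip_message("alice@example.com", "bob@example.com", "Hello Bob!")
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.sendto(request, ("proxy.example.com", 5060))
```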
Abstract:
The thesis presents an overview of third-generation IP telephony. The architecture of 3G IP Telephony and its components are described. The main goal of the thesis is to investigate the interface between the Call Processing Server and Multimedia IP Networks. The interface functionality, a proposed protocol stack, and a general description are presented in the thesis. To provide useful services, 3G IP Telephony requires a set of control protocols for connection establishment, capabilities exchange, and conference control. The Session Initiation Protocol (SIP) and H.323 are two protocols that meet these needs. In the thesis these two protocols are investigated and compared in terms of complexity, extensibility, scalability, services, resource utilization, and management.
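As a small illustration of the complexity and extensibility criteria, the sketch below builds a minimal SIP INVITE (the connection-establishment request) as plain text and parses it with ordinary string handling; all addresses and the extension header are hypothetical. H.323, by contrast, encodes its signalling in binary ASN.1, which is one root of the complexity difference the thesis discusses.

```python
invite = "\r\n".join([
    "INVITE sip:bob@example.com SIP/2.0",
    "Via: SIP/2.0/UDP client.example.com:5060;branch=z9hG4bKnashds8",
    "Max-Forwards: 70",
    "From: Alice <sip:alice@example.com>;tag=1928301774",
    "To: Bob <sip:bob@example.com>",
    "Call-ID: a84b4c76e66710@client.example.com",
    "CSeq: 314159 INVITE",
    "Contact: <sip:alice@client.example.com>",
    "X-Experimental-Feature: capability-hint",  # extension header: unknown headers are simply ignored
    "Content-Length: 0",
    "", "",
])

# A proxy or user agent can parse the request with ordinary string handling.
start_line, *header_lines = invite.split("\r\n")
method, request_uri, version = start_line.split(" ", 2)
headers = dict(line.split(": ", 1) for line in header_lines if ": " in line)
print(method, request_uri, headers["CSeq"])
```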
Abstract:
The idea of virtualization is to represent IT hardware resources as pools. When resources are needed for a task, they are drawn separately from each pool. One area of virtualization is server virtualization, which aims to utilize server hardware as efficiently as possible. The efficiency is achieved by using separate instances called virtual machines. This thesis presents and compares different server virtualization models and techniques that can be used with the IA-32 architecture. The difference between virtualization and various partitioning techniques is examined separately. In addition, the changes that server virtualization causes to the infrastructure, environment, and hardware are discussed at a general level. The correctness of the theory was verified by running several tests with two different virtualization software products. Based on the tests, server virtualization reduces performance and creates an environment that is harder to manage than a traditional one. Security must also be viewed from a new perspective, since physical isolation cannot be provided for virtual machines. To gain the greatest benefit from virtualization in a production environment, careful consideration and planning are required. The best use cases are various test environments, where the requirements on performance and security are less strict.
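As an illustration of how such performance tests can be set up, here is a minimal Python sketch of a CPU micro-benchmark that could be run once on the bare host and once inside a virtual machine to quantify the overhead; the workload and repetition counts are illustrative choices, not the thesis's actual test suite.

```python
import time
import hashlib

def cpu_workload(rounds: int = 200_000) -> str:
    """Repeatedly hash a buffer to keep the CPU busy."""
    digest = b"seed"
    for _ in range(rounds):
        digest = hashlib.sha256(digest).digest()
    return digest.hex()

def benchmark(repeats: int = 5) -> float:
    """Return the best wall-clock time over several runs."""
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        cpu_workload()
        timings.append(time.perf_counter() - start)
    return min(timings)

# Run once on the host and once in the guest, then compare the ratio.
print(f"best of 5: {benchmark():.3f} s")
```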
Abstract:
The thesis examines application servers and middleware in general, together with the requirements placed on them. Particular attention is paid to the CORBA middleware technology used as the basis of the practical work. The main focus, however, is on the dynamic DII and DSI interfaces implemented in the practical part. The end of the theory section introduces the CVOPS tool used and the application server to which the dynamic interface is added. Dynamic invocation support is added to the application server's CVOPS-ORB system component, whose operation and architecture are described. The practical part covers the implementation phases of the dynamic interface and plans for further development. The dynamic invocation and service interface implemented in this work makes it possible to send and receive requests dynamically. It adds flexibility to client and server implementations, but it is more complex to implement and has lower performance than a static interface.
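CORBA's DII and DSI are language-mapped APIs and not available in every Python ORB, so the sketch below only illustrates the underlying idea in plain Python: the operation name and arguments of a request are assembled at run time rather than compiled in from IDL stubs. The servant class and operation names are hypothetical, not CORBA API calls.

```python
class AccountServant:
    """Stands in for a servant that would normally be generated from IDL."""
    def deposit(self, amount: float) -> float:
        self.balance = getattr(self, "balance", 0.0) + amount
        return self.balance

def dynamic_invoke(servant: object, operation: str, *args):
    """Dispatch a request whose operation name is known only at run time,
    the way a DSI-based server routes incoming dynamic requests."""
    target = getattr(servant, operation, None)
    if target is None:
        raise AttributeError(f"servant has no operation '{operation}'")
    return target(*args)

servant = AccountServant()
# Static style: the call is fixed when the client is written.
servant.deposit(100.0)
# Dynamic style: operation name and arguments arrive as data.
print(dynamic_invoke(servant, "deposit", 50.0))  # -> 150.0
```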
Abstract:
The theory part of the work first introduces the concept of location information and services that make use of it. It also covers positioning in wireless local area networks, and in particular positioning in the network used in this thesis. The theory part further discusses the benefits and drawbacks of location-based services, reviews the currently most common instant messaging architectures, and takes a closer look at the protocol used by the Jabber instant messaging software. Finally, legal aspects of using location information and the protection of personal privacy are examined. The practical part of the thesis describes the implementation of a location-aware server component built on the Jabber architecture. The Jabber server software and the implemented component run in a wireless local area network (WLPR.NET) maintained by the Department of Communications Engineering at Lappeenranta University of Technology. Network users can register as users of the service, after which the server component keeps track of the registered users' location information and its changes. In addition, users can look up other users' location information with a search function in the client software. The users' location information is obtained using already existing technology.
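As an illustration, the following Python sketch builds the kind of Jabber/XMPP stanza such a server component could use to publish a user's location, loosely following the geoloc payload of XEP-0080 (User Location); the JIDs, coordinates, and routing are hypothetical.

```python
import xml.etree.ElementTree as ET

GEOLOC_NS = "http://jabber.org/protocol/geoloc"

def build_location_iq(from_jid: str, to_jid: str, lat: float, lon: float) -> str:
    """Build an IQ stanza carrying a geoloc payload with the user's coordinates."""
    iq = ET.Element("iq", {"type": "set", "from": from_jid, "to": to_jid, "id": "loc1"})
    geoloc = ET.SubElement(iq, f"{{{GEOLOC_NS}}}geoloc")
    ET.SubElement(geoloc, f"{{{GEOLOC_NS}}}lat").text = str(lat)
    ET.SubElement(geoloc, f"{{{GEOLOC_NS}}}lon").text = str(lon)
    return ET.tostring(iq, encoding="unicode")

# Hypothetical JIDs and coordinates for the example.
print(build_location_iq("user@wlpr.net", "location.wlpr.net", 61.059, 28.186))
```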
Abstract:
The performance of a hydrologic model depends on the rainfall input data, both spatially and temporally. As the spatial distribution of rainfall exerts a great influence on both runoff volumes and peak flows, the use of a distributed hydrologic model can improve the results in the case of convective rainfall in a basin where the storm area is smaller than the basin area. The aim of this study was to analyse the sensitivity of the results of a distributed hydrologic model to the time resolution of the rainfall input in a flash-flood prone basin. Within such a catchment, floods are produced by heavy rainfall events with a large convective component. A second objective of the current paper is to propose a methodology that improves radar rainfall estimation at a higher spatial and temporal resolution. Composite radar data from a network of three C-band radars, with 6-min temporal and 2 × 2 km² spatial resolution, were used to feed the RIBS distributed hydrological model. A modification of the Window Probability Matching Method (a gauge-adjustment method) was applied to four cases of heavy rainfall to correct the underestimation of the observed rainfall by computing new Z/R relationships for both convective and stratiform reflectivities. An advection correction technique based on the cross-correlation between two consecutive images was introduced to obtain several time resolutions from 1 min to 30 min. The RIBS hydrologic model was calibrated using a probabilistic approach based on a multiobjective methodology for each time resolution. A sensitivity analysis of rainfall time resolution was conducted to find the resolution that best represents the hydrological basin behaviour.
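The Z/R step can be illustrated with a short sketch: reflectivity in dBZ is converted to linear Z and inverted through Z = aR^b, with separate coefficients for convective and stratiform echoes. The coefficient values and the reflectivity threshold below are illustrative placeholders, not the ones fitted in the study.

```python
import numpy as np

def rain_rate(dbz: np.ndarray, a: float, b: float) -> np.ndarray:
    """Invert Z = a * R**b for R, with Z in linear units (mm^6/m^3)."""
    z_linear = 10.0 ** (dbz / 10.0)
    return (z_linear / a) ** (1.0 / b)

dbz = np.array([[28.0, 35.0], [47.0, 52.0]])       # radar reflectivity field (dBZ)
convective = dbz >= 40.0                            # crude convective/stratiform split
rain = np.where(convective,
                rain_rate(dbz, a=300.0, b=1.4),     # convective Z/R coefficients
                rain_rate(dbz, a=200.0, b=1.6))     # stratiform, Marshall-Palmer-like
print(rain.round(2))                                # rain rate in mm/h per cell
```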
Abstract:
The shortening development times of mobile phones are also accelerating the development of mobile phone software. New features and software components should be partially implemented and tested before the actual hardware is ready. This brings challenges to software development and testing environments, especially on the user interface side. New features should be testable in an environment that has the look and feel of a real phone. Simulation environments are used to model real mobile phones, which makes it possible to execute software for a mobile phone that does not yet exist. The purpose of this thesis is to integrate the Socket Server software component into the Series 40 simulation environments on the Linux and Windows platforms. Socket Server provides TCP/IP connectivity for applications; all other software and hardware components below Socket Server do not exist in the simulation environments. The scope of this work is to clarify how the integration can be done without connectivity problems, covering the design, implementation, and testing phases.
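As a minimal illustration of the TCP/IP connectivity a socket server offers to applications, the sketch below reduces the idea to a loopback echo server that a simulated application could connect to on either platform; the port number is an arbitrary choice for the example.

```python
import socketserver

class EchoHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Echo each received line back to the connected application.
        for line in self.rfile:
            self.wfile.write(line)

if __name__ == "__main__":
    # Serve on loopback so simulated applications on the same host can connect.
    with socketserver.TCPServer(("127.0.0.1", 5000), EchoHandler) as server:
        server.serve_forever()
```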
Abstract:
Mortgage loans are long-term loans arranged through a bank or another financial institution, which in return charges interest and takes your home as collateral for repayment of the loan. The translation of this type of text can be approached as a sworn translation, depending on the client's commission. Prior research on the subject, in both the source and target cultures, is important. The large number of technical terms that may appear in the text must also be taken into account, and hence the need to use dictionaries and terminology databases specialized in the field. This work is a translation and translational analysis of a mortgage loan from the State of Massachusetts into Catalan.
Abstract:
Snow cover is an important control in mountain environments, and a shift of the snow-free period triggered by climate warming can strongly impact ecosystem dynamics. Changing snow patterns can have severe effects on alpine plant distribution and diversity. It thus becomes urgent to provide spatially explicit assessments of snow cover changes that can be incorporated into correlative or empirical species distribution models (SDMs). Here, we provide for the first time a comparison of two physically based snow distribution models (PREVAH and SnowModel) used to produce snow cover maps (SCMs) at a fine spatial resolution in a mountain landscape in Austria. The SCMs were evaluated with SPOT-HRVIR images, and the predictions of snow water equivalent from the two models with ground measurements. Finally, the SCMs of the two models were compared under a climate warming scenario for the end of the century. The predictive performances of PREVAH and SnowModel were similar when validated with the SPOT images. However, the tendency to overestimate snow cover was slightly lower with SnowModel during the accumulation period, whereas it was lower with PREVAH during the melting period. The rate of true positives during the melting period was on average two times higher with SnowModel, with a lower overestimation of snow water equivalent. Our results support recommending SnowModel for use in SDMs because it better captures persisting snow patches at the end of the snow season, which is important when modelling the response of species to long-lasting snow cover and evaluating whether they might survive under climate change.
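The cell-by-cell validation against satellite imagery can be sketched as a confusion matrix over binary snow/no-snow grids, from which the true-positive rate and an overestimation measure follow; the grids below are synthetic examples, not SPOT data.

```python
import numpy as np

def validation_scores(predicted: np.ndarray, observed: np.ndarray) -> dict:
    """Confusion-matrix scores for binary snow/no-snow maps."""
    tp = np.sum(predicted & observed)      # snow predicted and observed
    fp = np.sum(predicted & ~observed)     # snow predicted, none observed
    fn = np.sum(~predicted & observed)     # snow missed by the model
    return {
        "true_positive_rate": tp / (tp + fn),
        "overestimation_rate": fp / (tp + fp),
    }

observed  = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)   # e.g., satellite-derived map
predicted = np.array([[1, 1, 1], [0, 1, 0]], dtype=bool)   # e.g., model SCM
print(validation_scores(predicted, observed))
```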
Abstract:
Internationalization and the ensuing rapid growth have created the need to concentrate the IT systems of many small-to-medium-sized production companies. Enterprise Resource Planning (ERP) systems are a common solution for such companies. Deployment of an ERP system consists of many steps, one of which is the implementation of the same shared system at all international subsidiaries. This is also one of the most important steps in the internationalization strategy of the company from the IT point of view. The mechanical process of creating the required connections for the off-shore sites is the easiest and best-documented step along the way, but the actual value of the system, once operational, lies in its operational reliability. The operational reliability of an ERP system is a combination of many factors, ranging from hardware- and connectivity-related issues to administrative tasks and communication between decentralized administrative units and sites. To accurately analyze the operational reliability of such a system, one must take into consideration the full functionality of the system, including not only the mechanical and systematic processes but also the users and their administration. Operational reliability in an international environment relies heavily on hardware and telecommunication adequacy, so it is imperative to have resources dimensioned with regard to planned usage. Still, with poorly maintained communication and administration schemes, no amount of bandwidth or memory will be enough to maintain a productive level of reliability. This thesis analyzes the implementation of a shared ERP system at an international subsidiary of a Finnish production company. The system is Microsoft Dynamics Ax, currently being introduced at a Slovakian facility, a subsidiary of Peikko Finland Oy. The primary task is to create a feasible basis of analysis against which the operational reliability of the system can be evaluated precisely. With a solid analysis, the aim is to give recommendations on how future implementations should be managed.
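One simple, illustrative way to make "operational reliability" measurable is steady-state availability per component, MTBF / (MTBF + MTTR), combined over the serial chain the ERP traffic traverses. The figures in the sketch below are invented for the example, not measurements from the Peikko deployment.

```python
def availability(mtbf_h: float, mttr_h: float) -> float:
    """Steady-state availability = MTBF / (MTBF + MTTR), both in hours."""
    return mtbf_h / (mtbf_h + mttr_h)

# Hypothetical components between a subsidiary user and the shared ERP.
components = {
    "ERP application server": availability(mtbf_h=2000.0, mttr_h=4.0),
    "WAN link to subsidiary": availability(mtbf_h=700.0, mttr_h=2.0),
    "site infrastructure":    availability(mtbf_h=4000.0, mttr_h=8.0),
}

total = 1.0
for name, a in components.items():
    total *= a  # serial chain: every component must be up for the ERP to work
    print(f"{name}: {a:.4%}")
print(f"end-to-end availability: {total:.4%}")
```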
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces provide web users with online access to myriads of databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterization of the deep Web, finding and classifying deep web resources, and querying web databases.

Characterizing the deep Web: Though the term deep Web was coined in 2000, a long time ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that the surveys of the deep Web conducted so far are predominantly based on studies of deep web sites in English. One can therefore expect that findings from these surveys may be biased, especially given the steady increase in non-English web content. In this way, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from the national segment of the Web.

Finding deep web resources: The deep Web has been growing at a very fast pace. It has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that the search interfaces to the web databases of interest are already discovered and known to the query systems. However, such assumptions rarely hold, mostly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.

Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so as the interfaces of conventional search engines are also web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. The automation of querying and retrieving data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts, and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
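The form-querying step can be sketched as follows: fetch a page, locate its search form, collect the declared input fields (hidden ones included), fill in the user's query term, and submit. The URL and field names below are hypothetical, and a plain HTML parser like this cannot see the JavaScript-rich forms that the I-Crawler additionally handles.

```python
import urllib.parse

import requests
from bs4 import BeautifulSoup

def query_search_form(page_url: str, query_field: str, query_term: str) -> str:
    """Fetch a page, fill its first search form, and return the result page HTML."""
    page = requests.get(page_url, timeout=10)
    form = BeautifulSoup(page.text, "html.parser").find("form")

    # Start from the form's declared defaults (hidden fields included).
    data = {inp.get("name"): inp.get("value", "")
            for inp in form.find_all("input") if inp.get("name")}
    data[query_field] = query_term

    # Resolve the submission target and honour the declared HTTP method.
    action = urllib.parse.urljoin(page_url, form.get("action", page_url))
    if form.get("method", "get").lower() == "post":
        return requests.post(action, data=data, timeout=10).text
    return requests.get(action, params=data, timeout=10).text

# Hypothetical usage:
# result_html = query_search_form("http://example.com/search", "q", "databases")
```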