974 results for mobile telefonia back-end


Relevance:

100.00%

Publisher:

Abstract:

This Master's thesis examines the interoperability challenges of two military coalitions and their implementation with commercial tools. The work is based on two real integration projects employing service-oriented architecture (SOA) design, carried out in the Services unit of IBM Finland in 2006–2007. The goal of the work has been to study the methods, viewpoints, and technical implementation of system and information-exchange interoperability of military coalitions using commercial software products and a common data model. In addition, the special characteristics of the defence domain are presented with respect to the software development processes of information system vendors. To this end, existing software architectures and interoperability models intended for coalition use were studied and adapted to SOA architectural thinking. The theoretical basis of the work was the Layers of Coalition Interoperability (LCI) model, which describes organizational and technical interoperability; the technical part of the model was then used as a basis for developing an example system, built on SOA services, for information exchange between two fictional coalitions. The central result of the work is a design for connecting the coalitions' back-end systems, through dynamic SOA services, to the common JC3IEDM data model. This data model in turn makes it possible to extend the system, for example, to the needs of aid organizations, police forces, and healthcare organizations.
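The core integration idea, mapping each coalition's back-end representation into one shared model, can be sketched as follows. The adapter functions and field names below are invented for illustration and only loosely echo JC3IEDM-style position reporting; they are not the real schema or the thesis's implementation:

```python
# Each coalition keeps its own back-end record format; an adapter
# service maps it into the common model, so the parties only ever
# exchange data in the shared representation. All names are assumptions.
def adapter_a(record):
    # Coalition A uses Finnish field names and a coordinate tuple.
    return {"unit": record["yksikko"], "lat": record["sijainti"][0],
            "lon": record["sijainti"][1]}

def adapter_b(record):
    # Coalition B uses flat English field names.
    return {"unit": record["unit_name"], "lat": record["latitude"],
            "lon": record["longitude"]}

common = [adapter_a({"yksikko": "Alpha", "sijainti": (60.2, 24.9)}),
          adapter_b({"unit_name": "Bravo", "latitude": 60.1, "longitude": 25.0})]
# After adaptation, every record conforms to the same common model.
assert all(set(r) == {"unit", "lat", "lon"} for r in common)
```

Extending the system to new partners (aid organizations, police forces) then amounts to writing one more adapter against the common model.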

Relevance:

100.00%

Publisher:

Abstract:

Technological progress has made a huge amount of data available at increasing spatial and spectral resolutions. Therefore, the compression of hyperspectral data is an area of active research. In some fields, the original quality of a hyperspectral image cannot be compromised, and in these cases lossless compression is mandatory. The main goal of this thesis is to provide improved methods for the lossless compression of hyperspectral images. Both prediction-based and transform-based methods are studied. Two kinds of prediction-based methods are examined. In the first method, the spectra of a hyperspectral image are first clustered and an optimized linear predictor is calculated for each cluster. In the second prediction method, the linear prediction coefficients are not fixed but are recalculated for each pixel. A parallel implementation of the above-mentioned linear prediction method is also presented. In addition, two transform-based methods are presented. Vector Quantization (VQ) was used together with a new coding of the residual image. We have also developed a new back end for a compression method utilizing Principal Component Analysis (PCA) and the Integer Wavelet Transform (IWT). The performance of the compression methods is compared to that of other compression methods. The results show that the proposed linear prediction methods outperform the previous methods. In addition, a novel fast exact nearest-neighbor search method is developed; it is used to speed up the Linde-Buzo-Gray (LBG) clustering method.
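The principle behind prediction-based lossless coding can be shown in a minimal sketch: fit a linear predictor, store only the (typically small) integer residuals, and reconstruct exactly on decode. This is a one-coefficient toy for a single spectrum, not the thesis's clustered or per-pixel predictor:

```python
# Minimal sketch of lossless linear-prediction coding along the spectral
# axis of one pixel. Rounding the prediction keeps residuals integer, so
# the round trip is exact. Function names are illustrative.
def fit_predictor(spectrum):
    """Least-squares coefficient a for the model x[i] ~ a * x[i-1]."""
    num = sum(spectrum[i] * spectrum[i - 1] for i in range(1, len(spectrum)))
    den = sum(v * v for v in spectrum[:-1])
    return num / den if den else 0.0

def encode(spectrum):
    a = fit_predictor(spectrum)
    residuals = [spectrum[0]]  # first sample stored verbatim
    for i in range(1, len(spectrum)):
        residuals.append(spectrum[i] - round(a * spectrum[i - 1]))
    return a, residuals

def decode(a, residuals):
    out = [residuals[0]]
    for r in residuals[1:]:
        out.append(r + round(a * out[-1]))  # same prediction, inverted
    return out

spectrum = [120, 118, 121, 119, 117, 122]
a, res = encode(spectrum)
assert decode(a, res) == spectrum  # lossless round trip
```

In the thesis's clustered variant, one such predictor (with more taps) would be fitted per cluster of spectra rather than per pixel.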

Relevance:

100.00%

Publisher:

Abstract:

The goal of this Master's thesis was to determine, by means of experimental research, the behaviour and distribution of flows in the plate pack structure of brazed plate heat exchangers, and to find ideas and development proposals for improving the plate pack and plate profile. The experimental study was carried out in the plate heat exchanger research laboratory of Oy Danfoss Ab LPM. For studying the flow distribution, a measurement setup was designed and selected, consisting of thermocouple sensors, data acquisition hardware, and software. The volumetric flows and pressure losses of the primary and secondary sides were measured, as well as the temperatures before and after the heat exchanger. With the measurement setup, temperatures were also measured inside the heat exchanger, in the channels between the plates. Measurements were performed on four plate pack structures at several mass flow rates. From the measurement results, the thermal and flow characteristics of the plate heat exchangers were determined as a function of the Reynolds number of the liquid, and the flow distributions were determined. The flow distribution values calculated from the measurements were compared with distributions calculated from theory. The mass flows calculated from the measured exchangers suggest that most of the liquid flows through the middle of the exchanger, or closer to the far end than the inlet end. According to theory, the largest amount of liquid should flow at the inlet end of the exchanger, decreasing steadily towards the end of the plate pack. No connection was found between the theoretical flow distribution and the calculated distributions. Large temperature differences, even over 20 degrees, were observed in the flows exiting the plate channels. Studying the behaviour and distribution of flow in the plate channels is therefore seen as a more interesting target for development than the longitudinal development of the plate pack.
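Since the thermal and flow characteristics are reported as a function of the liquid's Reynolds number, a reminder of the definition may help; the numbers below are illustrative water-like values, not the thesis's measurements:

```python
# Re = rho * v * d_h / mu, with density rho (kg/m^3), mean velocity v
# (m/s), hydraulic diameter d_h (m) and dynamic viscosity mu (Pa*s).
# The example values are assumptions for a narrow plate channel.
def reynolds(rho, velocity, hydraulic_diameter, viscosity):
    return rho * velocity * hydraulic_diameter / viscosity

re = reynolds(rho=998.0, velocity=0.5, hydraulic_diameter=0.004, viscosity=0.001)
assert 1990 < re < 2000  # roughly 1996 for these illustrative values
```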

Relevance:

100.00%

Publisher:

Abstract:

The goal of this thesis is to determine the effects of using electronic goods delivery documents on the downstream (back-end) operations of the supply chain of a company offering procurement and logistics services. In addition, an operating model is constructed for the company's use, following the framework of the constructive research method. With the operating model, the case company's possibilities to improve supply chain transparency, to make information transfer between stakeholders more efficient, and to reduce errors and manual work in current processes are examined. The core of the work is built around a survey conducted among the transport companies used by the case company, a precisely described current state, and a construction built for the practical problem. The transport company survey gives a usable picture of the transport companies' current state and their future plans regarding electronic goods delivery documents. The description of the current state highlights processes and functions whose operation can be materially improved through information technology and process changes. In building the construction, the main emphasis is on achieving a transparent and efficient supply chain. The construction of the thesis, an operating model for electronic goods delivery documents, is examined from the viewpoint of the company's business by reflecting the expected improvements against the earlier theoretical base. In addition, the suitability of the operating model for solving a real-world problem is evaluated with a SWOT analysis performed by a management panel consisting of the thesis steering group, and with a weak market test. The improvement in the efficiency of operations between stakeholders is evaluated using qualitative methods.

Relevance:

100.00%

Publisher:

Abstract:

The goal of the work was to study innovation and an organization's innovation capability, the background factors of innovation capability, and the management of the front end of the innovation process (the Fuzzy Front End, FFE) and of the decision-making that takes place there. A further goal was to design an operating model for the front end of the innovation process to clarify activity there, and to give action proposals and recommendations. The theoretical part of the study was carried out as a literature review. The empirical part was carried out as a case analysis, in the form of personnel interviews and action research in a company. Operating models have been identified for the front end of the innovation process that clarify and enhance its phases. The phases are opportunity identification, opportunity analysis, idea generation, idea selection, and concept and technology development. Alongside the innovation process runs a decision-making process, for which clear decision points and criteria for advancing in the process are identified. Both the company's internal stakeholders, such as personnel, and external stakeholders, such as customers, suppliers, and network partners, participate in the innovation and decision-making process at different phases. In addition, the operation of the innovation process is affected by management support and commitment, the participants' capacity for creativity, and other background factors of innovation capability. All these factors must be taken into account when designing a model for the front end of the innovation process. The study was carried out for the needs of a telecommunications company. The company has an employee suggestion scheme in place, but it is not felt to provide enough ideas for the needs of the company's product development. The innovation potential of the company's personnel is high, and the company wants to exploit it better by designing a standardized operating model, suited to the company's use, that guides activity at the front end of the innovation process and involves personnel and other partners, such as customers.
As action proposals and recommendations, an operating model for managing the front end of the innovation process is presented. The proposed model defines the phases, methods, decision-making, and responsibilities. The operating model is presented so that it can be combined with the model already in use in the company for the back end of the innovation process, that is, for carrying out product development projects.

Relevance:

100.00%

Publisher:

Abstract:

Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

100.00%

Publisher:

Abstract:

Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

100.00%

Publisher:

Abstract:

The emerging technologies have recently challenged libraries to reconsider their role as a mere mediator between collections, researchers, and wider audiences (Sula, 2013), and libraries, especially nationwide institutions like national libraries, have not always managed to face the challenge (Nygren et al., 2014). In the Digitization Project of Kindred Languages, the National Library of Finland has become a node that connects the partners to interplay and work for shared goals and objectives. In this paper, I will draw a picture of the crowdsourcing methods that have been established during the project to support both linguistic research and lingual diversity. The National Library of Finland has been executing the Digitization Project of Kindred Languages since 2012. The project seeks to digitize and publish approximately 1,200 monograph titles and more than 100 newspaper titles in various, in some cases endangered, Uralic languages. Once the digitization has been completed in 2015, the Fenno-Ugrica online collection will consist of 110,000 monograph pages and around 90,000 newspaper pages, to which all users will have open access regardless of their place of residence. The majority of the digitized literature was originally published in the 1920s and 1930s in the Soviet Union, the genesis and consolidation period of these literary languages. This was the era when many Uralic languages were converted into media of popular education, enlightenment, and dissemination of information pertinent to the developing political agenda of the Soviet state. The 'deluge' of popular literature in the 1920s to 1930s suddenly challenged the lexical and orthographic norms of the limited ecclesiastical publications from the 1880s onward. Newspapers were now written in orthographies and in word forms that the locals would understand. Textbooks were written to address the separate needs of both adults and children. New concepts were introduced in the language.
This was the beginning of a renaissance and period of enlightenment (Rueter, 2013). The linguistically oriented population can also find writings to their delight, especially lexical items specific to a given publication and orthographically documented specifics of phonetics. The project is financially supported by the Kone Foundation in Helsinki and is part of the Foundation's Language Programme. One of the key objectives of the Kone Foundation Language Programme is to support a culture of openness and interaction in linguistic research, but also to promote citizen science as a tool for the participation of the language community in research. In addition to sharing this aspiration, our objective within the Language Programme is to make sure that old and new corpora in Uralic languages are made available for the open and interactive use of the academic community as well as the language societies. Wordlists are available in 17 languages, but without tokenization, lemmatization, and so on. This approach was verified with the scholars, and we consider the wordlists as raw data for linguists. Our data is used for creating the morphological analyzers and online dictionaries at the Helsinki and Tromsø Universities, for instance. In order to reach these targets, we will produce not only the digitized materials but also development tools for supporting linguistic research and citizen science. The Digitization Project of Kindred Languages is thus linked with research on language technology. The mission is to improve the usage and usability of digitized content. During the project, we have advanced methods that refine the raw data for further use, especially in linguistic research. How does the library meet these objectives, which appear to be beyond its traditional playground?
The written materials from this period are a gold mine, so how could we retrieve these hidden treasures of languages out of a stack that contains more than 200,000 pages of literature in various Uralic languages? The problem is that the machine-encoded (OCRed) text often contains too many mistakes to be used as such in research, so the mistakes in the OCRed texts must be corrected. For enhancing the OCRed texts, the National Library of Finland developed an open-source OCR editor that enables the editing of machine-encoded text for the benefit of linguistic research. Implementing this tool was necessary because these rare and peripheral prints often include characters that have since perished, which are sadly neglected by modern OCR software developers but belong to the historical context of the kindred languages and are thus an essential part of the linguistic heritage (van Hemel, 2014). Our crowdsourcing application is essentially an editor for the ALTO XML format. It consists of a back-end for managing users, permissions, and files, communicating through a REST API with a front-end interface, that is, the actual editor for correcting the OCRed text. The enhanced XML files can be retrieved from the Fenno-Ugrica collection for further purposes. Could the crowd do this work to support academic research? The challenge in crowdsourcing lies in its nature. The targets in traditional crowdsourcing have often been split into several microtasks that do not require any special skills from the anonymous people, a faceless crowd. This way of crowdsourcing may produce quantitative results, but from the research point of view there is a danger that the needs of linguists are not necessarily met. Another remarkable downside is the lack of a shared goal or social affinity: there is no reward in the traditional methods of crowdsourcing (de Boer et al., 2012).
There has also been criticism that digital humanities makes the humanities too data-driven and oriented towards quantitative methods, losing the values of critical qualitative methods (Fish, 2012). On top of that, the downsides of traditional crowdsourcing become more imminent when you leave the Anglophone world. Our potential crowd is geographically scattered across Russia. This crowd is linguistically heterogeneous, speaking 17 different languages. In many cases the languages are close to extinction or in need of revitalization, and the native speakers do not always have Internet access, so an open call for crowdsourcing would not have produced satisfactory results for linguists. Thus, one has to identify carefully the potential niches that can complete the needed tasks. When using the help of a crowd in a project that aims to support both linguistic research and the survival of endangered languages, the approach has to be a different one. In nichesourcing, the tasks are distributed amongst a small crowd of citizen scientists (communities). Although communities provide smaller pools to draw resources from, their specific richness in skill is suited for the complex tasks with high-quality product expectations found in nichesourcing. Communities have a purpose and identity, and their regular interaction engenders social trust and reputation. These communities can correspond to research needs more precisely (de Boer et al., 2012). Instead of repetitive and rather trivial tasks, we are trying to utilize the knowledge and skills of citizen scientists to provide qualitative results. In nichesourcing, we hand out assignments that precisely fill the gaps in linguistic research. A typical task would be editing and collecting words in those fields of vocabulary where the researchers require more information; for instance, there is a lack of Hill Mari words and terminology in anatomy.
We have digitized books on medicine, and we could try to track the words related to human organs by assigning the citizen scientists to edit and collect words with the OCR editor. From the nichesourcing perspective, it is essential that altruism play a central role when the language communities are involved. In nichesourcing, our goal is to reach a certain level of interplay, where the language communities would benefit from the results. For instance, the corrected words in Ingrian will be added to an online dictionary, which is made freely available to the public, so the society can benefit, too. This objective of interplay can be understood as an aspiration to support the endangered languages and the maintenance of lingual diversity, but also as a servant of 'two masters': research and society.
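The shape of such an ALTO XML correction workflow can be sketched in a few lines: locate the word element a contributor corrected and update its content attribute. The snippet, IDs, and function are illustrative assumptions, not the National Library's actual editor code:

```python
import xml.etree.ElementTree as ET

# A toy ALTO-like fragment; real ALTO files carry coordinates, styles
# and namespaces, all omitted here for brevity.
ALTO_SNIPPET = """<alto><Layout><TextLine>
<String ID="w1" CONTENT="sydan"/>
<String ID="w2" CONTENT="lihas"/>
</TextLine></Layout></alto>"""

def apply_correction(alto_xml, string_id, corrected):
    """Replace the CONTENT of one String element with a crowd correction."""
    root = ET.fromstring(alto_xml)
    for s in root.iter("String"):
        if s.get("ID") == string_id:
            s.set("CONTENT", corrected)  # store the corrected word form
    return ET.tostring(root, encoding="unicode")

fixed = apply_correction(ALTO_SNIPPET, "w1", "sydän")
assert 'CONTENT="sydän"' in fixed  # the perished character is restored
```

In the real system this update would arrive through the REST API from the front-end editor, and the enhanced file would flow back into the Fenno-Ugrica collection.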

Relevance:

100.00%

Publisher:

Abstract:

With the growth of new technologies, using online tools has become part of everyday life. This has a large impact on researchers, as the data obtained from various experiments needs to be analyzed, and knowledge of programming has become mandatory even for pure biologists. Hence, VTT came up with a new tool, R Executables (REX), a web application designed to provide a graphical interface for biological data functions such as image analysis, gene expression data analysis, plotting, and disease and control studies, which employs R functions to produce results. REX provides an interactive application in which biologists can directly enter values and run the required analysis with a single click. The program processes the given data in the background and prints results rapidly. Due to the growth of data and the load on the server, the interface developed problems concerning time consumption, a poor GUI, data storage issues, security, a minimally interactive user experience, and crashes with large amounts of data. This thesis describes the methods by which these problems were resolved, making REX a better application for the future. The old REX was developed using Python Django; now a new framework, Vaadin, has been adopted. Vaadin is a Java framework for developing web applications; programming with it is extremely similar to Java, with new rich components. Vaadin provides better security, better speed, and a good, interactive interface. In this thesis, a subset of REX functionality was selected, including IST bulk plotting and image segmentation, and implemented using Vaadin. I wrote 662 lines of code, with Vaadin as the front-end handler while the R language was used for back-end data retrieval, computation, and plotting. The application is optimized to allow further functionality to be migrated with ease from the old REX.
Future development will focus on including high-throughput screening functions along with gene expression database handling.

Relevance:

100.00%

Publisher:

Abstract:

The next generations of both biological engineering and computer engineering demand that control be exerted at the molecular level. Creating, characterizing and controlling synthetic biological systems may provide us with the ability to build cells that are capable of a plethora of activities, from computation to synthesizing nanostructures. To develop these systems, we must have a set of tools not only for synthesizing systems, but also designing and simulating them. The BioJADE project provides a comprehensive, extensible design and simulation platform for synthetic biology. BioJADE is a graphical design tool built in Java, utilizing a database back end, and supports a range of simulations using an XML communication protocol. BioJADE currently supports a library of over 100 parts with which it can compile designs into actual DNA, and then generate synthesis instructions to build the physical parts. The BioJADE project contributes several tools to Synthetic Biology. BioJADE in itself is a powerful tool for synthetic biology designers. Additionally, we developed and now make use of a centralized BioBricks repository, which enables the sharing of BioBrick components between researchers, and vastly reduces the barriers to entry for aspiring Synthetic Biologists.
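The idea of compiling a design into actual DNA from a parts library can be sketched as simple sequence lookup and concatenation. The registry, part names, and six-base sequences below are invented for illustration; they are not BioJADE's library or real BioBrick entries (real assembly also involves standardized scar sequences, omitted here):

```python
# A hypothetical parts registry mapping part names to DNA sequences.
REGISTRY = {
    "promoter_X": "TTGACA",
    "rbs_X": "AGGAGG",
    "gfp_cds": "ATGGTG",
    "terminator_X": "TAATAA",
}

def compile_design(parts, registry=REGISTRY):
    """'Compile' an ordered list of part names into a DNA string."""
    missing = [p for p in parts if p not in registry]
    if missing:
        raise KeyError(f"unknown parts: {missing}")
    return "".join(registry[p] for p in parts)

dna = compile_design(["promoter_X", "rbs_X", "gfp_cds", "terminator_X"])
assert dna == "TTGACAAGGAGGATGGTGTAATAA"
```

A centralized repository, as described above, then lets different researchers resolve the same part names to the same sequences.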

Relevance:

100.00%

Publisher:

Abstract:

Getting content from server to client can be more complicated than we have discussed so far. This lecture discusses how caching and content delivery networks help to make the Web work.
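The benefit of caching at the edge can be shown with a toy model: a CDN edge node answers repeat requests locally and only contacts the origin on a miss. The class, capacity, and LRU eviction policy are illustrative choices, not taken from the lecture:

```python
from collections import OrderedDict

# A toy edge cache: repeat requests are served locally; only misses
# cost an origin round trip. Eviction is least-recently-used (LRU).
class EdgeCache:
    def __init__(self, capacity=2):
        self.capacity = capacity
        self.store = OrderedDict()
        self.origin_hits = 0

    def fetch(self, url):
        if url in self.store:
            self.store.move_to_end(url)       # refresh recency on a hit
            return self.store[url]
        self.origin_hits += 1                 # simulate an origin round trip
        body = f"content of {url}"
        self.store[url] = body
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)    # evict least recently used
        return body

cache = EdgeCache()
cache.fetch("/a"); cache.fetch("/a"); cache.fetch("/b")
assert cache.origin_hits == 2  # the second "/a" was served from the edge
```

Real CDNs add validity rules on top of this (e.g. `Cache-Control` lifetimes), so an edge node also knows *when* a stored copy may still be served.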

Relevance:

100.00%

Publisher:

Abstract:

The paper presents how workflow-oriented, single-user Grid portals could be extended to meet the requirements of users with collaborative needs. Through collaborative Grid portals different research and engineering teams would be able to share knowledge and resources. At the same time the workflow concept assures that the shared knowledge and computational capacity is aggregated to achieve the high-level goals of the group. The paper discusses the different issues collaborative support requires from Grid portal environments during the different phases of the workflow-oriented development work. While in the design period the most important task of the portal is to provide consistent and fault-tolerant data management, during the workflow execution it must act upon the security framework its back-end Grids are built on.
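The workflow concept at the heart of such portals can be sketched as tasks executed in dependency order; a real portal would dispatch each task to a back-end Grid rather than call it locally. Task names and the executor below are illustrative assumptions:

```python
# Minimal sketch of workflow execution: run each task only after its
# prerequisites, mimicking how a portal aggregates shared resources
# toward the group's high-level goal.
def run_workflow(tasks, deps):
    """tasks: name -> callable; deps: name -> list of prerequisite names."""
    done, order = set(), []
    def run(name):
        if name in done:
            return
        for d in deps.get(name, []):
            run(d)                    # execute prerequisites first
        tasks[name]()
        done.add(name)
        order.append(name)
    for name in tasks:
        run(name)
    return order

log = []
tasks = {n: (lambda n=n: log.append(n)) for n in ["stage", "simulate", "collect"]}
order = run_workflow(tasks, {"simulate": ["stage"], "collect": ["simulate"]})
assert order == ["stage", "simulate", "collect"]
```

In a collaborative setting, the consistency concern the paper raises amounts to ensuring that concurrent editors cannot leave `deps` and `tasks` in conflicting states.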

Relevance:

100.00%

Publisher:

Abstract:

Web services are loosely coupled applications that use XML documents as a way of integrating distinct systems on the Internet. Such documents are used in standards such as SOAP, WSDL, and UDDI, which establish, respectively, integrated patterns for the representation of messages, the description of services, and the publication of services, thus facilitating interoperability between heterogeneous systems. Often a single service does not meet the user's needs, so new systems can be designed from the composition of two or more services; this is the design goal behind the Service-Oriented Architecture. In parallel with this scenario, we have the PEWS (Predicate Path-Expressions for Web Services) language, which specifies behavioural specifications of composite web service interfaces. The development of the PEWS language is divided into two parts: front-end and back-end. From a PEWS program, the front-end performs lexical, syntactic, and semantic analysis of compositions and finally generates XML code. The function of the back-end is to execute the PEWS composition. This master's dissertation aims to: (i) reformulate the proposed architecture for the runtime system of the language, and (ii) implement the back-end for PEWS by using .NET Framework tools to execute PEWS programs with the Windows Workflow Foundation.
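What a composition back-end executes can be illustrated with two classic path-expression operators, sequence and parallel composition. This sketch is an assumption-laden toy in Python (the dissertation's back-end targets .NET and Windows Workflow Foundation), and the service names are invented:

```python
# Combinators for a tiny composition language: seq runs operands in
# order; par's operands have no ordering constraint (a real back-end
# would dispatch them concurrently; here they are just unordered).
def seq(*ops):
    def run(log):
        for op in ops:
            op(log)
    return run

def par(*ops):
    def run(log):
        for op in ops:
            op(log)
    return run

def call(service):
    """A stub web-service invocation that only records its name."""
    return lambda log: log.append(service)

log = []
composition = seq(call("login"), par(call("flight"), call("hotel")), call("pay"))
composition(log)
assert log[0] == "login" and log[-1] == "pay"
assert set(log[1:3]) == {"flight", "hotel"}
```

A front-end like the one described would parse the textual PEWS program into exactly this kind of operator tree before handing it to the back-end.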

Relevance:

100.00%

Publisher:

Abstract:

We propose a slow-wave MEMS phase shifter that can be fabricated using the CMOS back-end and an additional maskless post-process etch. The tunable phase shifter concept is formed by a conventional slow-wave transmission line. The metallic ribbons that form the patterned floating shield of this type of structure are released to allow motion when a control voltage is applied, which changes the characteristic impedance and the phase velocity. For this device a quality factor greater than 40 can be maintained, resulting in a figure of merit on the order of 0.7 dB/360° and a total area smaller than 0.14 mm² for a 60-GHz working frequency.
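The figure of merit quoted for phase shifters is insertion loss normalized to a full 360° of phase shift, which makes devices with different tuning ranges comparable. A quick check with illustrative numbers (not measured values from the paper):

```python
# FoM (dB/360°) = insertion loss * 360 / achieved phase shift.
# The 0.35 dB / 180° inputs below are assumptions for illustration.
def figure_of_merit(loss_db, phase_shift_deg):
    """Insertion loss normalized to a 360-degree phase shift (dB/360 deg)."""
    return loss_db * 360.0 / phase_shift_deg

fom = figure_of_merit(loss_db=0.35, phase_shift_deg=180.0)
assert abs(fom - 0.7) < 1e-9  # same order as the reported 0.7 dB/360°
```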

Relevance:

100.00%

Publisher:

Abstract:

The sustained demand for faster, more powerful chips has been met by the availability of chip manufacturing processes allowing for the integration of increasing numbers of computation units onto a single die. The resulting outcome, especially in the embedded domain, has often been called System-on-Chip (SoC) or Multi-Processor System-on-Chip (MPSoC). MPSoC design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnection. With the number of on-chip blocks presently ranging in the tens, and quickly approaching the hundreds, the novel issue of how best to provide on-chip communication resources is clearly felt. Networks-on-Chip (NoCs) are the most comprehensive and scalable answer to this design concern. By bringing large-scale networking concepts to the on-chip domain, they guarantee a structured answer to present and future communication requirements. The point-to-point connection and packet switching paradigms they involve are also of great help in minimizing wiring overhead and physical routing issues. However, as with any technology of recent inception, NoC design is still an evolving discipline. Several main areas of interest require deep investigation for NoCs to become viable solutions:
• The design of the NoC architecture needs to strike the best tradeoff among performance, features, and the tight area and power constraints of the on-chip domain.
• Simulation and verification infrastructure must be put in place to explore, validate, and optimize NoC performance.
• NoCs offer a huge design space, thanks to their extreme customizability in terms of topology and architectural parameters. Design tools are needed to prune this space and pick the best solutions.
• Even more so given their global, distributed nature, it is essential to evaluate the physical implementation of NoCs, to assess their suitability for next-generation designs and their area and power costs.
This dissertation focuses on all of the above points by describing a NoC architectural implementation called ×pipes; a NoC simulation environment within a cycle-accurate MPSoC emulator called MPARM; and a NoC design flow consisting of a front-end tool for optimal NoC instantiation, called SunFloor, and a set of back-end facilities for the study of NoC physical implementations. This dissertation proves the viability of NoCs for current and upcoming designs by outlining their advantages (along with a few tradeoffs) and by providing a full NoC implementation framework. It also presents some examples of additional extensions of NoCs, allowing e.g. for increased fault tolerance, and outlines where NoCs may find further application scenarios, such as in stacked chips.
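The packet-switching paradigm the dissertation builds on can be illustrated with dimension-ordered (XY) routing, a common deadlock-free choice for mesh NoCs: a packet travels fully along the X dimension, then along Y. The mesh coordinates are illustrative; ×pipes itself supports many topologies, and this is not its routing code:

```python
# XY routing on a 2D mesh: exhaust the X offset first, then the Y
# offset. Deterministic and, on a mesh, deadlock-free.
def xy_route(src, dst):
    x, y = src
    hops = [src]
    while x != dst[0]:
        x += 1 if dst[0] > x else -1   # one hop along X per iteration
        hops.append((x, y))
    while y != dst[1]:
        y += 1 if dst[1] > y else -1   # then hops along Y
        hops.append((x, y))
    return hops

path = xy_route((0, 0), (2, 1))
assert path == [(0, 0), (1, 0), (2, 0), (2, 1)]
```

Design-space tools like the SunFloor front-end mentioned above would, among other things, pick the topology and routing function that best fit an application's traffic under area and power constraints.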