954 results for 080302 Computer System Architecture
Abstract:
Wireless local area networks have gained great popularity over the past decades. This thesis covers the design and development of a user authentication system for a wireless multi-operator network. In a wireless multi-operator network, users can access the services of several different operators. Existing authentication methods and systems are first reviewed, after which an authentication system for wireless multi-operator networks is described. Two alternative solutions for the authentication system are presented, the so-called multi-session and single-session models. The multi-session model is the conventional approach to user authentication in computer systems: the user must identify and authenticate himself or herself to each network service separately. The single-session model aims at better reliability and usability: the user authenticates only once and can then access several services. The final part of the thesis describes the implementation of the designed system. In addition, alternative implementation approaches are proposed, the weaknesses of the system are analysed, and possibilities for further development are discussed.
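A minimal sketch of the single-session (single sign-on) idea contrasted with the multi-session model, assuming a hypothetical token-issuing authentication server and a shared secret; the actual protocol designed in the thesis is not reproduced here.

    import hashlib
    import hmac
    import time

    SECRET = b"shared-secret-of-the-auth-server"  # hypothetical shared key

    def issue_token(user_id: str, lifetime_s: int = 3600) -> str:
        """Single-session model: the user authenticates once and receives a token."""
        expires = str(int(time.time()) + lifetime_s)
        mac = hmac.new(SECRET, f"{user_id}|{expires}".encode(), hashlib.sha256).hexdigest()
        return f"{user_id}|{expires}|{mac}"

    def service_accepts(token: str) -> bool:
        """Any cooperating operator service can verify the token without re-authenticating the user."""
        user_id, expires, mac = token.split("|")
        expected = hmac.new(SECRET, f"{user_id}|{expires}".encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(mac, expected) and int(expires) > time.time()

    # Multi-session model, for contrast: each service would instead run its own
    # credential check (e.g. service.login(user_id, password)), once per service.
    token = issue_token("alice")
    print(service_accepts(token))  # True until the token expires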
Abstract:
This thesis presents a new way to provide location-dependent information to the users of wireless networks. The information is delivered to each user without knowing anything about the user's identity. HTTP was chosen as the application-layer protocol, which allows the system to deliver information to most users even though they use very different terminal devices. The system works as an extension of an intercepting web proxy server. Based on the contents of various databases, the system decides whether information is delivered or not. The system also includes simple software for locating users with the accuracy of a single access point. Although the presented solution aims at providing location-based advertisements, it can easily be adapted to delivering any type of information to users.
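A minimal sketch of the decision step described above, with hypothetical access-point identifiers and content; the HTTP interception itself, performed by the proxy extension, is omitted. Only the client address and the serving access point are needed, never the user's identity.

    # Hypothetical content-decision step of the intercepting proxy extension.
    LOCATION_CONTENT = {                 # access point id -> local information (assumed data)
        "ap-cafeteria": "Lunch offer next door until 14:00",
        "ap-library":   "The library closes at 20:00 today",
    }
    ALREADY_SERVED = set()               # (client ip, access point) pairs already served

    def content_for(client_ip: str, access_point: str):
        """Return location-dependent content to inject, or None to pass the page through untouched."""
        key = (client_ip, access_point)
        if access_point in LOCATION_CONTENT and key not in ALREADY_SERVED:
            ALREADY_SERVED.add(key)
            return LOCATION_CONTENT[access_point]
        return None

    print(content_for("10.0.0.7", "ap-library"))  # "The library closes at 20:00 today"
    print(content_for("10.0.0.7", "ap-library"))  # None (each user is served only once)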
Abstract:
In this study, performance measurement, a part of the research and development of the RNC, was improved by implementing counter testing in the Nokia Automation System. Automation of counter testing is a feature ordered by the customer, because performing counter testing manually is rather complex. The objective was to implement an automated counter testing system which, once configured correctly, would run the tests and perform the analysis. The requirements for counter testing were studied first. The feasibility of automating the feature was investigated in meetings with the customer, and the basic functionality required for the automation was outlined. The technologies used in the architecture of the Nokia Automation System were studied. Based on the results of the study, a new technology, wxWidgets, was introduced; it was necessary to facilitate the implementation of the required feature. Finally, the counter testing feature was specified and implemented. The result of this study is the automated counter testing method developed as a new feature of the Nokia Automation System. The feature meets the specifications and requirements set by the customer. Execution of counter testing is fully automated; only the configuration of the test cases is done by the user. The customer has presented new requests to develop the feature further, and the Nokia Automation System developers plan to implement them in the near future. The study describes the implementation of the counter testing feature, and its results give guidelines for further development of the feature.
Abstract:
Background: Antiretroviral therapy has changed the natural history of human immunodeficiency virus (HIV) infection in developed countries, where it has become a chronic disease. This clinical scenario requires a new approach to simplify follow-up appointments and facilitate access to healthcare professionals. Methodology: We developed a new internet-based home care model covering the entire management of chronic HIV-infected patients, called Virtual Hospital. We report the results of a prospective randomised study performed over two years, comparing the standard care received by HIV-infected patients with Virtual Hospital care. HIV-infected patients with access to a computer and broadband were randomised to be monitored either through Virtual Hospital (Arm I) or through standard care at the day hospital (Arm II). After one year of follow-up, patients switched their care to the other arm. Virtual Hospital offered four main services: Virtual Consultations, Telepharmacy, Virtual Library and Virtual Community. A technical and clinical evaluation of Virtual Hospital was carried out. Findings: Of the 83 randomised patients, 42 were monitored during the first year through Virtual Hospital (Arm I) and 41 through standard care (Arm II). Baseline characteristics of the patients were similar in the two arms. The level of technical satisfaction with the virtual system was high: 85% of patients considered that Virtual Hospital improved their access to clinical data, and they felt comfortable with the videoconference system. Neither the clinical parameters [level of CD4+ T lymphocytes, proportion of patients with an undetectable level of viral load (p = 0.21) and compliance levels 90% (p = 0.58)] nor the evaluation of quality of life or psychological questionnaires changed significantly between the two types of care. Conclusions: Virtual Hospital is a feasible and safe tool for the multidisciplinary home care of chronic HIV patients. Telemedicine should be considered as an appropriate support service for the management of chronic HIV infection.
Abstract:
As the development of integrated circuit technology continues to follow Moore's law, the complexity of circuits increases exponentially. Traditional hardware description languages such as VHDL and Verilog are no longer powerful enough to cope with this level of complexity and do not provide facilities for hardware/software codesign. Languages such as SystemC are intended to solve these problems by combining the expressive power of high-level programming languages with the hardware-oriented facilities of hardware description languages. To fully replace older languages in the design flow of digital systems, SystemC should also be synthesizable. The devices required by modern high-speed networks often share the same tight constraints on, for example, size, power consumption and price with embedded systems, but they also have very demanding real-time and quality-of-service requirements that are difficult to satisfy with general-purpose processors. Dedicated hardware blocks of an application-specific instruction set processor are one way to combine fast processing speed, energy efficiency, flexibility and relatively low time-to-market. Common features can be identified in the network processing domain, making it possible to develop specialized but configurable processor architectures. One such architecture is TACO, which is based on the transport triggered architecture. The architecture offers a high degree of parallelism and modularity and greatly simplified instruction decoding. For this M.Sc. (Tech.) thesis, a simulation environment for the TACO architecture was developed with SystemC 2.2, using an older version written in SystemC 1.0 as a starting point. The environment enables rapid design space exploration by providing facilities for hardware/software codesign and simulation and an extendable library of automatically configured reusable hardware blocks. Other topics covered are the differences between SystemC 1.0 and 2.2 from the viewpoint of hardware modeling, and the compilation of a SystemC model into synthesizable VHDL with the Celoxica Agility SystemC Compiler. A simulation model of a processor for TCP/IP packet validation was designed and tested as a test case for the environment.
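The transport triggered idea mentioned above can be illustrated with a toy sketch: a program consists only of data transports between ports, and writing to a functional unit's trigger port is what starts its operation. This is a hypothetical Python illustration of the general TTA concept, not the SystemC model developed in the thesis.

    class FunctionalUnit:
        """Toy TTA-style functional unit: an operand port plus a trigger port."""
        def __init__(self, operation):
            self.operation = operation
            self.operand = 0      # plain operand port
            self.result = 0       # result port, read by later transports
        def trigger(self, value):
            """Writing to the trigger port starts the operation."""
            self.result = self.operation(self.operand, value)

    adder = FunctionalUnit(lambda a, b: a + b)

    # A TTA "program" is just a sequence of transports on the interconnection network:
    adder.operand = 2        # move: immediate 2 -> adder operand port
    adder.trigger(3)         # move: immediate 3 -> adder trigger port (starts the add)
    print(adder.result)      # 5, read back with a further move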
Abstract:
This thesis seeks to answer whether communication challenges in virtual teams can be overcome with the help of computer-mediated communication (CMC). Virtual teams are becoming a more common way of working in many global companies. In order for virtual teams to reach their maximum potential, effective asynchronous and synchronous methods for communication are needed. The thesis covers communication in virtual teams, as well as leadership and trust building in virtual environments with the help of CMC. First, the communication challenges in virtual teams are identified using the framework of knowledge sharing barriers in virtual teams by Rosen et al. (2007). Second, leadership and trust in virtual teams are defined in the context of CMC. The performance of virtual teams is evaluated in the case study along these three dimensions. With the help of a case study of two virtual teams, the practical issues related to selecting and implementing communication technologies, as well as to overcoming knowledge sharing barriers, are discussed. The case study involves a complex inter-organisational setting, where four companies work together to maintain a new IT system. The communication difficulties are related to inadequate communication technologies, lack of trust and the undefined relationships of the stakeholders and the team members. As a result, it is suggested that communication technologies are needed to improve virtual team performance but are not solely capable of solving the communication challenges in virtual teams. In addition, suitable leadership and trust between team members are required to improve knowledge sharing and communication in virtual teams.
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces provide web users with online access to myriads of databases on the Web. In order to obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results, a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterization of the deep Web, finding and classifying deep web resources, and querying web databases. Characterizing the deep Web: Though the term deep Web was coined in 2000, which is sufficiently long ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that the surveys of the deep Web existing so far are predominantly based on the study of deep web sites in English. One can then expect that findings from these surveys may be biased, especially owing to a steady increase in non-English web content. In this way, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from the national segment of the Web. Finding deep web resources: The deep Web has been growing at a very fast pace. It has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that the search interfaces to the web databases of interest are already discovered and known to query systems. However, such assumptions do not hold, mostly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web existing so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms. Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user. This is all the more so as the interfaces of conventional search engines are also web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Filling out forms manually is cumbersome and not feasible for complex queries, but such queries are essential for many web searches, especially in the area of e-commerce. In this way, the automation of querying and retrieving data behind search interfaces is desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. Besides, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
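A minimal sketch of the kind of automated form query described above, using the widely available requests library with a hypothetical search interface URL and hypothetical field names; the thesis' own form query language and result-page representation are not reproduced here.

    import re
    import requests  # third-party HTTP client, assumed available

    SEARCH_URL = "http://example.org/books/search"  # hypothetical deep web search interface

    def query_web_database(author: str, year: str):
        """Fill out the search form programmatically and pull result titles from the result page."""
        response = requests.post(SEARCH_URL, data={"author": author, "year": year}, timeout=10)
        response.raise_for_status()
        # Naive extraction for illustration: a real system would use a learned
        # result-page representation rather than a fixed regular expression.
        return re.findall(r'<h3 class="title">(.*?)</h3>', response.text)

    for title in query_web_database("Ukkonen", "1995"):
        print(title)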
Abstract:
This work describes different possibilities for improving the protection and control system of a primary distribution substation. The condition and the main reliability problems of power networks in Russia are described. The work studies the technologies used today in Russia for the protection of distribution networks, together with their disadvantages. The majority of medium voltage networks (6-35 kV) have an isolated neutral point, and there is still no protection available on the market that can estimate the distance to the fault in the case of an earth fault. The thesis analyses methods of earth fault distance calculation. On the basis of computer simulation, the influence of various factors on the calculation accuracy is studied. The practical implementation of the method presupposes the use of a digital relay. The application of digital relays brings numerous opportunities, which are described in this work. The advantages of a system implemented on the basis of the IEC 61850 standard are also examined. Finally, the suitability of modern digital relays from the point of view of the GOST standards is analysed.
Abstract:
The activated sludge process - the main biological technology usually applied in wastewater treatment plants (WWTP) - depends directly on living organisms (microorganisms), and therefore on unforeseen changes produced by them. Good plant operation can be achieved if the supervisory control system is able to react to changes and deviations in the system and can take the necessary actions to restore the system's performance. These decisions are often based both on physical, chemical and microbiological principles (suitable to be modelled by conventional control algorithms) and on some knowledge (suitable to be modelled by knowledge-based systems). But one of the key problems in the design of knowledge-based control systems is the development of an architecture able to manage the different elements of the process efficiently (integrated architecture), to learn from previous cases (specific experimental knowledge) and to acquire the domain knowledge (general expert knowledge). These problems increase when the process belongs to an ill-structured domain and is composed of several complex operational units. Therefore, an integrated and distributed AI architecture seems to be a good choice. This paper proposes an integrated and distributed supervisory multi-level architecture for the supervision of WWTP that overcomes some of the main troubles of classical control techniques and of knowledge-based systems applied to real-world systems.
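A minimal sketch of how general expert knowledge (rules) and specific experimental knowledge (past cases) could be combined in one supervisory decision. The measurement names, thresholds and past case are hypothetical and only illustrate the idea; the distributed multi-level architecture proposed in the paper is not reproduced.

    # Hypothetical measurements from the activated sludge process.
    state = {"dissolved_oxygen": 0.8, "ammonia_out": 6.2, "sludge_volume_index": 180}

    def rule_based_advice(s):
        """General expert knowledge encoded as simple rules (illustrative thresholds only)."""
        if s["dissolved_oxygen"] < 1.0:
            return "increase aeration"
        if s["sludge_volume_index"] > 150:
            return "suspect bulking: check filamentous bacteria"
        return None

    PAST_CASES = [  # specific experimental knowledge: (situation, action that restored performance)
        ({"dissolved_oxygen": 0.7, "ammonia_out": 5.9, "sludge_volume_index": 120},
         "increase aeration and reduce load"),
    ]

    def case_based_advice(s):
        """Retrieve the most similar past case (here, by a simple distance over the measurements)."""
        def distance(case):
            return sum(abs(case[0][k] - s[k]) for k in s)
        return min(PAST_CASES, key=distance)[1] if PAST_CASES else None

    # Rules take priority; the case base is consulted when no rule fires.
    print(rule_based_advice(state) or case_based_advice(state))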
Abstract:
This thesis considers aspects related to the design and standardisation of transmission systems for wireless broadcasting, comprising terrestrial and mobile reception. The purpose is to identify which factors influence the technical decisions and which issues could be better considered in the design process in order to assess different use cases, service scenarios and end-user quality. Further, the necessity of cross-layer optimisation for efficient data transmission is emphasised and means to take this into consideration are suggested. The work is mainly related to terrestrial and mobile digital video broadcasting systems, but many of the findings can be generalised to other transmission systems and design processes. The work has led to three main conclusions. First, it was discovered that there are no sufficiently accurate error criteria for measuring the subjective perceived audiovisual quality that could be utilised in transmission system design. Means for designing new error criteria for mobile TV (television) services are suggested, and similar work related to other services is recommended. Second, it is suggested that in addition to commercial requirements there should be technical requirements setting the framework for the design process of a new transmission system. The technical requirements should include the assessed reception conditions, the technical quality of service and the service functionalities. Reception conditions comprise radio channel models, receiver types and antenna types. Technical quality of service consists of bandwidth, timeliness and reliability. Of these, the thesis focuses on radio channel models and error criteria (reliability) as two of the most important design challenges and provides means to optimise transmission parameters based on them. Third, the thesis argues that the most favourable development for wireless broadcasting would be a single system suitable for all scenarios of wireless broadcasting. It is claimed that there are no major technical obstacles to achieving this and that the recently published second-generation digital terrestrial television broadcasting system provides a good basis. The challenges and opportunities of a universal wireless broadcasting system are discussed mainly from the technical but briefly also from the commercial and regulatory aspects.
Abstract:
Network-centric warfare is the focus of considerable attention in the system development programmes of the defence forces of several countries. The goal of network-centric warfare is to connect all components of the battlefield through a fast data transfer network, aiming at more effective information sharing and, further, at more effective use of resources. Service-oriented architecture (SOA) plays a central role in achieving the goals of network-centric warfare. The need for ever more complex information systems also forces actors in the military environment to look for solutions among ready-made commercial implementations. Networked operations produce an enormous amount of different kinds of situational information. On the basis of this situational information, various situation pictures are formed, which commanders use to support their decision-making. This thesis studies the use of a commercial mashup platform for creating a situation picture. A mashup platform is an information system with which information from several sources can be integrated easily and quickly; it enables the creation of so-called user-defined situation pictures. The thesis concludes that the mashup platform is well suited to this use, particularly in situations requiring rapid solutions. Further research on the subject is needed, because the use of mashup platforms in military information systems has hardly been studied and the topic is relatively new in the research community as well.
Abstract:
As part of the Affective Computing research field, the development of automatic affect recognition systems can enhance human-computer interaction by allowing the creation of interfaces that react to the user's emotional state. To that end, this Master's thesis brings affect recognition to today's most used human-computer interface, the mobile device, by developing a facial expression recognition system able to perform detection under the difficult viewing-angle and illumination conditions that interaction with a mobile device entails. Moreover, this Master's thesis proposes to combine the emotional features detected from the expression with contextual information about the current situation in order to infer a complex and extensive emotional state of the user. Thus, a cognitive computational model of emotion is defined that provides a multicomponential affective state of the user through the integration of the detected emotional features into appraisal processes. In order to account for individual differences in the emotional experience, these processes can be adapted to the culture and personality of the user.
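A minimal sketch of feeding detected expression features together with contextual information into an appraisal-style evaluation. The feature names, weights and appraisal dimensions are hypothetical and only illustrate the integration step, not the cognitive model defined in the thesis.

    def appraise(expression: dict, context: dict) -> dict:
        """Combine facial-expression evidence with context into a multicomponential state."""
        # Detected from the camera (assumed output of the expression recogniser).
        valence = expression.get("smile", 0.0) - expression.get("frown", 0.0)
        # Appraisal of the situation itself (assumed contextual inputs).
        goal_congruence = 1.0 if context.get("task_completed") else -0.5
        coping_potential = 0.8 if context.get("battery_level", 1.0) > 0.2 else 0.2
        return {
            "valence": round(0.6 * valence + 0.4 * goal_congruence, 2),
            "coping_potential": coping_potential,
        }

    print(appraise({"smile": 0.7, "frown": 0.1}, {"task_completed": True, "battery_level": 0.5}))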
Abstract:
To coordinate ambulances for emergency medical services, a multiagent system uses an auction mechanism based on trust. Results of tests using real data show that this system can efficiently assign ambulances to patients, thereby reducing transportation time. Emergency transportation in specialized vehicles is needed when a person's health is at risk of irreparable damage. A patient can't benefit from sophisticated medical treatments and technologies if she or he isn't placed in a proper healthcare center with the appropriate medical team. For example, strokes are neurological emergencies involving a limited amount of time in which treatment measures are effective.
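A minimal sketch of a trust-weighted auction of the kind described: each ambulance bids its estimated arrival time, the coordinator discounts the bid by its trust in that bidder, and the best effective bid wins. The data and the weighting rule are hypothetical, not the mechanism reported in the article.

    # (ambulance id, estimated minutes to the patient, trust in 0..1) -- assumed data
    bids = [("amb-1", 7.0, 0.9), ("amb-2", 5.5, 0.4), ("amb-3", 8.0, 1.0)]

    def effective_cost(bid):
        """Penalise bids from low-trust ambulances, whose time estimates are less reliable."""
        ambulance, minutes, trust = bid
        return minutes / max(trust, 0.1)

    winner = min(bids, key=effective_cost)
    print(f"assign {winner[0]}")  # amb-1: slightly slower than amb-2, but far more trustworthy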
Abstract:
This thesis introduces a real-time simulation environment based on the multibody simulation approach. The environment consists of components that are used in conventional product development, including computer-aided drawing, visualization, dynamic simulation and finite element software architecture, data transfer and haptics. These components are combined to perform as a coupled system on one platform. The environment is used to simulate mobile and industrial machines at different stages of the product lifetime; consequently, the demands of the simulated scenarios vary. In this thesis, the real-time simulation environment based on the multibody approach is used to study the reel mechanism of a paper machine and a gantry crane. These case systems are used to demonstrate the usability of the real-time simulation environment for fault detection purposes and in the context of a training simulator. In order to describe the dynamic performance of a mobile or industrial machine, the nonlinear equations of motion must be defined. In this thesis, the dynamic behaviour of machines is modelled using the multibody simulation approach. A multibody system may consist of rigid and flexible bodies which are joined using kinematic joint constraints, while force components are used to describe the actuators. The strength of multibody dynamics lies in its ability to describe, in a systematic manner, nonlinearities arising from wear of components, friction, large rotations or contact forces. For this reason, the interfaces between subsystems such as the mechanics, hydraulics and control systems of a mechatronic machine can be defined and analyzed in a straightforward manner.
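The constrained equations of motion referred to above can be written, in a standard multibody formulation (not necessarily the exact notation used in the thesis), as a differential-algebraic system:

    \[
    \mathbf{M}(\mathbf{q})\,\ddot{\mathbf{q}} + \boldsymbol{\Phi}_{\mathbf{q}}^{\mathsf{T}}(\mathbf{q})\,\boldsymbol{\lambda} = \mathbf{Q}(\mathbf{q},\dot{\mathbf{q}},t),
    \qquad
    \boldsymbol{\Phi}(\mathbf{q},t) = \mathbf{0},
    \]

where \(\mathbf{q}\) are the generalised coordinates of the bodies, \(\mathbf{M}\) is the mass matrix, \(\boldsymbol{\Phi}\) collects the kinematic joint constraints with Jacobian \(\boldsymbol{\Phi}_{\mathbf{q}}\), \(\boldsymbol{\lambda}\) are the Lagrange multipliers, and \(\mathbf{Q}\) contains the applied, actuator and velocity-dependent forces; in a real-time environment this system is integrated forward at a fixed step while hydraulics and control subsystems exchange forces and states through the interfaces mentioned above.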