986 results for Computer users
Abstract:
This thesis presents a new way to provide location-dependent information to users of wireless networks. The information is delivered to each user without knowing anything about the user's identity. HTTP was chosen as the application-level protocol, which allows the system to deliver information to most users, even though they use widely varying terminal devices. The system operates as an extension to an intercepting web-traffic proxy server. Based on the contents of various databases, the system decides whether or not information is delivered. The system also includes simple software for locating users to the accuracy of a single access point. Although the presented solution aims at providing location-based advertisements, it can easily be adapted to deliver any type of information to users.
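The per-request decision described above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the access-point identifiers, content table, and function names are all hypothetical stand-ins for the system's databases.

```python
# Sketch: an intercepting proxy decides, per request, whether to attach
# location-dependent content. The user is identified only by the access
# point (AP) the request arrived through, never by personal identity.

# Hypothetical content database, keyed by access point
# (location granularity = one access point, as in the thesis).
AP_CONTENT = {
    "ap-17": "Cafe near AP 17: coffee -20% today",
    "ap-42": "Library near AP 42: extended hours this week",
}

def annotate_response(ap_id: str, body: str) -> str:
    """Return the proxied HTTP response body, with a location-dependent
    banner prepended when content is available for the user's AP."""
    message = AP_CONTENT.get(ap_id)
    if message is None:
        return body  # nothing known for this AP: pass the page through
    return f"<div class='loc-info'>{message}</div>" + body

print(annotate_response("ap-17", "<p>page</p>"))
print(annotate_response("ap-99", "<p>page</p>"))
```

Because the lookup key is the access point rather than a user identifier, the delivery stays anonymous, matching the abstract's design constraint.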
Abstract:
Introduction: Frequent emergency department (ED) users are often vulnerable patients with many risk factors affecting their quality of life (QoL). The aim of this study was to examine to what extent a case management intervention improved frequent ED users' QoL. Methods: Data were part of a randomized controlled trial designed to improve frequent ED users' QoL at the Lausanne University Hospital. A total of 194 frequent ED users (≥ 5 attendances during the previous 12 months; ≥ 18 years of age) were randomly assigned to the control or the intervention group. Participants in the intervention group received a case management intervention (i.e., counseling and assistance concerning social determinants of health, substance-use disorders, and access to the health-care system). QoL was evaluated using the WHOQOL-BREF at baseline and twelve months later. Four dimensions of QoL were retained: physical health, psychological health, social relationships, and environment, with scores ranging from 0 (low QoL) to 100 (high QoL).
Abstract:
Background: In most emergency departments (EDs) in developed countries, a subset of patients visits the ED frequently. Despite their small numbers, these patients account for a disproportionately high share of all ED visits and use a significant proportion of healthcare resources, placing a heavy economic burden on hospital and healthcare-system budgets. To improve the management of these patients, the University Hospital of Lausanne, Switzerland implemented a case management (CM) intervention between May 2012 and July 2013. In this randomized controlled trial, 250 frequent ED users (> 5 visits during the previous 12 months) were allocated to either the CM group or the standard ED care (SC) group and followed up for 12 months. The primary result of the CM intervention was a significant reduction in ED visits. The present study examined whether the CM intervention also reduced the costs generated by frequent ED users, not only from the hospital perspective but also from the healthcare-system perspective. Methods: Cost data were obtained from the hospital's analytical accounting system and from health insurers. Multivariate linear models including a fixed effect "group", socio-demographic characteristics, and health-related variables were run.
Abstract:
Introduction and Aims. About 20% of cannabis consumers report not smoking cigarettes. Studies comparing users of both cannabis and cigarettes, cigarette-only smokers, and cannabis users who do not smoke cigarettes (CNSs) have shown that CNSs have better outcomes across a range of indicators. We therefore conducted a qualitative study to determine why CNSs did not smoke cigarettes and how they managed to resist cigarette smoking, in order to better inform prevention efforts. Design and Methods. We conducted five focus groups (FGs) with a total of 19 CNSs aged 16 to 25. A narrative analysis of the FGs was conducted using qualitative analysis software. Results. CNSs' non-smoking choice was rooted in a negative opinion of cigarettes and a harm-reduction strategy. They were unique cases within their peer groups; there were no groups made up solely of CNSs. All participants were confronted with the mulling paradox. Discussion and Conclusions. While tobacco-use prevention seems to have been successful, CNSs need to be informed of the harmful consequences of chronic cannabis use. Given their habit of adding tobacco to cannabis, CNSs need to be alerted that they may be nicotine dependent even though they do not smoke tobacco on its own. This exploratory study brings essential insight into this specific population of cannabis consumers, which future research should continue to develop.
Abstract:
The main objective of this article is to analyze the social game CityVille, created by the company Zynga, one of the latest Facebook trends available on the web. The choice of CityVille as the corpus stems from an interest in understanding why this game has achieved such success and is currently one of the most prominent and widely adopted social/digital games on the social network Facebook. The article thus seeks to understand how social games have evolved and transformed communication among network users. Social networks have become increasingly important and are bound up with people's lives. With the development of digital language, the way people interact is transformed, as they communicate through the computer in real time. The study carries out a plural analysis of CityVille, highlighting distinct points of view on the social game. We propose to examine the relationships between use and users, and between the technology and the content of the game. In the conclusions we set out the possible directions for the future of social games on the web.
Abstract:
Market segmentation first emerged in the 1950s and has since been one of the basic concepts of marketing. Most segmentation research, however, has focused on consumer markets, with business and industrial market segmentation receiving less attention. The goal of this study is to create a segmentation model for industrial markets from the perspective of a provider of IT products and services. The purpose is to determine whether the case company's current customer databases enable effective segmentation, to identify suitable segmentation criteria, and to assess whether and how the databases should be developed to enable more effective segmentation. The aim is to create a single model shared by the different business units; the objectives of the different units must therefore be taken into account to avoid conflicts of interest. The research methodology is a case study. Both secondary and primary sources were used, such as the case company's own databases and interviews. The starting point of the study was the research problem: Can database-driven segmentation be used for profitable customer relationship management in the SME sector? The goal is to create a segmentation model that exploits the data in the databases without compromising the requirements of effective and profitable segmentation. The theoretical part examines segmentation in general, with an emphasis on industrial market segmentation, aiming to form a clear picture of the different approaches to the topic and to deepen the view of the most important theories. The analysis of the databases revealed clear deficiencies in the customer data. Basic contact information is available, but data usable for segmentation is very limited. Data acquisition from resellers and wholesalers should be improved in order to obtain end-customer data. Segmentation based on the current data relies mainly on secondary data such as industry and company size.
Even these data are not available for every company in the database.
Abstract:
The ever-increasing number of mobile phone users and the development of the Internet into a common source of information and entertainment have created a need for a service that connects a mobile workstation to computer networks. GPRS is a new technology that offers a faster, more efficient, and more economical connection to packet data networks, such as the Internet and intranets, than existing mobile networks (e.g. NMT and GSM). The goal of this thesis was to implement the communication drivers needed for testing the GPRS Packet Control Unit (PCU) in a workstation environment. Real mobile networks are too expensive, and they do not provide sufficient log output, to be used for GPRS testing in the early stages of software development. For this reason, PCU software testing is performed in a more flexible and more easily managed environment that does not impose hard real-time requirements. The new operating environment and connection medium require a new implementation of the communication drivers, the parts of the program that handle the connections between the PCU and the other units of the GPRS network. The result of this work is the workstation versions of the required communication drivers. The thesis examines different data transfer methods and protocols from the point of view of the software under test, the implemented driver, and testing. For each driver, the interface it implements and its degree of implementation are presented, i.e. which functions have been implemented and which have been left out. The structure and operation of the drivers are explained to the extent relevant to the operation of the program.
Abstract:
Objective: We propose and validate a computer-aided system to measure three different mandibular indexes: cortical width, panoramic mandibular index, and mandibular alveolar bone resorption index. Study Design: Repeatability and reproducibility of the measurements are analyzed and compared to manual estimation of the same indexes. Results: The proposed computerized system exhibits superior repeatability and reproducibility compared to standard manual methods. Moreover, the time required to perform the measurements with the proposed method is negligible compared to performing them manually. Conclusions: We have proposed a very user-friendly computerized method to measure three different morphometric mandibular indexes. From the results we can conclude that the system provides a practical way to perform these measurements. It does not require an expert examiner and takes no more than 16 seconds per analysis. Thus, it may be suitable for diagnosing osteoporosis using dental panoramic radiographs.
Abstract:
We present an algorithm for the computation of reducible invariant tori of discrete dynamical systems that is suitable for tori of dimensions larger than 1. It is based on a quadratically convergent scheme that approximates, at the same time, the Fourier series of the torus, its Floquet transformation, and its Floquet matrix. The Floquet matrix describes the linearization of the dynamics around the torus and, hence, its linear stability. The algorithm presents a high degree of parallelism, and the computational effort grows linearly with the number of Fourier modes needed to represent the solution. For these reasons it is a very good option to compute quasi-periodic solutions with several basic frequencies. The paper includes some examples (flows) to show the efficiency of the method in a parallel computer. In these flows we compute invariant tori of dimensions up to 5, by taking suitable sections.
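The objects the algorithm computes simultaneously can be summarized in the standard notation for reducible invariant tori (the symbols below follow common convention and are not necessarily the paper's own):

```latex
% Invariance: the torus parameterization K, with rotation vector \omega,
% satisfies the invariance equation for the map F and is represented by
% its Fourier series:
F(K(\theta)) = K(\theta + \omega), \qquad
K : \mathbb{T}^r \to \mathbb{R}^n, \qquad
K(\theta) = \sum_{k \in \mathbb{Z}^r} c_k \, e^{i \langle k, \theta \rangle}.

% Reducibility: the Floquet transformation P(\theta) conjugates the
% linearization of the dynamics around the torus to a constant Floquet
% matrix \Lambda, whose eigenvalues give the linear stability:
P(\theta + \omega)^{-1} \, DF(K(\theta)) \, P(\theta) = \Lambda.
```

The quadratically convergent scheme mentioned in the abstract refines the Fourier coefficients $c_k$, the transformation $P$, and the matrix $\Lambda$ together at each iteration.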
Abstract:
Several classes of recreational and prescription drugs have been associated with an increased risk of cardiovascular disease and the occurrence of arrhythmias, which may be involved in sudden deaths in chronic users even at therapeutic doses. The study presented herein focuses on pathological changes involving the heart that may be caused by selective serotonin reuptake inhibitor use, and their possible role in the occurrence of sudden cardiac death. A total of 40 cases were included in the study and divided evenly into 2 groups: 20 cases of patients treated with selective serotonin reuptake inhibitors and 20 cases of sudden death in patients not receiving any drug treatment. The first group included 16 patients treated with citalopram and 4 with sertraline. Autopsies, histology, biochemistry, and toxicology were performed in all cases. Pathological changes in selective serotonin reuptake inhibitor users consisted of various degrees of interstitial and perivascular fibrosis, as well as a small degree of perineural fibrosis, within the myocardium of the left ventricle. Within the limits of the small number of investigated cases, the results of this study seem to confirm former observations on this topic, suggesting that selective serotonin reuptake inhibitors may play a potential, causative role in the pathogenesis of sudden deaths in chronic users even at therapeutic concentrations.
Abstract:
This thesis seeks to answer whether communication challenges in virtual teams can be overcome with the help of computer-mediated communication (CMC). Virtual teams are becoming an increasingly common way of working in many global companies. For virtual teams to reach their maximum potential, effective asynchronous and synchronous methods for communication are needed. The thesis covers communication in virtual teams, as well as leadership and trust building in virtual environments with the help of CMC. First, the communication challenges in virtual teams are identified using the framework of knowledge-sharing barriers in virtual teams by Rosen et al. (2007). Secondly, leadership and trust in virtual teams are defined in the context of CMC. The performance of virtual teams is evaluated in the case study along these three dimensions. Through a case study of two virtual teams, the practical issues related to selecting and implementing communication technologies, as well as overcoming knowledge-sharing barriers, are discussed. The case study involves a complex inter-organisational setting in which four companies work together to maintain a new IT system. The communication difficulties are related to inadequate communication technologies, lack of trust, and the undefined relationships of the stakeholders and the team members. As a result, it is suggested that communication technologies are needed to improve virtual team performance, but they are not solely capable of solving the communication challenges in virtual teams. In addition, suitable leadership and trust between team members are required to improve knowledge sharing and communication in virtual teams.
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces provide web users with online access to myriad databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is a huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterization of the deep Web, finding and classifying deep web resources, and querying web databases. Characterizing the deep Web: Though the term deep Web was coined in 2000, which is long ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that existing surveys of the deep Web are predominantly based on studies of deep web sites in English. One can therefore expect that the findings of these surveys may be biased, especially owing to the steady increase in non-English web content.
Surveying national segments of the deep Web is thus of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from that national segment. Finding deep web resources: The deep Web has been growing at a very fast pace; it has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that search interfaces to the web databases of interest have already been discovered and are known to query systems. However, such assumptions mostly do not hold, because of the sheer scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms. Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so as the interfaces of conventional search engines are also web forms.
At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. Automating the querying and retrieval of data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts, and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
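The field-label extraction step discussed above can be illustrated with a simplified, standard-library-only sketch (the real system also handles JavaScript-rich and non-HTML forms; the HTML snippet and class name here are illustrative, not from the thesis):

```python
from html.parser import HTMLParser

class FormFieldParser(HTMLParser):
    """Collect a search form's input fields and their <label> texts,
    pairing each field with a label via the label's 'for' attribute."""

    def __init__(self):
        super().__init__()
        self.fields = []        # list of (field name, field id) pairs
        self._label_for = None  # 'for' attribute of the label being read
        self._labels = {}       # field id -> accumulated label text

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "label":
            self._label_for = attrs.get("for")
            self._labels.setdefault(self._label_for, "")
        elif tag in ("input", "select", "textarea") and attrs.get("name"):
            # unnamed controls (e.g. bare submit buttons) are skipped
            self.fields.append((attrs["name"], attrs.get("id")))

    def handle_data(self, data):
        if self._label_for is not None:
            self._labels[self._label_for] += data.strip()

    def handle_endtag(self, tag):
        if tag == "label":
            self._label_for = None

    def labeled_fields(self):
        """Return (field name, label text) pairs; unlabeled fields get ''."""
        return [(name, self._labels.get(fid, "")) for name, fid in self.fields]

p = FormFieldParser()
p.feed('<form><label for="q">Title</label><input name="query" id="q">'
       '<input name="year" id="y"><input type="submit"></form>')
print(p.labeled_fields())  # [('query', 'Title'), ('year', '')]
```

Associating labels with fields is exactly the kind of per-interface metadata a form query language needs before input values can be supplied automatically; real pages also require handling labels inferred from layout rather than explicit `for` attributes.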