944 results for Information users
Abstract:
Article describing the current situation of the Internet, the use of the Internet by sport institutions, and the relationship between the Internet, the Olympic Games, and the Olympic Movement. This paper was presented at the International Symposium on Television in the Olympic Games held in Lausanne in October 1998.
Abstract:
Drawing on social representations theory (SRT), this thesis examines the circulation and integration of scientific information into everyday thinking. As an alternative to traditional approaches to science communication, it considers the transformations between scientific and common-sense discourses as adaptive. Two studies on the circulation of information in the media (Studies 1 and 2) show variations in the themes of the discourses presented to laypersons, and among laypersons' own discourses, according to different sources. Anchoring in prior positioning toward science is then studied for the explanation it provides of the reception and transmission of scientific information into common sense. Anchoring effects in prior attitudes and beliefs are reported in different contexts of circulation of scientific information (Studies 3 to 7), using results from correlational, field, and experimental studies. Overall, this thesis provides arguments for the relevance of SRT to science communication research and suggests theoretical and methodological developments for both domains of research.
Abstract:
Satellites are one of the options being considered for delivering multimedia content and providing Internet access to groups of mobile users. The propagation conditions of the mobile channel mean that, one way or another, quality of service must be guaranteed. This matters even more given that, for Internet access, we cannot tolerate the percentage of data loss that is acceptable when transmitting audio or video (by lowering quality). Among the main alternatives for this kind of environment is packet-level coding. This technique works by adding redundant packets to the transmission, generated by a given algorithm. The receiver can recover the original information as long as it has received a certain number of packets, close to the number of original packets. This mechanism is known as packet-level Forward Error Correction (FEC). This report briefly assesses the existing alternatives and explains some of the most important FEC codes. It then presents a comparative study of some of them: the LDPC (Low Density Parity Check) variants known as LDGM (Low Density Generator Matrix), and the Raptor code.
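The packet-level FEC principle described here can be sketched with the simplest possible code, a single XOR parity packet: real LDGM and Raptor codes generate many parity packets from sparse combinations of the data, but the recovery idea is the same. The packet contents and the loss pattern below are illustrative.

```python
# Minimal packet-level FEC sketch: k data packets plus one XOR parity packet,
# which lets the receiver repair any single lost packet. Illustrative only;
# it is not an LDGM or Raptor code.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(packets):
    """Return the k data packets followed by one XOR parity packet."""
    parity = packets[0]
    for p in packets[1:]:
        parity = xor_bytes(parity, p)
    return list(packets) + [parity]

def recover(received):
    """received: k+1 slots, at most one None (lost packet).
    Returns the k original data packets."""
    lost = [i for i, p in enumerate(received) if p is None]
    if not lost:
        return received[:-1]
    acc = None
    for p in received:
        if p is not None:
            acc = p if acc is None else xor_bytes(acc, p)
    received[lost[0]] = acc        # XOR of all surviving packets = lost packet
    return received[:-1]

data = [b"pkt0", b"pkt1", b"pkt2"]
sent = encode(data)
sent[1] = None                     # simulate losing one packet in transit
assert recover(sent) == data
```

With a single parity packet, reception of any k of the k+1 packets suffices, which is exactly the "enough packets, whichever they are" property the stronger codes generalize.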
Abstract:
This article explores the design and use of portals in a library environment. It discusses the motivations for building portals, as well as their structure and typology. It also examines the user environment in which these portals are developed. It argues that portals provide useful integration and presentation services, but that they should be seen as one component of a broader set of services the library is building in order to bring these useful resources to its users. It also briefly considers the services portals offer: distributed search or metasearch, personalization, requests, OpenURL resolution, alerts, and so on. It then considers the emerging need for directory or registry services for things such as collection and service descriptions, rights and policy data, etc. It discusses the impact of web services and of changing models of research and learning on the supply and use of networked information. Finally, it considers library services as part of an increasingly rich systems environment that includes learning management and courseware systems, campus portals, shared services such as authentication, and other systems and services.
Abstract:
This paper provides evidence on the sources of co-movement in monthly US and UK stock price movements by investigating the role of macroeconomic and financial variables in a bivariate system with time-varying conditional correlations. Cross-country commonality in response is uncovered, with changes in the US Federal Funds rate, UK bond yields and oil prices having similar negative effects in both markets. Other variables also play a role, especially for the UK market. These effects do not, however, explain the marked increase in cross-market correlations observed from around 2000, which we attribute to time variation in the correlations of shocks to these markets. A regime-switching smooth transition model captures this time variation well and shows that the correlations increase dramatically around 1999-2000.
JEL classifications: C32, C51, G15.
Keywords: international stock returns, DCC-GARCH model, smooth transition conditional correlation GARCH model, model evaluation.
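The smooth transition in conditional correlation can be illustrated with a logistic transition function of time, as used in STCC-GARCH-type models; the correlation levels, change point and smoothness parameter below are illustrative assumptions, not estimates from the paper.

```python
# Sketch of a logistic smooth transition in correlation: rho(t) moves from
# rho_low to rho_high around a change point c, with smoothness gamma.
# All parameter values are illustrative.
import math

def stcc_correlation(t, rho_low=0.3, rho_high=0.8, c=2000.0, gamma=4.0):
    """Time-varying correlation via a logistic transition G(t; gamma, c)."""
    g = 1.0 / (1.0 + math.exp(-gamma * (t - c)))
    return rho_low + (rho_high - rho_low) * g

for year in (1995, 1999, 2000, 2001, 2005):
    print(year, round(stcc_correlation(year), 3))
```

With these assumed values the correlation sits near `rho_low` before the late 1990s, crosses the midpoint at the change point `c`, and settles near `rho_high` afterwards, mimicking the sharp rise around 1999-2000 that the paper reports.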
Abstract:
Research report based on a stay at the Équipe de Recherche en Syntaxe et Sémantique of the Université de Toulouse-Le Mirail, France, between July and September 2006. Several online acronym dictionaries exist today, most notably Acronym Finder, Abbreviations.com and Acronyma, all of them devoted mostly to English acronyms. Like paper dictionaries, these resources suffer from obsolescence because of the large number of acronyms coined every day. For example, in 2001 a study by Pustejovsky et al. showed that around 12,000 new acronyms appeared every month in Medline abstracts. These resources are updated through new acronyms submitted by users. This technique, however, has the drawback that editing the information is slow and costly. One example is Abbreviations.com, which in October 2006 had around 100,000 acronyms pending editing and final incorporation. As a solution to this problem, systems have been proposed for automatically detecting and extracting acronyms from corpora. Detection involves two steps: the first is identifying acronyms within a corpus, and the second is disambiguation, that is, selecting the appropriate expanded form of an acronym in a given context. Current acronym detection systems use methods based on patterns, statistics, machine learning, or combinations of these. This study analyzes the main acronym detection and disambiguation systems and the methods they use.
Each is evaluated in terms of performance, measured as precision (the percentage of correct acronyms out of the total number of acronyms extracted by the system) and recall (the percentage of correct acronyms identified by the system out of the total number of acronyms in the corpus). As a result, criteria are presented for the design of a future acronym detection system for Spanish.
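The two evaluation measures defined above, together with a toy pattern-based detector, can be sketched as follows; the regex, the sample text and the gold list are illustrative assumptions, not the systems surveyed in the report.

```python
# A minimal pattern-based acronym detector plus the precision and recall
# measures described above. Illustrative only: real systems combine patterns,
# statistics and machine learning.
import re

def detect_acronyms(text):
    """Find candidate acronyms: runs of 2-6 consecutive uppercase letters."""
    return set(re.findall(r"\b[A-Z]{2,6}\b", text))

def precision_recall(extracted, gold):
    correct = extracted & gold
    precision = len(correct) / len(extracted) if extracted else 0.0
    recall = len(correct) / len(gold) if gold else 0.0
    return precision, recall

text = "The WHO and the EU funded the project; see Pustejovsky et al. in Medline."
gold = {"WHO", "EU", "NATO"}    # assume NATO appears elsewhere in the corpus
found = detect_acronyms(text)
p, r = precision_recall(found, gold)
```

Here the detector extracts two acronyms, both correct (precision 1.0), but misses one of the three gold acronyms (recall 2/3), showing how the two measures pull in different directions.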
Abstract:
OBJECTIVES: To estimate the prevalence of youth who use cannabis but have never been tobacco smokers and to assess the characteristics that differentiate them from those using both substances or neither substance. DESIGN: School survey. SETTING: Postmandatory schools. PARTICIPANTS: A total of 5263 students (2439 females) aged 16 to 20 years divided into cannabis-only smokers (n = 455), cannabis and tobacco smokers (n = 1703), and abstainers (n = 3105). OUTCOME MEASURES: Regular tobacco and cannabis use; and personal, family, academic, and substance use characteristics. RESULTS: Compared with those using both substances, cannabis-only youth were younger (adjusted odds ratio [AOR], 0.82) and more likely to be male (AOR, 2.19), to play sports (AOR, 1.64), to live with both parents (AOR, 1.33), to be students (AOR, 2.56), and to have good grades (AOR, 1.57) and less likely to have been drunk (AOR, 0.55), to have started using cannabis before the age of 15 years (AOR, 0.71), to have used cannabis more than once or twice in the previous month (AOR, 0.64), and to perceive their pubertal timing as early (AOR, 0.59). Compared with abstainers, they were more likely to be male (AOR, 2.10), to have a good relationship with friends (AOR, 1.62), to be sensation seeking (AOR, 1.32), and to practice sports (AOR, 1.37) and less likely to have a good relationship with their parents (AOR, 0.59). They were more likely to attend high school (AOR, 1.43), to skip class (AOR, 2.28), and to have been drunk (AOR, 2.54) or to have used illicit drugs (AOR, 2.28). CONCLUSIONS: Cannabis-only adolescents show better functioning than those who also use tobacco. Compared with abstainers, they are more socially driven and do not seem to have psychosocial problems at a higher rate.
Abstract:
This paper presents a semisupervised support vector machine (SVM) that efficiently integrates the information of both labeled and unlabeled pixels. The method's performance is illustrated on the relevant problem of very high resolution (VHR) image classification of urban areas. The SVM is trained with a linear combination of two kernels: a base kernel working only on labeled examples is deformed by a likelihood kernel encoding similarities between labeled and unlabeled examples. Results obtained on VHR multispectral and hyperspectral images show the relevance of the method in the context of urban image classification. Moreover, its simplicity and the few parameters involved make the method versatile and workable by inexperienced users.
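A rough sketch of the composite-kernel idea, assuming scikit-learn and substituting a simple cluster kernel (built from labeled plus unlabeled pixels together) for the paper's likelihood kernel; the data, the cluster kernel and all parameter values are synthetic illustrations, not the paper's method.

```python
# Composite-kernel semisupervised SVM sketch: a base RBF kernel on labeled
# pixels is combined with a similarity kernel estimated from labeled AND
# unlabeled pixels. The cluster kernel stands in for a likelihood kernel.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_lab = rng.normal(0, 1, (40, 5))          # labeled pixels
y_lab = (X_lab[:, 0] > 0).astype(int)      # synthetic labels
X_unl = rng.normal(0, 1, (200, 5))         # unlabeled pixels

# Cluster structure estimated from ALL pixels (labeled + unlabeled).
km = KMeans(n_clusters=4, n_init=10, random_state=0)
km.fit(np.vstack([X_lab, X_unl]))
lab_clusters = km.predict(X_lab)

def cluster_kernel(a, b):
    """k(x, z) = 1 if x and z fall in the same cluster, else 0."""
    return (a[:, None] == b[None, :]).astype(float)

mu = 0.5                                    # mixing weight between the kernels
K = (mu * rbf_kernel(X_lab, X_lab, gamma=0.5)
     + (1 - mu) * cluster_kernel(lab_clusters, lab_clusters))
clf = SVC(kernel="precomputed").fit(K, y_lab)
train_acc = clf.score(K, y_lab)
```

The single mixing weight `mu` reflects the "few parameters" point in the abstract: the unlabeled information enters only through the second kernel term.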
Abstract:
One of the key problems in conducting surveys is convincing people to participate. However, it is often difficult or impossible to determine why people refuse. Panel surveys provide information from previous waves that can offer valuable clues as to why people refuse to participate. If we are able to anticipate the reasons for refusal, then we may be able to take appropriate measures to encourage potential respondents to participate in the survey. For example, special training could be provided for interviewers on how to convince potential participants to take part. This study examines different influences, as determined from the previous wave, on the refusal reasons given by respondents in the subsequent wave of the telephone Swiss Household Panel. These influences include socio-demography, social inclusion, answer quality, and interviewer assessment of question understanding and of future participation. Generally, coefficients are similar across reasons, and between-respondents effects rather than within-respondents effects are significant. While 'No interest' reasons are easier to predict, the other reasons are more situational. Survey-specific issues are able to distinguish different reasons to some extent.
Abstract:
Therapeutic drug monitoring (TDM) aims to optimize treatments by individualizing dosage regimens based on the measurement of blood concentrations. Dosage individualization to maintain concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculations currently represent the gold standard TDM approach but require computational assistance. In recent decades, computer programs have been developed to assist clinicians in this task. The aim of this survey was to assess and compare computer tools designed to support TDM clinical activities. The literature and the Internet were searched to identify software. All programs were tested on personal computers. Each program was scored against a standardized grid covering pharmacokinetic relevance, user friendliness, computing aspects, interfacing and storage. A weighting factor was applied to each criterion of the grid to account for its relative importance. To assess the robustness of the software, six representative clinical vignettes were processed through each of them. Altogether, 12 software tools were identified, tested and ranked, representing a comprehensive review of the available software. The number of drugs handled varies widely across tools (from two to 180), and eight programs let users add new drug models based on population pharmacokinetic analyses. Bayesian computation to predict dosage adaptation from blood concentrations (a posteriori adjustment) is performed by ten tools, while nine are also able to propose a priori dosage regimens based only on individual patient covariates such as age, sex and bodyweight. Among those applying Bayesian calculation, MM-USC*PACK© uses the non-parametric approach. The top two programs emerging from this benchmark were MwPharm© and TCIWorks. Most other programs evaluated had good potential while being less sophisticated or less user friendly.
Programs vary in complexity and might not fit all healthcare settings. Each software tool must therefore be considered with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast to use for routine activities, including by inexperienced users. Computer-assisted TDM is attracting growing interest and should improve further, especially in terms of information system interfacing, user friendliness, data storage capability and report generation.
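The a posteriori (Bayesian) adjustment described in the survey can be sketched for the simplest possible case, a one-compartment IV bolus model; the model, priors, error magnitudes and every number below are illustrative assumptions, not taken from any of the programs reviewed.

```python
# Bayesian (MAP) dose-individualization sketch for a one-compartment model:
# C(t) = (dose / V) * exp(-(CL / V) * t). One measured level updates an
# assumed lognormal prior on clearance; the MAP clearance then drives a
# new dose proposal. Grid search keeps the sketch dependency-free.
import math

V = 50.0                                   # volume of distribution (L), assumed known
dose, t_obs, c_obs = 500.0, 12.0, 4.2      # given dose (mg), sampling time (h), level (mg/L)

def predicted(cl, t, d):
    return (d / V) * math.exp(-(cl / V) * t)

def log_posterior(cl):
    # Lognormal prior on clearance: median 4 L/h, ~30% CV (assumed).
    lp = -((math.log(cl) - math.log(4.0)) ** 2) / (2 * 0.3 ** 2)
    # Lognormal residual error on the observation, ~20% CV (assumed).
    ll = -((math.log(c_obs) - math.log(predicted(cl, t_obs, dose))) ** 2) / (2 * 0.2 ** 2)
    return lp + ll

# MAP estimate of clearance by grid search over plausible values.
grid = [0.5 + 0.01 * i for i in range(1500)]
cl_map = max(grid, key=log_posterior)

# Dose needed to reach a target level at t_obs with the individualized clearance.
target = 6.0                               # mg/L (assumed target)
new_dose = target * V / math.exp(-(cl_map / V) * t_obs)
```

The prior pulls the estimate toward the population value while the measured level pulls it toward the individually observed kinetics, which is the essence of the a posteriori adjustment the ten Bayesian tools perform.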
Abstract:
It is increasingly common to find a company's shared software and hardware distributed across, and managed by, different servers that serve users what they need only when they request it. This way of distributing information is called centralization. Such a distribution system requires constant maintenance in order to meet all user demands. Thanks to centralization, maintenance becomes relatively simple, since changes, updates or installations of new software only have to be made on the server. It is then important to verify that these server updates will respond correctly when users request them remotely. In this project we have analyzed how the checks needed to ensure the correct operation of remote servers are carried out, considering both the environment in which they are performed and the tools required to do so. To round out the study, we have focused on a particular example of a load test.
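A minimal version of the kind of load test mentioned above might look like this; the local test server stands in for a real remote server, and the worker count, request volume and timeout are arbitrary illustrative values.

```python
# Load-test sketch: N concurrent clients hit a server; we record per-request
# success and latency. The in-process HTTP server is only a stand-in so the
# sketch is self-contained.
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):          # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)   # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

def one_request(_):
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=5) as r:
            ok = (r.status == 200)
    except OSError:
        ok = False
    return ok, time.perf_counter() - start

with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(one_request, range(50)))
server.shutdown()

successes = sum(ok for ok, _ in results)
mean_latency = sum(t for _, t in results) / len(results)
```

In a real test the success rate and latency distribution under increasing concurrency are exactly the signals that tell you whether an updated server will respond correctly when users request it remotely.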
Abstract:
Despite the popularity of auction theoretical thinking, it appears that no one has presented an elementary equilibrium analysis of the first-price sealed-bid auction mechanism under complete information. This paper aims to remedy that omission. We show that the existence of pure strategy undominated Nash equilibria requires that the bidding space not be "too divisible" (that is, a continuum). In fact, when bids must form part of a finite grid, there always exists a "high price equilibrium". However, there might also be "low price equilibria", and when the bidding space is very restrictive the revenue obtained in these "low price equilibria" might be very low. We discuss the properties of the equilibria and an application of auction theoretical thinking in which "low price equilibria" may be relevant.
Keywords: first-price auctions, undominated Nash equilibria.
JEL classification numbers: C72 (Noncooperative Games), D44 (Auctions).
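The dependence of equilibrium on the bid grid can be checked numerically for two bidders; the valuations, the grids and the tie-breaking rule (ties go to the higher-valuation bidder) are assumptions made for illustration, not the paper's exact setup.

```python
# Equilibrium check for a two-bidder first-price sealed-bid auction with
# complete information on a finite bid grid. Valuations and tie-breaking
# are illustrative assumptions.
v = [10, 5]                        # bidder valuations, v[0] > v[1]

def payoffs(b):
    """Higher bid wins and pays its own bid; ties go to bidder 0."""
    if b[0] >= b[1]:
        return [v[0] - b[0], 0]
    return [0, v[1] - b[1]]

def is_nash(b, grid):
    """True iff no bidder has a profitable unilateral deviation on the grid."""
    for i in (0, 1):
        for dev in grid:
            trial = list(b)
            trial[i] = dev
            if payoffs(trial)[i] > payoffs(b)[i]:
                return False
    return True

assert is_nash([5, 5], range(11))       # "high price" equilibrium: both bid v[1]
assert not is_nash([0, 0], range(11))   # zero bids fail when fine deviations exist...
assert is_nash([0, 0], [0, 6])          # ...but survive on a very coarse grid: revenue 0
```

The last line illustrates the paper's point about restrictive bidding spaces: when the only bids are 0 and 6, neither bidder gains by jumping above the low-value bidder's valuation, so a zero-revenue "low price equilibrium" survives.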