958 results for Linked Data Open Data Linked Open Data RDF dataset Linked Data Browser Application-Oriented Indexes


Relevance:

100.00%

Publisher:

Abstract:

The Kinect camera, developed by PrimeSense in collaboration with Microsoft for the Xbox console, provides depth images thanks to an infrared sensor. The device also includes an RGB camera that captures colour images, together with an array of microphones arranged so that the direction a sound comes from can be determined. Kinect was initially created for home entertainment, but its low price (compared with other cameras of similar capabilities) and its acceptance by developers have opened up its possibilities. The objective of this project is to obtain, from these data, kinematic variables such as the position, velocity and acceleration of certain control points of an individual's body (head, neck, shoulders, elbows, wrists, hips, knees and ankles), from which movement patterns can be extracted. This requires a cross-platform middleware under a free (GNU) distribution. Processing, an open-source environment created for design projects, was used as the IDE, together with the SimpleOpenNI wrapper, developed by students and researchers working with Kinect. This makes it possible to dispense with the Microsoft SDK, which is proprietary and requires its operating system, Windows; with these tools a solution viable on several operating systems is achieved. The methods and facilities of the object-oriented Java language (from which Processing inherits) were used, and a client-server design was adopted to give the project scalability. The result of the project is useful in applications for populations at risk of exclusion (such as the autistic spectrum), in remote diagnosis, and in general in settings where habits and behaviours must be studied from human movement. The project is intended to be continued through further applications that analyse the data it provides.
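The kinematic-variable extraction described above can be sketched with a simple finite-difference scheme: given timestamped 3-D positions of one control point (as reported by a skeleton tracker such as SimpleOpenNI), velocity and acceleration follow from successive differences. The function and sample values below are illustrative, not taken from the project.

```python
def finite_differences(samples):
    """samples: list of (t, (x, y, z)) position samples ordered by time.
    Returns per-interval velocity and acceleration estimates."""
    velocities = []
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        dt = t1 - t0
        velocities.append((t1, tuple((b - a) / dt for a, b in zip(p0, p1))))
    accelerations = []
    for (t0, v0), (t1, v1) in zip(velocities, velocities[1:]):
        dt = t1 - t0
        accelerations.append((t1, tuple((b - a) / dt for a, b in zip(v0, v1))))
    return velocities, accelerations

# A joint moving at a constant 1 m/s along x: velocity is constant, acceleration zero.
samples = [(0.0, (0.0, 0.0, 0.0)), (1.0, (1.0, 0.0, 0.0)), (2.0, (2.0, 0.0, 0.0))]
vel, acc = finite_differences(samples)
```

A real pipeline would smooth the position stream first, since depth-sensor joint estimates are noisy.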

Relevance:

100.00%

Publisher:

Abstract:

A Service Business Framework consists of a set of interrelated components that support the management of business services across their whole lifecycle, from creation, publication, discovery and comparison to monetisation (possibly including revenue settlement and sharing). In this regard, the FIWARE Business Framework aims to let users of the FIWARE platform enhance their solutions with search, discovery, comparison, monetisation, and revenue settlement and sharing features. To achieve this objective, the FIWARE Business Framework provides the open specification and APIs of a comprehensive set of components (called Generic Enablers in FIWARE terminology), along with a reference implementation of these APIs, that can easily be integrated with existing systems in order to create value-added applications. At the beginning of this Master's thesis, the FIWARE Business Framework was not mature enough to cover the requirements of its users: it provided models that were too general and left some key functionality to be implemented by those users. To deal with these issues, the main objective of this Master's thesis has been to enhance and evolve the FIWARE Business Framework to meet the demands of its users. To this end, the framework was evaluated using feedback from FIWARE users, mainly SMEs and start-ups actually using it in their solutions, in order to determine a list of requirements and to design a roadmap for the evolution and improvement of the existing framework over the following six months. The issues detected were then tackled one by one, in each case providing a solution able to cover the users' requirements. Finally, the results of the project were evaluated by integrating the evolved FIWARE Business Framework with an existing system in charge of the management of energy-consumption data, building what has been called the Energy Consumption Data Market. This has also demonstrated the usefulness of the proposed business framework for evolving CKAN, a renowned open data platform, into an actual, fully-fledged data market.

Relevance:

100.00%

Publisher:

Abstract:

Part of current biomedical research focuses on the analysis of heterogeneous data, which may differ in origin, structure and semantics. A great deal of data of interest to researchers is held in public databases, which gather information from different sources and make it freely available to the community. To homogenise these public data sources with private ones, there are various tools and techniques that automate the integration of heterogeneous data. The Biomedical Informatics Group (GIB) [1] of the Universidad Politécnica de Madrid collaborates in the European project P-medicine [2], whose purpose is to develop an infrastructure that facilitates the transition from current medical practice to personalised medicine. One of the tasks assigned to the group within P-medicine is to build tools that help users integrate data contained in heterogeneous information sources. Some of these sources are public biomedical databases hosted on the NCBI platform [3] (National Center for Biotechnology Information). One of the tools the group is developing to integrate data sources is the Ontology Annotator. In one of its phases, the user's job is to retrieve information from a public database and select the relevant results manually. Automating this search-and-selection step raises two needs: on the one hand, there is great interest in generating queries that lead to results as precise and exact as possible; on the other, there is great interest in extracting relevant information from large numbers of documents, which requires systems that analyse and weigh the data characterising them. In the artificial intelligence field of computer science, within the branch of information retrieval, there are several studies on query expansion from relevance feedback that could help solve this problem. These studies focus on techniques that reformulate or expand the initial query using, as feedback, the results that were relevant to the user in a first pass, so that the new result set lies closer to what the user really wants. The goal of this final-year project is the study, implementation and experimental evaluation of methods that automate the extraction of significant information from documents and use it to expand or reformulate queries, thereby improving the precision and ranking of the associated results. These methods will be integrated into the Ontology Annotator tool and targeted at the PubMed data source [4].
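The abstract does not name the exact expansion method, but a classic technique for query expansion from relevance feedback is the Rocchio algorithm; the sketch below runs it over toy term-weight vectors, with illustrative parameter values.

```python
from collections import defaultdict

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Expand a query vector towards relevant documents and away from
    non-relevant ones; all vectors are {term: weight} dicts."""
    expanded = defaultdict(float)
    for term, w in query.items():
        expanded[term] += alpha * w
    for doc in relevant:
        for term, w in doc.items():
            expanded[term] += beta * w / len(relevant)
    for doc in nonrelevant:
        for term, w in doc.items():
            expanded[term] -= gamma * w / len(nonrelevant)
    # keep only positively weighted terms
    return {t: w for t, w in expanded.items() if w > 0}

# One document marked relevant pulls a new term ("protein") into the query.
new_q = rocchio({"gene": 1.0}, relevant=[{"gene": 0.5, "protein": 0.8}], nonrelevant=[])
```

In a real deployment the document vectors would carry tf-idf weights computed from the retrieved PubMed records.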

Relevance:

100.00%

Publisher:

Abstract:

This study investigated the adoption by educational institutions of the resources and tools available in social digital media, which allow synchronous and asynchronous communication and encourage greater interactivity, dialogue and collaboration among the actors involved in the teaching-learning process of distance education. To that end, it examined how distance-education (EAD) students react to the new possibilities offered by the advent of the new information and communication technologies (NTICs). The study focused on students who, at the time of the research, were in the second two-month term of the distance-learning technology degree courses in Marketing, Logistics, Human Resources and Commercial Management at the Anhanguera and Metodista universities, at the São Caetano do Sul and Mauá campuses. For the sample, a non-probabilistic, intentional model was chosen, with the criterion that the selected students knew the resources and tools used in the institutions' social digital media. As a measurement instrument, a Likert-type questionnaire with 7 dimensions and 32 statements was used, together with the focus-group interview technique, in a total of 5 sessions. The study historically reviews the movement of new technologies, above all digital ones, as well as distance education from the last century to the present, and considers how far government actions have supported the technological transformations that directly and indirectly affect education. Another theme discussed in this work is the Knowledge Society, which plays a highly important role in the development and advancement of the new information and communication technologies. The analysis of the results was guided by Habermas's Theory of Communicative Action, which helped to understand and indicate the importance of the concepts of the Lifeworld and the System within the current reality of distance education and the new technologies. 
The data showed that the students of these two universities are mostly married, have children and intend their studies to produce a qualitative leap in their careers; they access the Internet from home, from work and at the universities where they study. They know and take advantage of the social digital media (MDS) offered by their universities and use them to communicate with classmates and teachers, but they interact better with the tools offered by the market, available on the Internet, and make very significant use of e-mail to communicate with classmates. The research also indicated that the surveyed public, belonging to the generation of digital immigrants, adapts to and uses the tools offered by both universities reasonably well. However, it also showed that the digital resources and tools offered by the institutions, which can facilitate communication within educational institutions, although known, still fall short of the possibilities offered by the new digital technologies used in the market. For most respondents, the social digital media are very important and facilitate communication within the teaching-learning relationship; they point out, however, that the dialogue could be more dynamic if teachers or distance-education tutors interacted more online. Finally, the research noted that the social digital media and tools offered by the market and used outside the distance-education network, being more interactive and intuitive and linked to a more relaxed environment, allow greater collaboration and much more agile communication than those experienced within it.

Relevance:

100.00%

Publisher:

Abstract:

The flow behaviour of shallow gas-fluidised beds was studied experimentally using a rotational viscometer and an inclined open channel. Initially, tests were carried out with the viscometer in order to establish qualitative trends in the flow properties of a variety of materials over a wide range of fluidising conditions, and a technique was developed which enabled quantitative viscosity data to be extracted from the experimental results. The flow properties were found to be sensitive to the size, size range and density of the fluidised material, the type of distributor used, and the moisture content of the fluidising gas. Tests in beds up to 120 mm deep showed that the fluidity of the bed improves as depth is reduced, and indicated a range of flow behaviour from shear-thinning to Newtonian, depending chiefly on fluidising velocity. Later, an apparatus was built which provided a steady, continuous flow of fluidised material down an inclined open channel 3 m long by 0.15 m square, at mass flowrates of up to 10 kg/s (35 ton/hr). This facility has enabled data to be obtained that are of practical value in industrial applications, which is otherwise difficult in view of the present limited understanding of the true mechanism of fluidised flow. A correlation has been devised, based on an analogy with laminar liquid flow, which describes the channel flow behaviour with reasonable accuracy over the whole range of shear rates used. The channel results indicated that at low fluidising velocities the flow was adversely affected by the settlement of a stagnant layer of particles onto the distributor, which gave rise to increased flow resistance; conversely, at higher fluidising velocities the resistance at the distributor appeared to be less than at the walls. 
In view of this, and also because of the disparity in shear rates between the two types of apparatus, it is not yet possible to predict exactly the flow behaviour in an open channel from small-scale viscometer tests.

Relevance:

100.00%

Publisher:

Abstract:

Data on amphibians, reptiles and birds surveyed from February 2016 to May 2016 in the UNESCO Sheka forest biosphere reserve are provided as an online open-access data file.

Relevance:

100.00%

Publisher:

Abstract:

Background: Patient safety is concerned with preventable harm in healthcare, a subject that became a focus for study in the UK in the late 1990s. How to improve patient safety presented both a practical and a research challenge in the early 2000s, leading to the eleven publications presented in this thesis. Research question: The overarching research question was: what are the key organisational and systems factors that impact on patient safety, and how can these best be researched? Methods: Research was conducted in over 40 acute care organisations in the UK and Europe between 2006 and 2013. The approaches included surveys, interviews, documentary analysis and non-participant observation; two studies were longitudinal. Results: The findings reveal the nature and extent of poor systems reliability and its effect on patient safety; the factors underpinning cases of patient harm; the cultural issues impacting on safety and quality; and the importance of a common language for quality and safety across an organisation. Across the publications, nine key organisational and systems factors emerged as important for patient safety improvement, including leadership stability; data infrastructure; measurement capability; standardisation of clinical systems; and the creation of an open and fair collective culture in which poor safety is challenged. Conclusions and contribution to knowledge: The research presented in the publications has provided a more complete understanding of the organisational and systems factors underpinning safer healthcare. Lessons are drawn to inform methods for future research, including how to define success in patient safety improvement studies; how to take into account external influences during longitudinal studies; and how to confirm meaning in multi-language research. 
Finally, recommendations for future research include assessing the support required to maintain a patient safety focus during periods of major change or austerity; the skills needed by healthcare leaders; and the implications of poor data infrastructure.

Relevance:

100.00%

Publisher:

Abstract:

The development of ICT infrastructures has facilitated the emergence, over the last few years, of new paradigms for looking at society and the environment. Participatory environmental sensing, i.e. directly involving citizens in environmental monitoring, is one example, which is hoped to encourage learning and enhance awareness of environmental issues. In this paper, an analysis of the behaviour of individuals involved in noise sensing is presented. Citizens have been involved in noise-measuring activities through the WideNoise smartphone application, which has been designed to record both objective (noise samples) and subjective (opinions, feelings) data. The application is free for anyone to use and has been widely employed worldwide; in addition, several test cases have been organised in European countries. Based on the information submitted by users, an analysis of emerging awareness and learning is performed. The data show that changes in the way the environment is perceived do appear after repeated usage of the application. Specifically, users learn how to recognise the different noise levels they are exposed to. Additionally, the subjective data collected indicate increased user involvement over time and a categorisation effect between pleasant and less pleasant environments.

Relevance:

100.00%

Publisher:

Abstract:

The development of ribosome profiling (RiboSeq) has revolutionized functional genomics. RiboSeq is based on capturing and sequencing the mRNA fragments enclosed within the translating ribosome, and it thereby provides a snapshot of ribosome positions at the transcriptome-wide level. Although the method is predominantly used for analysis of differential gene expression and discovery of novel translated ORFs, RiboSeq data can also be a rich source of information about the molecular mechanisms of polypeptide synthesis and translational control. This review will focus on how recent findings made with RiboSeq have revealed important details of the molecular mechanisms of translation in eukaryotes. These include the sensitivity of mRNA translation to drugs affecting translation initiation and elongation, the roles of upstream ORFs in the response to stress, the dynamics of elongation and termination, as well as details of intrinsic ribosome behavior on the mRNA after translation termination. As the RiboSeq method is still at a relatively early stage, we will also discuss the implications of RiboSeq artifacts on data interpretation.
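As a concrete illustration of the kind of positional information RiboSeq yields, per-codon ribosome occupancy can be estimated by shifting each read's 5' end by a P-site offset. The fixed 12-nt offset below is a common convention for roughly 28-nt fragments, not a value from this review; real pipelines calibrate the offset per read length.

```python
from collections import Counter

def codon_occupancy(read_5p_positions, cds_start, cds_len, p_offset=12):
    """Map RiboSeq read 5' ends (0-based transcript coordinates) to P-site
    codons within a CDS; returns a Counter {codon_index: read count}."""
    counts = Counter()
    for pos in read_5p_positions:
        p_site = pos + p_offset          # assumed fixed P-site offset
        if cds_start <= p_site < cds_start + cds_len:
            counts[(p_site - cds_start) // 3] += 1
    return counts

# Three reads whose P-sites fall on codons 0, 0 and 1 of a CDS starting at 50.
occ = codon_occupancy([38, 39, 41], cds_start=50, cds_len=300)
```

Aggregating such profiles over many transcripts gives the metagene plots commonly used to study elongation and termination dynamics.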

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a focused Crawler to get Semantic web Resources (CSR). Structured web data are available in formats such as the Extensible Markup Language (XML), the Resource Description Framework (RDF) and the Web Ontology Language (OWL), which can be used for processing. One of the main challenges is that manually searching for and downloading semantic web resources consumes a great deal of time. Our research work proposes a focused crawler that downloads these resources automatically and stores them on disk, building a collection to be used for data processing. CSR consists of three layers: (a) the user interface layer, (b) the focused crawler layer and (c) the base crawler layer, and uses the Shark-Search method as its selection policy. CSR was evaluated in two experiments. The first started on 15 December 2012 at 7:11 am and ended on 16 December 2012 at 4:01 am, obtaining 448,123,537 bytes of data; CSR terminated by itself after analysing 80,4375 seeds with unlimited depth, and retrieved 16,576 semantic resource files, of which 89% were RDF, 10% XML and 1% OWL. The second experiment was based on the Web Data Commons work of the Data and Web Science research group at the University of Mannheim and the Institute AIFB at the Karlsruhe Institute of Technology; it ran from 4:46 am on 2 June 2013 to 1:37 am on 9 June 2013. After 162.51 hours of execution the result was 285,279 semantic resources, dominated by XML resources at 99%, with OWL and RDF at 1% each.
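The Shark-Search selection policy is, at its core, a best-first strategy in which child URLs inherit a decayed share of their parent's relevance. The skeleton below captures only that idea: a toy link graph stands in for a real fetch function, the relevance function and decay factor are illustrative, and the full Shark-Search scoring formula (anchor text, context) is not reproduced.

```python
import heapq

def crawl(seed, fetch, relevance, decay=0.5, max_pages=100):
    """Best-first crawl: always expand the highest-scoring known URL, where a
    child's score is its own relevance plus a decayed share of its parent's."""
    frontier = [(-relevance(seed), seed)]        # heapq is a min-heap
    seen, visited = {seed}, []
    while frontier and len(visited) < max_pages:
        neg_score, url = heapq.heappop(frontier)
        visited.append(url)
        for child in fetch(url):
            if child not in seen:
                seen.add(child)
                score = relevance(child) + decay * -neg_score
                heapq.heappush(frontier, (-score, child))
    return visited

# Toy link graph standing in for the web; ".rdf" pages are the target topic.
graph = {"seed": ["a.rdf", "b.html"], "a.rdf": ["c.rdf"], "b.html": [], "c.rdf": []}
order = crawl("seed", fetch=lambda u: graph.get(u, []),
              relevance=lambda u: 1.0 if u.endswith(".rdf") else 0.0)
```

Note how the on-topic pages are visited before the off-topic one, which is exactly what makes the crawler "focused".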

Relevance:

100.00%

Publisher:

Abstract:

A proportion of melanoma-prone individuals in both familial and non-familial contexts has been shown to carry inactivating mutations in either CDKN2A or, rarely, CDK4. CDKN2A is a complex locus that encodes two unrelated proteins from alternatively spliced transcripts that are read in different frames. The alpha transcript (exons 1alpha, 2, and 3) produces the p16INK4A cyclin-dependent kinase inhibitor, while the beta transcript (exons 1beta and 2) is translated as p14ARF, a stabilizing factor of p53 levels through binding to MDM2. Mutations in exon 2 can impair both polypeptides, and insertions and deletions in exons 1alpha, 1beta, and 2 can theoretically generate p16INK4A-p14ARF fusion proteins. No online database currently takes into account all the consequences of these genotypes, a situation compounded by some problematic previous annotations of CDKN2A-related sequences and descriptions of their mutations. As an initiative of the international Melanoma Genetics Consortium, we have therefore established a database of germline variants observed in all loci implicated in familial melanoma susceptibility. Such a comprehensive, publicly accessible database is an essential foundation for research on melanoma susceptibility and its clinical application. Our database serves two types of data as defined by HUGO. The core dataset includes the nucleotide variants at the genomic and transcript levels, amino acid variants, and citations. The ancillary dataset includes keyword descriptions of events at the transcription and translation levels and epidemiological data. The application that handles users' queries was designed in the model-view-controller architecture and was implemented in Java. The object-relational database schema was deduced using functional dependency analysis. We hereby present our first functional prototype of eMelanoBase. The service is accessible via the URL www.wmi.usyd.edu.au:8080/melanoma.html.

Relevance:

100.00%

Publisher:

Abstract:

This paper presents the characterization of high voltage (HV) electric power consumers based on a data clustering approach. The typical load profiles (TLP) are obtained by selecting the best partition of a power consumption database from among a pool of partitions produced by several clustering algorithms, with the choice of the best partition supported by several cluster validity indices. The proposed data-mining (DM) methodology, which includes all the steps of the knowledge discovery in databases (KDD) process, incorporates an automatic data-treatment application that preprocesses the initial database, saving time and improving accuracy during this phase. These methods are intended to be used in a smart grid environment to extract useful knowledge about customers' consumption behavior. To validate our approach, a case study with a real database of 185 HV consumers was used.
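A toy version of the clustering step, assuming k-means as one of the algorithms in the pool (the paper's actual algorithm pool and validity indices are not reproduced): daily load profiles, shortened here to 4 samples per day, are grouped and each cluster centroid is taken as a typical load profile. Initial centroids are fixed to keep the sketch deterministic.

```python
def kmeans(profiles, centroids, iters=20):
    """Plain k-means on equal-length load profiles; returns (centroids, clusters)."""
    clusters = [[] for _ in centroids]
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in profiles:
            # assign each profile to the nearest centroid (squared Euclidean)
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)
        # recompute each centroid as the mean of its assigned profiles
        centroids = [tuple(sum(col) / len(cl) for col in zip(*cl)) if cl else c
                     for cl, c in zip(clusters, centroids)]
    return centroids, clusters

# Two obvious patterns: night-peaking vs day-peaking consumers (4 samples/day).
profiles = [(1, 1, 9, 9), (1, 2, 9, 8), (9, 9, 1, 1), (8, 9, 2, 1)]
tlps, groups = kmeans(profiles, centroids=[(0, 0, 10, 10), (10, 10, 0, 0)])
```

Real 15-minute load curves would have 96 points per day, and the best number of clusters would be chosen with a validity index rather than fixed in advance.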

Relevance:

100.00%

Publisher:

Abstract:

ABSTRACT - It is the purpose of the present thesis to emphasize, through a series of examples, the need for and value of appropriate pre-analysis of the impact of health care regulation. Specifically, the thesis presents three papers on the theme of regulation in different aspects of health care provision and financing. The first two consist of economic analyses of the impact of health care regulation, and the third comprises the creation of an instrument for supporting economic analysis of health care regulation, namely in the field of evaluation of health care programs. The first paper develops a model of health plan competition and pricing in order to understand the dynamics of health plan entry and exit in the presence of switching costs and alternative health premium payment systems. We build an explicit model of death spirals, in which profit-maximizing competing health plans find it optimal to adopt a pattern of increasing relative prices culminating in health plan exit. We find the steady-state numerical solution for the price sequence and the plans' optimal length of life through simulation, and perform some comparative statics. This allows us to show that using risk-adjusted premiums and imposing price floors are effective at reducing death spirals and switching costs, while having employees pay a fixed share of the premium enhances death spirals and increases switching costs. Price regulation of pharmaceuticals is one of the cost control measures adopted by the Portuguese government, as in many European countries. When such regulation decreases a product's real price over time, it may create an incentive for product turnover. Using panel data for the period 1997 through 2003 on drug packages sold in Portuguese pharmacies, the second paper addresses the question of whether price control policies create an incentive for product withdrawal. 
Our work builds on the product survival literature by accounting for unobservable product characteristics and heterogeneity among consumers when constructing quality, price control and competition indexes. These indexes are then used as covariates in a Cox proportional hazards model. We find that price control measures do indeed increase the probability of exit, and that this effect is not observed in the OTC market, where no such price regulation measures exist. We also find quality to have a significant positive impact on product survival. In the third paper, we develop a microsimulation discrete-event model (MSDEM) for cost-effectiveness analysis of Human Immunodeficiency Virus treatment, simulating individual paths from antiretroviral therapy (ART) initiation to death. Four driving forces determine the course of events: CD4+ cell count, viral load, resistance and adherence. A novel feature of the model with respect to previous MSDEMs is that distributions of time to event depend on individuals' characteristics and past history. Time to event was modeled using parametric survival analysis. Events modeled include: viral suppression, regimen switch due to virological failure, regimen switch due to other reasons, resistance development, hospitalization, AIDS events, and death. Disease progression is structured according to therapy lines and the model is parameterized with Portuguese observational cohort data. An application of the model is presented comparing the cost-effectiveness of ART initiation with two nucleoside analogue reverse transcriptase inhibitors (NRTI) plus one non-nucleoside reverse transcriptase inhibitor (NNRTI) against two NRTI plus a boosted protease inhibitor (PI/r) in HIV-1 infected individuals. We find 2NRTI+NNRTI to be a dominant strategy. Results predicted by the model reproduce those of the data used for parameterization and are in line with those published in the literature.
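The microsimulation idea in the third paper can be sketched as competing time-to-event draws: for each individual, a candidate time is sampled for every possible event from a parametric survival distribution and the soonest event fires. The exponential rates and event names below are invented for illustration; the thesis's distributions depend on individual characteristics and history.

```python
import random

def simulate_path(rates, horizon, rng):
    """rates: {event: exponential rate per year}. Returns the chronological
    list of (time, event) pairs for one simulated individual."""
    t, path = 0.0, []
    while t < horizon:
        # draw a candidate time for every competing event; the soonest fires
        draws = {e: rng.expovariate(r) for e, r in rates.items()}
        event = min(draws, key=draws.get)
        t += draws[event]
        if t >= horizon:
            break
        path.append((t, event))
        if event == "death":
            break           # absorbing state: the path ends
    return path

rng = random.Random(42)
path = simulate_path({"viral_suppression": 2.0, "hospitalisation": 0.3,
                      "death": 0.05}, horizon=10.0, rng=rng)
```

Averaging costs and quality-adjusted life years over many such paths per treatment arm is what turns the simulation into a cost-effectiveness comparison.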

Relevance:

100.00%

Publisher:

Abstract:

This study analyses financial data using the result characterization of a self-organized neural network model. The goal was to prototype a tool that may help an economist or a market analyst to analyse stock market series. To reach this goal, the tool shows economic dependencies and statistical measures over stock market series. The SOM (self-organizing map) neural network model was used to extract behavioural patterns from the data analysed, and based on this model an application to analyse financial data was developed. This application takes as input a portfolio of correlated or inverse-correlated markets. After the analysis with SOM, the result is represented by micro-clusters organized by their behavioural tendency. During the study, the need for a better analysis of the SOM algorithm's results became apparent. This problem was solved with a cluster-solution technique, which groups the micro-clusters obtained from analyses of the SOM U-Matrix. The study showed that the correlated and inverse-correlated markets project multiple clusters of data. These clusters represent multiple trend states that may be useful for technical professionals.
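A minimal self-organizing map in the spirit of the analysis described (map size, learning schedule and data are all illustrative, not the study's configuration): after training on two separated clusters, inputs from different clusters activate different best-matching units, which is the property the U-Matrix micro-cluster analysis builds on.

```python
import math
import random

def train_som(data, n_units=4, epochs=200, lr=0.5, sigma=1.0, seed=1):
    """Train a 1-D SOM grid on 2-D points; returns the unit weight vectors."""
    rng = random.Random(seed)
    units = [[rng.random(), rng.random()] for _ in range(n_units)]
    for _ in range(epochs):
        for x in data:
            # best-matching unit = unit with the closest weight vector
            win = min(range(n_units),
                      key=lambda i: sum((w - v) ** 2 for w, v in zip(units[i], x)))
            for i, u in enumerate(units):
                # neighbourhood influence decays with distance on the grid
                h = math.exp(-((i - win) ** 2) / (2 * sigma ** 2))
                for d in range(len(x)):
                    u[d] += lr * h * (x[d] - u[d])
        lr *= 0.99
    return units

# Two well-separated 2-D clusters; their best-matching units should differ.
data = [(0.1, 0.1), (0.2, 0.1), (0.9, 0.9), (0.8, 0.9)]
units = train_som(data)
def bmu(x):
    return min(range(len(units)),
               key=lambda i: sum((w - v) ** 2 for w, v in zip(units[i], x)))
```

For market data the inputs would be feature vectors per trading period (e.g. returns and volatility), and the trained map's U-Matrix distances would delimit the micro-clusters.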

Relevance:

100.00%

Publisher:

Abstract:

This report details precisely the steps and processes carried out to build an application that enables the cross-referencing of genetic data from information held in remote databases. It develops an in-depth study of the content and structure of the remote NCBI and KEGG databases, documenting a data-mining effort aimed at extracting from them the information needed to develop the genetic data cross-referencing application. Finally, it describes the programs, scripts and graphical environments implemented for the construction and subsequent deployment of the application that provides the cross-referencing functionality which is the object of this final-year project.