874 results for Distributed data access


Relevance:

30.00%

Publisher:

Abstract:

Sex workers constitute a heterogeneous group possessing a combination of vulnerability factors such as geographical instability, forced migration, substance addiction and lack of a legal residence permit. Access to healthcare for sex workers depends notably on the laws governing the sex market and on the migration policies in force in the host country. In this article, we review different European health strategies established for this vulnerable group and present preliminary results of a pilot study conducted among 50 street-based sex workers in Lausanne. The results are worrying: 56% have no health insurance, 96% are migrants and 66% hold no legal residence permit. These preliminary data should motivate policy-makers and public health departments to improve access to healthcare for this vulnerable population.

Relevance:

30.00%

Publisher:

Abstract:

The present paper advocates the creation of a federated, hybrid database in the cloud, integrating law data from all available public sources in one single open-access system and adding, in the process, relevant metadata to the indexed documents, including the identification of social and semantic entities and the relationships between them, using linked open data techniques and standards such as RDF. Examples of potential benefits and applications of this approach are also provided, including experiences from our previous research, in which data integration, graph databases and social and semantic network analysis were used to identify power relations, litigation dynamics and cross-reference patterns both intra- and inter-institutionally, covering most of the world's international economic courts.
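The cross-reference analysis described above can be pictured with a minimal sketch. The citation triples and court identifiers below are invented for illustration (this is not the paper's dataset or its RDF schema); the sketch only shows how intra- versus inter-institutional citation patterns can be counted over linked-data-style triples:

```python
from collections import Counter

# Hypothetical citation triples (subject, predicate, object) in the spirit of
# RDF linked data: each states that one court decision cites another.
# Court and case identifiers are invented for illustration.
triples = [
    ("wto:DS316", "cites", "wto:DS26"),
    ("wto:DS316", "cites", "icsid:ARB-05-22"),
    ("icsid:ARB-05-22", "cites", "wto:DS26"),
    ("icj:2010-14", "cites", "icsid:ARB-05-22"),
]

def institution(case_id):
    """A case's institution is taken to be the prefix before the colon."""
    return case_id.split(":")[0]

# Count citation links between institutions: intra- vs inter-institutional.
edges = Counter((institution(s), institution(o)) for s, _, o in triples)
intra = sum(n for (a, b), n in edges.items() if a == b)
inter = sum(n for (a, b), n in edges.items() if a != b)
print(intra, inter)  # 1 intra-institutional link, 3 inter-institutional
```

In a real system the triples would live in an RDF store and the counts would be obtained by a SPARQL query, but the aggregation logic is the same.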

Relevance:

30.00%

Publisher:

Abstract:

The aim of this study is to propose a new quantitative approach to assessing the quality of open-access university institutional repositories. The results of this new approach are tested on the Spanish university repositories. The assessment method is based on a binary codification of a set of features that objectively describe each repository; its purposes are to assess quality and to allow an almost automatic update of the feature data. First, a database of the 38 Spanish institutional repositories was created. The variables of analysis are presented and explained, whether they are drawn from the literature or newly defined. Among the characteristics analysed are the features of the software, the services of the repository, the features of the information system, Internet visibility and the licences of use. Results from the Spanish universities are provided as a practical example of the assessment and as a picture of the state of development of the open-access movement in Spain.
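The binary-codification idea can be sketched in a few lines: each repository is described by yes/no features, and its score is the fraction of features it meets. The feature names and repository data below are hypothetical, not the study's actual variables:

```python
# Hypothetical binary features; the study's real variable set is richer.
FEATURES = ["oai_pmh", "doi_support", "cc_licence", "usage_stats", "full_text_search"]

# Invented repositories coded 1 (feature present) / 0 (absent).
repositories = {
    "repo_a": {"oai_pmh": 1, "doi_support": 1, "cc_licence": 0,
               "usage_stats": 1, "full_text_search": 1},
    "repo_b": {"oai_pmh": 1, "doi_support": 0, "cc_licence": 0,
               "usage_stats": 0, "full_text_search": 1},
}

def quality_score(repo):
    """Binary codification: score = number of features met / total features."""
    return sum(repo[f] for f in FEATURES) / len(FEATURES)

scores = {name: quality_score(r) for name, r in repositories.items()}
print(scores)  # repo_a scores 0.8, repo_b scores 0.4
```

Because each feature is a simple yes/no check, the coding lends itself to the "almost automatic" updating the abstract mentions: rerunning the checks regenerates the whole table.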

Relevance:

30.00%

Publisher:

Abstract:

Cognitive radio networks (CRN) sense spectrum occupancy and manage themselves to operate in unused bands without disturbing licensed users. The detection capability of a radio system can be enhanced if the sensing process is performed jointly by a group of nodes so that the effects of wireless fading and shadowing can be minimized. However, taking a collaborative approach poses new security threats to the system as nodes can report false sensing data to force a wrong decision. Providing security to the sensing process is also complex, as it usually involves introducing limitations to the CRN applications. The most common limitation is the need for a static trusted node that is able to authenticate and merge the reports of all CRN nodes. This paper overcomes this limitation by presenting a protocol that is suitable for fully distributed scenarios, where there is no static trusted node.
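The cooperative-sensing setting can be illustrated with a toy fusion rule. This majority vote is a generic baseline used to show why false reports matter, not the distributed protocol presented in the paper:

```python
# Cooperative spectrum sensing by majority vote: each node reports a binary
# decision (1 = licensed user detected, 0 = band free). A malicious node can
# report false data to push the fused decision the wrong way; with a simple
# majority rule, the honest nodes prevail only while they outnumber liars.
# This is an illustration of the problem setting, not the paper's protocol.

def fuse_reports(reports, threshold=0.5):
    """Declare the band occupied if more than `threshold` of nodes report it."""
    return sum(reports) / len(reports) > threshold

honest = [1, 1, 1, 0, 1]            # most nodes sense the licensed user
with_liars = [1, 1, 1, 0, 1, 0, 0]  # two malicious nodes report 'free'

print(fuse_reports(honest))      # honest majority: band declared busy
print(fuse_reports(with_liars))  # honest nodes still outnumber the liars
```

The paper's contribution is precisely to make such fusion trustworthy without a static trusted node that authenticates and merges the reports.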

Relevance:

30.00%

Publisher:

Abstract:

Using data from the Public Health Service, we studied the demographic and clinical characteristics of 1,782 patients enrolled in methadone maintenance treatment (MMT) during 2001 in the Swiss canton of Vaud, comparing our findings with the results of a previous study covering 1976 to 1986. In 2001, most patients (76.9%) were treated in general practice. Mortality was low in this MMT population (1% per year). While patient age and sex profiles were similar to those found in the earlier study, we observed a substantial increase in the number of patients and in the number of practitioners treating MMT patients, probably reflecting low-threshold governmental policies and the creation of specialized centers. In conclusion, easier access to MMT increases the number of patients treated, but new concerns about the quality of management emerge: concomitant benzodiazepine prescription, low rates of screening for hepatitis B, hepatitis C and HIV, and unresolved social and psychiatric issues.

Relevance:

30.00%

Publisher:

Abstract:

With the increasing availability of various 'omics data, high-quality orthology assignment is crucial for evolutionary and functional genomics studies. We here present the fourth version of the eggNOG database (available at http://eggnog.embl.de) that derives nonsupervised orthologous groups (NOGs) from complete genomes, and then applies a comprehensive characterization and analysis pipeline to the resulting gene families. Compared with the previous version, we have more than tripled the underlying species set to cover 3686 organisms, keeping pace with genome project completions while prioritizing the inclusion of high-quality genomes to minimize error propagation from incomplete proteome sets. Major technological advances include (i) a robust and scalable procedure for the identification and inclusion of high-quality genomes, (ii) provision of orthologous groups for 107 different taxonomic levels compared with 41 in eggNOGv3, (iii) identification and annotation of particularly closely related orthologous groups, facilitating analysis of related gene families, (iv) improvements of the clustering and functional annotation approach, (v) adoption of a revised tree building procedure based on the multiple alignments generated during the process and (vi) implementation of quality control procedures throughout the entire pipeline. As in previous versions, eggNOGv4 provides multiple sequence alignments and maximum-likelihood trees, as well as broad functional annotation. Users can access the complete database of orthologous groups via a web interface, as well as through bulk download.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Classical disease phenotypes are mainly based on descriptions of symptoms and on the hypothesis that a given pattern of symptoms provides a diagnosis. With refined technologies, there is growing evidence that disease expression in patients is much more diverse, and that subtypes need to be defined to allow better targeted treatment. One of the aims of the Mechanisms of the Development of Allergy project (MeDALL, FP7) is to re-define the classical phenotypes of IgE-associated allergic diseases from birth to adolescence by consensus among experts, using a systematic review of the literature, and to identify possible gaps in research for new disease markers. This paper describes the methods to be used for the systematic review of the classical IgE-associated phenotypes; they are applicable in general to other systematic reviews that also address evidence-based phenotype definitions. METHODS/DESIGN: Eligible papers were identified by a PubMed search (complete database through April 2011), which yielded 12,043 citations. The review includes intervention studies (randomized and clinical controlled trials) and observational studies (cohort studies including birth cohorts, case-control studies), as well as case series. Systematic and non-systematic reviews, guidelines, position papers and editorials are not excluded but are dealt with separately. Two independent reviewers conducted, in parallel, consecutive title and abstract filtering scans. For publications whose title and abstract fulfilled the inclusion criteria, the full text was assessed. In the final step, two independent reviewers abstracted data using a pre-designed data extraction form, with disagreements resolved by discussion among investigators. DISCUSSION: The systematic review protocol described here makes it possible to generate broad, multi-phenotype reviews and consensus phenotype definitions. The in-depth analysis of the existing literature on the classification of IgE-associated allergic diseases through such a systematic review will 1) provide relevant information on the current epidemiologic definitions of allergic diseases, 2) address heterogeneity and interrelationships and 3) identify gaps in knowledge.

Relevance:

30.00%

Publisher:

Abstract:

The nature of client-server architecture implies that some modules are delivered to customers. These publicly distributed commercial software components are at risk, because users (and thus potential attackers) have physical access to some components of the distributed system. The problem becomes even worse if interpreted programming languages are used to create the client-side modules. Java, which was designed to be compiled into platform-independent byte-code, is no exception and carries additional risk. Along with advantages such as verifying the code before execution (to ensure that a program does not perform illegal operations), Java has disadvantages: at the byte-code stage, a Java program still contains symbolic names, line numbers and other debug information that can be used for reverse engineering. This Master's thesis focuses on the protection of Java-based client-server applications. I present a mixture of methods for protecting software from tortious acts, then realise these theoretical approaches in practice and examine their efficiency on examples of Java code. One criterion for evaluating the system is that the product is used in the specialised area of interactive television.

Relevance:

30.00%

Publisher:

Abstract:

The automation of genome sequencing and annotation, as well as large-scale gene expression measurement methods, generates a massive amount of data for model organisms such as human and mouse. Searching for gene-specific or organism-specific information throughout all the different databases has become a very difficult task, and often results in fragmented and unrelated answers. A database that federates and integrates genomic and transcriptomic data can greatly improve both search speed and the quality of the results by allowing a direct comparison of expression results obtained by different techniques. The main goal of this project, called the CleanEx database, is thus to provide access to public gene expression data via unique official gene names, and to represent heterogeneous expression data produced by different technologies in a way that facilitates joint analysis and cross-dataset comparisons. A consistent and up-to-date gene nomenclature is achieved by associating each single gene expression experiment with a permanent target identifier consisting of a physical description of the targeted RNA population or the hybridization reagent used. These targets are then mapped at regular intervals to the growing and evolving catalogues of genes from model organisms, such as human and mouse. This completely automatic mapping procedure relies partly on external genome information resources such as UniGene and RefSeq. The central part of CleanEx is a weekly built gene index containing cross-references to all public expression data already incorporated into the system. In addition, the expression target database of CleanEx provides gene mapping and quality-control information for various types of experimental resources, such as cDNA clones or Affymetrix probe sets. The Affymetrix mapping files are accessible as text files, for further use in external applications, and as individual entries via the web-based interfaces. The CleanEx web-based query interfaces offer access to individual entries via text-string searches or quantitative expression criteria, as well as cross-dataset analysis tools and cross-chip gene comparison. These tools have proven to be very efficient in expression data comparison and even, to a certain extent, in the detection of differentially expressed splice variants. The CleanEx flat files and tools are available online at http://www.cleanex.isb-sib.ch/.

Relevance:

30.00%

Publisher:

Abstract:

Research on power-line communications has concentrated on home automation, broadband indoor communications and broadband data transfer in a low-voltage distribution network between the home and the transformer station. Little research has been carried out on the high-frequency characteristics of industrial low-voltage distribution networks. The industrial low-voltage distribution network may be utilised as a communication channel for the data transfer required by the on-line condition monitoring of electric motors. The advantage of using power-line data transfer is that it does not require the installation of new cables. In the first part of this work, the characteristics of industrial low-voltage distribution network components and a pilot distribution network are measured and modelled with respect to power-line communication frequencies up to 30 MHz. The distributed inductances, capacitances and attenuation of MCMK-type low-voltage power cables are measured in the frequency band 100 kHz - 30 MHz, and an attenuation formula for the cables is formed based on the measurements. The input impedances of electric motors (15-250 kW) are measured using several signal couplings, and a measurement-based input impedance model for an electric motor with a slotted stator is formed. The model is designed for the frequency band 10 kHz - 30 MHz. Next, the effect of a DC (direct current) voltage link inverter on power-line data transfer is briefly analysed. Finally, a pilot distribution network is formed and signal attenuation in communication channels in the pilot environment is measured. The results are compared with simulations carried out using the developed models and the measured parameters for cables and motors. In the second part of this work, a narrowband power-line data transfer system is developed for the data transfer of the on-line condition monitoring of electric motors. It is developed using standard integrated circuits.
The system is tested in the pilot environment and the applicability of the system for the data transfer required by the on-line condition monitoring of electric motors is analysed.
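The step of forming an attenuation formula from measurements can be sketched as a least-squares fit. The model form alpha(f) = a + b·sqrt(f) (a common skin-effect-dominated shape) and the data points are assumptions for illustration; the thesis' actual fitted formula may differ:

```python
import math

# Synthetic "measurements" of cable attenuation (dB per unit length) over the
# 100 kHz - 30 MHz band, generated from the assumed model alpha = a + b*sqrt(f)
# with a = 0.5, b = 2.0 so the fit can be checked against known values.
freq_mhz = [0.1, 1.0, 5.0, 10.0, 20.0, 30.0]
atten_db = [0.5 + 2.0 * math.sqrt(f) for f in freq_mhz]

# Ordinary least squares for y = a + b*x with x = sqrt(f), in closed form.
x = [math.sqrt(f) for f in freq_mhz]
n = len(x)
x_mean = sum(x) / n
y_mean = sum(atten_db) / n
b = sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, atten_db)) / \
    sum((xi - x_mean) ** 2 for xi in x)
a = y_mean - b * x_mean
print(round(a, 3), round(b, 3))  # recovers a = 0.5, b = 2.0
```

With real measurement data the fit would of course not be exact, and the residuals would indicate how well the assumed model form captures the cable's behaviour.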

Relevance:

30.00%

Publisher:

Abstract:

Mechanistic soil-crop models have become indispensable tools to investigate the effect of management practices on the productivity or environmental impacts of arable crops. Ideally these models may claim to be universally applicable because they simulate the major processes governing the fate of inputs such as fertiliser nitrogen or pesticides. However, because they deal with complex systems and uncertain phenomena, site-specific calibration is usually a prerequisite to ensure their predictions are realistic. This implies that some experimental knowledge of the system to be simulated should be available prior to any modelling attempt, which severely limits practical applications of models. Because the demand for more general simulation results is high, modellers have nevertheless taken the bold step of extrapolating a model tested within a limited sample of real conditions to a much larger domain. While methodological questions are often disregarded in this extrapolation process, they are specifically addressed in this paper, in particular the issue of a model's a priori parameterisation. We thus implemented and tested a standard procedure to parameterise the soil components of a modified version of the CERES models. The procedure converts routinely available soil properties into functional characteristics by means of pedo-transfer functions. The resulting predictions of soil water and nitrogen dynamics, as well as of crop biomass, nitrogen content and leaf area index, were compared with observations from trials conducted in five locations across Europe (southern Italy, northern Spain, northern France and northern Germany). In three cases, the model's performance was judged acceptable when compared with the experimental errors on the measurements, based on a test of the model's root mean squared error (RMSE). Significant deviations between observations and model outputs were nevertheless noted in all sites, and could be ascribed to various model routines.
In decreasing importance, these were: water balance, the turnover of soil organic matter, and crop N uptake. A better match to field observations could therefore be achieved by visually adjusting related parameters, such as field-capacity water content or the size of soil microbial biomass. As a result, model predictions fell within the measurement errors in all sites for most variables, and the model’s RMSE was within the range of published values for similar tests. We conclude that the proposed a priori method yields acceptable simulations with only a 50% probability, a figure which may be greatly increased through a posteriori calibration. Modellers should thus exercise caution when extrapolating their models to a large sample of pedo-climatic conditions for which they have only limited information.
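The RMSE-based acceptability test mentioned above can be sketched as follows; the observations, predictions and measurement error are invented numbers, not the paper's trial data:

```python
import math

# Compare the model's root mean squared error (RMSE) against the assumed
# experimental error on the measurements: the prediction is judged acceptable
# when RMSE does not exceed that error. All values are illustrative.
observed  = [3.1, 2.8, 3.5, 3.0]   # e.g. crop biomass, t/ha (invented)
predicted = [3.0, 3.0, 3.3, 2.9]   # model outputs (invented)
measurement_error = 0.3            # assumed experimental error

rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted))
                 / len(observed))
acceptable = rmse <= measurement_error
print(round(rmse, 3), acceptable)  # 0.158, acceptable
```

The same comparison, repeated per site and per output variable, yields the "acceptable in three of five cases" kind of verdict reported in the abstract.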

Relevance:

30.00%

Publisher:

Abstract:

Background: During the last part of the 1990s the chance of surviving breast cancer increased. Changes in survival functions reflect a mixture of effects: both the introduction of adjuvant treatments and early screening with mammography played a role in the decline in mortality. Evaluating the contribution of these interventions using mathematical models requires survival functions before and after their introduction. Furthermore, the required survival functions may differ by age group and are related to disease stage at diagnosis. Sometimes detailed information is not available, as was the case for the region of Catalonia (Spain); one may then derive the functions using information from other geographical areas. This work presents the methodology used to estimate age- and stage-specific Catalan breast cancer survival functions from scarce Catalan survival data by adapting the age- and stage-specific US functions. Methods: Cubic splines were used to smooth the data and obtain continuous hazard rate functions. We then fitted a Poisson model, with time as a covariate, to derive hazard ratios. The hazard ratios were applied to US survival functions detailed by age and stage to obtain the Catalan estimates. Results: We first estimated the hazard ratios for Catalonia versus the USA before and after the introduction of screening. The hazard ratios were then multiplied by the age- and stage-specific breast cancer hazard rates from the USA to obtain the Catalan hazard rates. We also compared breast cancer survival in Catalonia and the USA in two time periods, before cancer control interventions (USA 1975–79, Catalonia 1980–89) and after (USA and Catalonia 1990–2001). Survival in Catalonia in the 1980–89 period was worse than in the USA during 1975–79, but the differences disappeared in 1990–2001. Conclusion: Our results suggest that access to better treatments and quality of care contributed to large improvements in survival in Catalonia. In addition, we obtained detailed breast cancer survival functions that will be used for modelling the effect of screening and adjuvant treatments in Catalonia.
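The adaptation step, applying an estimated hazard ratio to the US age- and stage-specific hazard rates and deriving a survival curve, can be sketched with invented numbers (the actual ratios and rates are detailed in the paper):

```python
import math

# Multiply US hazard rates by a hypothetical Catalonia-vs-USA hazard ratio,
# then convert the cumulative hazard into survival via S(t) = exp(-sum of h).
# All numbers below are invented for illustration.
us_hazard = [0.02, 0.03, 0.04, 0.04]  # annual hazard, one entry per year
hazard_ratio = 1.2                     # hypothetical Catalonia vs USA ratio

cat_hazard = [h * hazard_ratio for h in us_hazard]
survival = []
cum = 0.0
for h in cat_hazard:
    cum += h
    survival.append(math.exp(-cum))   # survival probability at end of each year

print([round(s, 3) for s in survival])
```

A hazard ratio above 1 shifts the whole curve downwards relative to the US one, which matches the worse Catalan survival found for the pre-intervention period.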

Relevance:

30.00%

Publisher:

Abstract:

A newspaper content management system has to deal with a very heterogeneous information space, as our experience at the Diari Segre newspaper has shown. The greatest problem is to harmonise the different ways the users involved (journalists, archivists, etc.) structure the newspaper information space, i.e. news, topics, headlines and so on. Our approach is based on ontologies and differentiated universes of discourse (UoD). Users interact with the system and, from this interaction, integration rules are derived. These rules are based on Description Logic ontological relations for subsumption and equivalence. They relate the different UoD and produce a shared conceptualisation of the newspaper information domain.
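A subsumption-based integration rule can be illustrated with a toy extent check; the concepts and instances below are hypothetical, not the Diari Segre ontology, and a real system would use a Description Logic reasoner rather than set inclusion:

```python
# Toy extents: which news items each user community files under each concept.
# Subsumption is approximated extensionally: concept A subsumes concept B
# when every instance of B is also an instance of A.
extents = {
    "news_item": {"n1", "n2", "n3", "n4"},
    "sports_news": {"n2", "n3"},
    "match_report": {"n3"},
}

def subsumes(general, specific):
    """general subsumes specific if specific's instances are a subset."""
    return extents[specific] <= extents[general]

print(subsumes("news_item", "sports_news"))    # broader concept subsumes
print(subsumes("sports_news", "match_report")) # chain of subsumption
print(subsumes("match_report", "sports_news")) # does not hold the other way
```

Equivalence is the symmetric case (each subsumes the other), and rules derived from user interaction would add or confirm such relations between the different universes of discourse.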

Relevance:

30.00%

Publisher:

Abstract:

Background: Parallel T-Coffee (PTC) was the first parallel implementation of the T-Coffee multiple sequence alignment tool. It is based on MPI and RMA mechanisms. Its purpose is to reduce the execution time of large-scale sequence alignments, and it can be run on distributed-memory clusters, allowing users to align data sets consisting of hundreds of proteins within a reasonable time. However, most of the potential users of this tool are not familiar with the use of grids or supercomputers. Results: In this paper we show how PTC can be easily deployed and controlled on a supercomputer architecture through a web portal built with Rapid, a tool for efficiently generating standardized portlets for a wide range of applications; the approach described here is generic enough to be applied to other applications or to deploy PTC on different HPC environments. Conclusions: The PTC portal allows users to upload a large number of sequences to be aligned by the parallel version of T-Coffee that could not be aligned on a single machine due to memory and execution-time constraints. The web portal provides a user-friendly solution.