953 results for open data capabilities


Relevance: 30.00%

Abstract:

Minor lymphocyte stimulating (Mls) antigens specifically stimulate T cell responses that are restricted to particular T cell receptor (TCR) beta chain variable domains. The Mls phenotype is genetically controlled by an open reading frame (orf) located in the 3' long terminal repeat of mouse mammary tumor virus (MMTV); however, the mechanism of action of the orf gene product is unknown. Whereas predicted orf amino acid sequences show strong overall homology, the 20-30 COOH-terminal residues are strikingly polymorphic. This polymorphic region correlates with TCR V beta specificity. We have generated monoclonal antibodies to a synthetic peptide encompassing the 19 COOH-terminal amino acid residues of Mtv-7 orf, which encodes the Mls-1a determinant. We show here that these antibodies block Mls responses in vitro and can interfere specifically with thymic clonal deletion of Mls-1a reactive V beta 6+ T cells in neonatal mice. Furthermore, the antibodies can inhibit V beta 6+ T cell responses in vivo to an infectious MMTV that shares orf sequence homology and TCR specificity with Mtv-7. These results confirm the predicted extracellular localization of the orf COOH terminus and imply that the orf proteins of both endogenous and exogenous MMTV interact directly with TCR V beta.

Relevance: 30.00%

Abstract:

While the Internet has given educators access to a steady supply of Open Educational Resources, the educational rubrics commonly shared on the Web are generally in the form of static, non-semantic presentational documents or in the proprietary data structures of commercial content and learning management systems. With the advent of Semantic Web Standards, producers of online resources have a new framework to support the open exchange of software-readable datasets. Despite these advances, the state of the art of digital representation of rubrics as sharable documents has not progressed. This paper proposes an ontological model for digital rubrics. This model is built upon the Semantic Web Standards of the World Wide Web Consortium (W3C), principally the Resource Description Framework (RDF) and Web Ontology Language (OWL).
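To make the proposal concrete, here is a minimal sketch in Python, using the rdflib library, of what an RDF/OWL rubric model might look like; the vocabulary (Rubric, Criterion, hasCriterion and so on) is invented for illustration and is not the ontology the paper actually defines.

```python
# Hypothetical sketch of an RDF/OWL rubric model using rdflib. The class and
# property names are illustrative only, not the paper's actual vocabulary.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/rubric#")
g = Graph()
g.bind("ex", EX)

# Declare the classes of the toy ontology.
for cls in (EX.Rubric, EX.Criterion, EX.Level):
    g.add((cls, RDF.type, OWL.Class))

# An object property linking a rubric to its criteria.
g.add((EX.hasCriterion, RDF.type, OWL.ObjectProperty))
g.add((EX.hasCriterion, RDFS.domain, EX.Rubric))
g.add((EX.hasCriterion, RDFS.range, EX.Criterion))

# A concrete rubric instance with one criterion.
g.add((EX.essayRubric, RDF.type, EX.Rubric))
g.add((EX.clarity, RDF.type, EX.Criterion))
g.add((EX.clarity, RDFS.label, Literal("Clarity of argument")))
g.add((EX.essayRubric, EX.hasCriterion, EX.clarity))

print(g.serialize(format="turtle"))
```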

Relevance: 30.00%

Abstract:

In an earlier investigation (Burger et al., 2000), five sediment cores near the Rodrigues Triple Junction in the Indian Ocean were studied by applying classical statistical methods (fuzzy c-means clustering, linear mixing model, principal component analysis) for the extraction of endmembers and the evaluation of the spatial and temporal variation of geochemical signals. Three main factors of sedimentation were expected by the marine geologists: a volcano-genetic, a hydrothermal and an ultra-basic factor. The display of fuzzy membership values and/or factor scores versus depth provided consistent results for two factors only; the ultra-basic component could not be identified. The reason for this may be that only traditional statistical methods were applied, i.e. the untransformed components were used, with the cosine-theta coefficient as similarity measure. During the last decade considerable progress in compositional data analysis was made, and many case studies were published using new tools for exploratory analysis of these data. It therefore makes sense to check whether the application of suitable data transformations, reduction of the D-part simplex to two or three factors, and visual interpretation of the factor scores would lead to a revision of the earlier results and to answers to the open questions. In this paper we follow the lines of a paper by R. Tolosana-Delgado et al. (2005), starting with a problem-oriented interpretation of the biplot scattergram, extracting compositional factors, ilr-transforming the components and visualizing the factor scores in a spatial context: the compositional factors are plotted versus depth (time) of the core samples in order to facilitate the identification of the expected sources of the sedimentary process.

Key words: compositional data analysis, biplot, deep sea sediments
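For readers unfamiliar with the log-ratio machinery referred to above, the following minimal numpy sketch implements the centred log-ratio (clr) transform, a close relative of the ilr transform mentioned in the abstract; it is a generic textbook illustration, not the authors' code.

```python
# Minimal sketch of the centred log-ratio (clr) transform for compositional
# data. Generic illustration only, not the code used in the paper.
import numpy as np

def clr(x):
    """Centred log-ratio transform of compositions (strictly positive parts)."""
    x = np.asarray(x, dtype=float)
    logx = np.log(x)
    # Subtract the log of the geometric mean of each row (composition).
    return logx - logx.mean(axis=-1, keepdims=True)

# Example: three 4-part compositions (rows sum to 1).
comps = np.array([
    [0.10, 0.20, 0.30, 0.40],
    [0.25, 0.25, 0.25, 0.25],
    [0.05, 0.15, 0.35, 0.45],
])
print(clr(comps))  # Each transformed row sums to zero.
```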

Relevance: 30.00%

Abstract:

Compositional data naturally arise from the scientific analysis of the chemical composition of archaeological material such as ceramic and glass artefacts. Data of this type can be explored using a variety of techniques, from standard multivariate methods such as principal components analysis and cluster analysis, to methods based upon the use of log-ratios. The general aim is to identify groups of chemically similar artefacts that could potentially be used to answer questions of provenance. This paper will demonstrate work in progress on the development of a documented library of methods, implemented using the statistical package R, for the analysis of compositional data. R is an open source package that makes available very powerful statistical facilities at no cost. We aim to show how, with the aid of statistical software such as R, traditional exploratory multivariate analysis can easily be used alongside, or in combination with, specialist techniques of compositional data analysis. The library has been developed from a core of basic R functionality, together with purpose-written routines arising from our own research (for example that reported at CoDaWork'03). In addition, we have included other appropriate publicly available techniques and libraries that have been implemented in R by other authors. Available functions range from standard multivariate techniques through to various approaches to log-ratio analysis and zero replacement. We also discuss and demonstrate a small selection of relatively new techniques that have hitherto been little used in archaeometric applications involving compositional data. The application of the library to the analysis of data arising in archaeometry will be demonstrated; results from different analyses will be compared; and the utility of the various methods discussed.
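As an illustration of one of the specialist techniques named above, here is a numpy sketch of simple multiplicative zero replacement, the standard pre-processing step that makes log-ratio analysis possible when some parts are zero; it is a generic textbook version, not a function from the R library described.

```python
# Sketch of simple multiplicative zero replacement for compositional data,
# needed because log-ratios are undefined when any part is zero. A generic
# textbook version, not a function from the library described above.
import numpy as np

def multiplicative_replacement(x, delta=1e-3):
    """Impute zeros with delta and shrink the non-zero parts so that each
    composition (row) still sums to 1."""
    x = np.asarray(x, dtype=float)
    zero = (x == 0)
    shrink = 1.0 - delta * zero.sum(axis=-1, keepdims=True)
    adjusted = np.where(zero, delta, x * shrink)
    # Final closure guards against rows that did not sum to 1 originally.
    return adjusted / adjusted.sum(axis=-1, keepdims=True)

comps = np.array([[0.0, 0.3, 0.7],
                  [0.5, 0.5, 0.0]])
print(multiplicative_replacement(comps))
```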

Relevance: 30.00%

Abstract:

"compositions" is a new R package for the analysis of compositional and positive data. It contains four classes corresponding to the four different types of compositional and positive geometry (including the Aitchison geometry). It provides means for computation, plotting and high-level multivariate statistical analysis in all four geometries. These geometries are treated in a fully analogous way, based on the principle of working in coordinates and the object-oriented programming paradigm of R. In this way, the functions called automatically select the most appropriate type of analysis as a function of the geometry. The graphical capabilities include ternary diagrams and tetrahedrons, various compositional plots (boxplots, barplots, pie charts) and extensive graphical tools for principal components. Afterwards, portion and proportion lines, straight lines and ellipses in all geometries can be added to plots. The package is accompanied by a hands-on introduction, documentation for every function, demos of the graphical capabilities and plenty of usage examples. It allows direct and parallel computation in all four vector spaces and provides the beginner with a copy-and-paste style of data analysis, while letting advanced users keep the functionality and customizability they demand of R, as well as all necessary tools to add their own analysis routines. A complete example is included in the appendix.
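The "principle of working in coordinates" that the package is built on can be sketched in a few lines: map compositions to unconstrained log-ratio coordinates, apply an ordinary multivariate operation there, and map back to the simplex. The following Python sketch (the package itself is written in R) illustrates the idea for the compositional mean.

```python
# Sketch of the "principle of working in coordinates": map compositions to
# unconstrained log-ratio coordinates, compute there, and map back to the
# simplex. Illustrative Python only; the "compositions" package itself is R.
import numpy as np

def clr(x):
    """Centred log-ratio transform."""
    logx = np.log(np.asarray(x, dtype=float))
    return logx - logx.mean(axis=-1, keepdims=True)

def clr_inv(y):
    """Inverse clr: back to the simplex (closure to 1)."""
    e = np.exp(y)
    return e / e.sum(axis=-1, keepdims=True)

comps = np.array([
    [0.1, 0.3, 0.6],
    [0.2, 0.5, 0.3],
    [0.4, 0.4, 0.2],
])

# The Aitchison (compositional) mean: an ordinary arithmetic mean taken in
# coordinates, then back-transformed.
print(clr_inv(clr(comps).mean(axis=0)))
```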

Relevance: 30.00%

Abstract:

This research work deals with the problem of modeling and designing a low-level speed controller for the mobile robot PRIM. The main objective is to develop an effective educational tool. On one hand, the interest in using the open mobile platform PRIM lies in integrating several subjects highly related to automatic control theory in an educational context, embracing the subjects of communications, signal processing, sensor fusion and hardware design, amongst others. On the other hand, the idea is to implement useful navigation strategies such that the robot can serve as a mobile multimedia information point. It is in this context, when navigation strategies are oriented towards goal achievement, that a local model predictive control approach is adopted. Such studies are thus presented as a very interesting control strategy for developing the future capabilities of the system.
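As a rough illustration of the kind of local model predictive control mentioned, the following Python sketch applies finite-horizon MPC to an assumed discrete first-order speed model; the plant parameters, weights and horizon are invented for illustration and are not PRIM's identified model.

```python
# Minimal sketch of finite-horizon linear MPC for a speed loop, under an
# assumed discrete first-order model v[k+1] = a*v[k] + b*u[k]. All numbers
# are illustrative, not PRIM's identified dynamics.
import numpy as np

a, b = 0.9, 0.1          # assumed plant parameters
N = 10                   # prediction horizon
q, r = 1.0, 0.01         # tracking vs. control-effort weights
v_ref = 1.0              # target speed (m/s)

def mpc_step(v0):
    # Stack the horizon predictions v = F*v0 + G*u, then solve the
    # unconstrained weighted least-squares problem for the input sequence u.
    F = np.array([a ** (k + 1) for k in range(N)])
    G = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            G[i, j] = a ** (i - j) * b
    H = q * (G.T @ G) + r * np.eye(N)
    f = q * (G.T @ (v_ref - F * v0))
    u = np.linalg.solve(H, f)
    return u[0]          # receding horizon: apply only the first input

# Simulate the closed loop from rest.
v = 0.0
for _ in range(30):
    v = a * v + b * mpc_step(v)
print(f"speed after 30 steps: {v:.3f} m/s")
```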

Relevance: 30.00%

Abstract:

The statistical analysis of compositional data should be carried out using log-ratios of parts, which are difficult to use correctly in standard statistical packages. For this reason a freeware package, named CoDaPack, was created. This software implements most of the basic statistical methods suitable for compositional data. In this paper we describe the new version of the package, now called CoDaPack3D. It is developed in Visual Basic for Applications (associated with Excel©), Visual Basic and OpenGL, and it is oriented towards users with minimal knowledge of computers, with the aim of being simple and easy to use. This new version includes new graphical output in 2D and 3D. These outputs can be zoomed and, in 3D, rotated. A customization menu is also included, and outputs can be saved in JPEG format. This version also includes interactive help, and all dialog windows have been improved in order to facilitate their use. To use CoDaPack one opens Excel© and introduces the data in a standard spreadsheet. These should be organized as a matrix where Excel© rows correspond to the observations and columns to the parts. The user executes macros that return numerical or graphical results. There are two kinds of numerical results: new variables and descriptive statistics, and both appear on the same sheet. Graphical output appears in independent windows. In the present version there are 8 menus, with a total of 38 submenus which, after some dialogue, directly call the corresponding macro. The dialogues ask the user to input variables and any further parameters needed, as well as where to put the results. The web site http://ima.udg.es/CoDaPack contains this freeware package; only Microsoft Excel© under Microsoft Windows© is required to run the software.

Key words: compositional data analysis, software

Relevance: 30.00%

Abstract:

This paper shows the impact of the atomic capabilities concept, which incorporates control-oriented knowledge of linear control systems into the decision-making structure of physical agents. These agents operate in a real environment, managing physical objects (e.g. their physical bodies) in coordinated tasks. This approach is presented using introspective reasoning and control theory, based on the specific tasks of passing a ball and executing the offside manoeuvre between physical agents in the robotic soccer testbed. Experimental results and conclusions are presented, emphasising the advantages of our approach, which improves multi-agent performance in cooperative systems.

Relevance: 30.00%

Abstract:

Fecal incontinence (FI) is the involuntary loss of rectal contents through the anal canal. Reports of its prevalence vary from 1% to 21%. Studies have demonstrated a positive effect of injectable bulking agents on FI symptoms. This study evaluated the safety and efficacy of NASHA/Dx gel in the treatment of FI. One hundred fifteen eligible patients suffering from FI received 4 injections of 1 mL NASHA/Dx gel. Primary efficacy was based on data from the 86 patients who completed the study. This study demonstrated a ≥50% reduction from baseline in the number of FI episodes in 57.1% of patients at 6 months, and in 64.0% at 12 months. Significant improvements (P < .001) were also noted in the total number of both solid and loose FI episodes, FI-free days, CCFIS, and FIQL scores in all 4 domains. The majority of treatment-related AEs (94.9%) were of mild or moderate intensity, and most AEs (98.7%) resolved spontaneously, or following treatment, without sequelae. The results of this study indicate that NASHA/Dx gel was efficacious in the treatment of FI. The treatment effect was significant both in the reduction of the number of FI episodes and in disease-specific quality of life at 6 months, and lasted up to 12 months after treatment.

Relevance: 30.00%

Abstract:

The mission of the European infrastructure ICOS (Integrated Carbon Observation System) is to provide long-term greenhouse gas measurements, enabling the study of the current state and future behaviour of the global carbon cycle. In this context, geomati.co has developed a data search and download portal that integrates the measurements taken in the terrestrial, marine and atmospheric domains, disciplines that until now had managed their data separately. The portal supports searches over multiple geographic extents, by time range, by free text or by a subset of magnitudes; it also supports previewing the data and adding datasets of interest to a download "cart". When a data collection is downloaded, it is assigned a universal identifier that allows it to be referenced in eventual publications and downloaded again in the future (so that published experiments are reproducible). The portal relies on open formats in common use in the scientific community, such as the NetCDF data format, and on the ISO profile of CSW, the catalogue and search standard of the geospatial domain. The portal has been built from existing free software components, such as Thredds Data Server, GeoNetwork Open Source and GeoExt, and its code and documentation will be published under a free licence to enable reuse in other projects.
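As a small illustration of the open formats mentioned, the following Python sketch opens a downloaded NetCDF file with the netCDF4 library; the file name and variable name are placeholders, not actual ICOS products.

```python
# Minimal sketch of inspecting a downloaded NetCDF dataset with the netCDF4
# library. The file name and variable name below are placeholders.
from netCDF4 import Dataset

with Dataset("icos_sample.nc") as ds:          # hypothetical file name
    print(ds.variables.keys())                 # list the variables on offer
    co2 = ds.variables["co2_mole_fraction"]    # assumed variable name
    print(co2.units, co2[:10])                 # units attribute, if defined, and first values
```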

Relevance: 30.00%

Abstract:

This project undertakes research both into finding predictors via clustering techniques and into reviewing free data mining software. The research is based on a case study, from which, in addition to the free KDD software used by the scientific community, a new free tool for pre-processing the data is presented. The predictors are intended for the e-learning domain, as the data from which they are to be inferred are student grades from different e-learning environments. Through our case study, not only are clustering algorithms tested, but additional goals are also proposed.
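As a generic illustration of the clustering approach described, the following Python sketch groups synthetic student grades with scikit-learn's KMeans; the data and parameters are invented and do not reproduce the project's tool or dataset.

```python
# Generic illustration of clustering students by their grades using
# scikit-learn's KMeans; synthetic data, not the project's actual dataset.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic grades (0-10) for 60 students across 4 assessments.
grades = np.clip(rng.normal(loc=[[6, 6, 6, 6]], scale=2.0, size=(60, 4)), 0, 10)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(grades)
for c in range(3):
    members = grades[kmeans.labels_ == c]
    print(f"cluster {c}: {len(members)} students, mean grade {members.mean():.2f}")
```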

Relevance: 30.00%

Abstract:

Researchers working in the field of global connectivity analysis using diffusion magnetic resonance imaging (MRI) can count on a wide selection of software packages for processing their data, with methods ranging from the reconstruction of the local intra-voxel axonal structure to the estimation of the trajectories of the underlying fibre tracts. However, each package is generally task-specific and uses its own conventions and file formats. In this article we present the Connectome Mapper, a software pipeline aimed at helping researchers through the tedious process of organising, processing and analysing diffusion MRI data to perform global brain connectivity analyses. Our pipeline is written in Python and is freely available as open-source at www.cmtk.org.

Relevance: 30.00%

Abstract:

PURPOSE OF REVIEW: Adherence to preventive measures and prescribed medications is the cornerstone of the successful management of hypertension. The role of adherence is particularly important when treatments are not providing the expected clinical results, for example, in patients with resistant hypertension. The goal of this article is to review recent observations regarding drug adherence in resistant hypertension. RECENT FINDINGS: Today, the role of drug adherence as a potential cause of resistant hypertension is largely underestimated. Most studies suggest that low adherence to the prescribed medications can affect up to 50% of patients with resistant hypertension. Good adherence to therapy is generally associated with an improved prognosis. Nonetheless, adherence should probably not be a treatment target per se, because data on adherence should always be interpreted in view of the clinical results. In our opinion, the availability of reliable data on drug adherence would be a major help for physicians managing patients who are apparently resistant to therapy. SUMMARY: The current development of new drugs for hypertension is slow. Thus, focusing on adherence to the drugs already available is an important way to improve blood pressure control in the population. More emphasis should be put on measuring drug adherence in patients with resistant hypertension to avoid costly investigations and treatments.

Relevance: 30.00%

Abstract:

BACKGROUND: Phase-IV, open-label, single-arm study (NCT01203917) to assess the efficacy and safety/tolerability of first-line gefitinib in Caucasian patients with stage IIIA/B/IV, epidermal growth factor receptor (EGFR) mutation-positive non-small-cell lung cancer (NSCLC). METHODS: Treatment: gefitinib 250 mg per day until progression. Primary endpoint: objective response rate (ORR). Secondary endpoints: disease control rate (DCR), progression-free survival (PFS), overall survival (OS) and safety/tolerability. Pre-planned exploratory objective: EGFR mutation analysis in matched tumour and plasma samples. RESULTS: Of 1060 screened patients with NSCLC (859 with known mutation status; 118 positive, mutation frequency 14%), 106 with EGFR sensitising mutations were enrolled (female 70.8%; adenocarcinoma 97.2%; never-smoker 64.2%). At data cutoff: ORR 69.8% (95% confidence interval (CI) 60.5-77.7), DCR 90.6% (95% CI 83.5-94.8), median PFS 9.7 months (95% CI 8.5-11.0), median OS 19.2 months (95% CI 17.0-NC; 27% maturity). Most common adverse events (AEs; any grade): rash (44.9%), diarrhoea (30.8%); CTC (Common Toxicity Criteria) grade 3/4 AEs: 15%; SAEs: 19%. Baseline plasma 1 samples were available for 803 patients (784 with known mutation status; 82 positive; mutation frequency 10%). Plasma 1 EGFR mutation test sensitivity: 65.7% (95% CI 55.8-74.7). CONCLUSION: First-line gefitinib was effective and well tolerated in Caucasian patients with EGFR mutation-positive NSCLC. Plasma samples could be considered for mutation analysis if tumour tissue is unavailable.

Relevance: 30.00%

Abstract:

The aim of this study is to propose a new quantitative approach to assessing the quality of Open Access university institutional repositories, with the results of the new approach tested on Spanish university repositories. The assessment method is based on a binary codification of a proposed set of features that objectively describe the repositories. The purposes of this method are to assess quality and to provide an almost automatic system for updating the feature data. First, a database was created of the 38 Spanish institutional repositories. The analysis variables are presented and explained, whether they come from the literature or are newly introduced. Among the characteristics analyzed are the features of the software, the services of the repository, the features of the information system, Internet visibility and the licenses of use. Results from the Spanish universities are provided as a practical example of the assessment and to give a picture of the state of development of the open access movement in Spain.
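A minimal sketch of the kind of binary codification described might look as follows; the feature names and scores are invented for illustration and are not the study's actual variables.

```python
# Minimal sketch of the binary codification described above: each repository
# is coded 1/0 on a set of objective features and scored by the sum. The
# feature names and values are invented for illustration.
features = ["OAI-PMH endpoint", "usage licence displayed",
            "persistent identifiers", "metadata search", "RSS/Atom feeds"]

repositories = {
    "repo_A": [1, 1, 0, 1, 1],   # hypothetical codings
    "repo_B": [1, 0, 0, 1, 0],
}

for name, codes in repositories.items():
    print(f"{name}: {sum(codes)}/{len(features)} features present")
```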