926 results for Unicode Common Locale Data Repository


Relevance:

30.00%

Publisher:

Abstract:

MetaNetX is a repository of genome-scale metabolic networks (GSMNs) and biochemical pathways from a number of major resources imported into a common namespace of chemical compounds, reactions, cellular compartments (namely MNXref) and proteins. The MetaNetX.org website (http://www.metanetx.org/) provides access to these integrated data as well as a variety of tools that allow users to import their own GSMNs, map them to the MNXref reconciliation, and manipulate, compare, analyze, simulate (using flux balance analysis) and export the resulting GSMNs. MNXref and MetaNetX are regularly updated and freely available.
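Flux balance analysis, mentioned above as the simulation method, optimizes fluxes subject to the steady-state constraint S·v = 0 on the stoichiometric matrix. A minimal sketch of that core constraint, with a toy network whose metabolites and reactions are invented for illustration and are not taken from MetaNetX:

```python
# Each reaction maps metabolite -> stoichiometric coefficient.
# Toy network: A enters, is converted to B, and B is exported.
reactions = {
    "uptake":  {"A": +1},            # A enters the system
    "convert": {"A": -1, "B": +1},   # A -> B
    "export":  {"B": -1},            # B leaves the system
}

def is_steady_state(fluxes, reactions, tol=1e-9):
    """Return True if the net production of every metabolite is ~zero."""
    balance = {}
    for rxn, v in fluxes.items():
        for met, coef in reactions[rxn].items():
            balance[met] = balance.get(met, 0.0) + coef * v
    return all(abs(x) < tol for x in balance.values())

fluxes = {"uptake": 2.0, "convert": 2.0, "export": 2.0}
print(is_steady_state(fluxes, reactions))  # True
```

A full FBA additionally maximizes an objective (e.g. biomass flux) over all steady-state flux vectors within bounds, which tools such as those on MetaNetX.org do with a linear-programming solver.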

Relevance:

30.00%

Publisher:

Abstract:

Life sciences are yielding huge data sets that underpin scientific discoveries fundamental to improvement in human health, agriculture and the environment. In support of these discoveries, a plethora of databases and tools are deployed, in technically complex and diverse implementations, across a spectrum of scientific disciplines. The corpus of documentation of these resources is fragmented across the Web, with much redundancy, and has lacked a common standard of information. The outcome is that scientists must often struggle to find, understand, compare and use the best resources for the task at hand. Here we present a community-driven curation effort, supported by ELIXIR, the European infrastructure for biological information, that aspires to a comprehensive and consistent registry of information about bioinformatics resources. The sustainable upkeep of this Tools and Data Services Registry is assured by a curation effort driven by and tailored to local needs, and shared amongst a network of engaged partners. As of November 2015, the registry includes 1785 resources, with depositions from 126 individual registrations including 52 institutional providers and 74 individuals. With community support, the registry can become a standard for dissemination of information about bioinformatics resources: we welcome everyone to join us in this common endeavour. The registry is freely available at https://bio.tools.

Relevance:

30.00%

Publisher:

Abstract:

Colorectal cancer (CRC) is the third most common cancer and the fourth leading cause of cancer death worldwide. About 85% of CRC cases are known to show chromosomal instability: allelic imbalance at several chromosomal loci, and chromosome amplification and translocation. The aim of this study was to determine the recurrent copy number variant (CNV) regions present in stage II CRC through whole exome sequencing, a rapidly developing targeted next-generation sequencing (NGS) technology that provides an accurate alternative approach for accessing genomic variations. 42 normal-tumor paired samples were sequenced on the Illumina Genome Analyzer. Data were analyzed with VarScan2, and segmentation was performed with the R package R-GADA. The segments were summarized across all samples, and the result was overlapped with DEG data for the same samples from a previous study in the group. The major and most recurrent CNV segments were: gain of chromosome 7pq (13%), 13q (31%) and 20q (75%), and loss of 8p (25%), 17p (23%) and 18pq (27%). These results are consistent with the known literature on CNV in CRC and other cancers, but our methodology should be validated by array comparative genomic hybridisation (aCGH) profiling, which is currently the gold standard for genetic diagnosis of CNV.
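The recurrence percentages quoted above come from counting, per candidate region, how many samples carry an overlapping segment of the same type. A small sketch of that summarization step; the segment coordinates below are invented for illustration:

```python
def recurrence(samples, region):
    """Fraction of samples with a segment of the given type overlapping region."""
    chrom, start, end, kind = region
    hits = 0
    for segments in samples:
        # Half-open interval overlap test on matching chromosome arm and type
        if any(c == chrom and k == kind and s < end and e > start
               for (c, s, e, k) in segments):
            hits += 1
    return hits / len(samples)

# Toy cohort of four samples, each a list of (arm, start, end, type) segments
cohort = [
    [("20q", 1_000, 5_000, "gain")],
    [("20q", 2_000, 9_000, "gain"), ("8p", 0, 4_000, "loss")],
    [("17p", 0, 3_000, "loss")],
    [("20q", 500, 1_500, "gain")],
]
print(recurrence(cohort, ("20q", 0, 10_000, "gain")))  # 0.75
```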

Relevance:

30.00%

Publisher:

Abstract:

Breast cancer is the most commonly diagnosed cancer and the leading cause of cancer death among females worldwide. It is considered a highly heterogeneous disease that must be classified into more homogeneous groups. Hence, the purpose of this study was to classify breast tumors based on variations in gene expression patterns derived from RNA sequencing, using different class discovery methods. 42 paired breast tumor samples were sequenced on the Illumina Genome Analyzer, and the data were analyzed and prepared with TopHat2 and htseq-count. As reported previously, breast cancer can be grouped into five main groups, each with a distinctive expression profile: a basal epithelial-like group, a HER2 group, a normal breast-like group and two luminal groups. When breast tumor samples were classified with the PAM50 method, the most common subtype was Luminal B, significantly associated with high ESR1 and ERBB2 expression. The Luminal A subtype had significantly high expression of ESR1 and SLC39A6, whereas the HER2 subtype had high expression of the ERBB2 and CCNE1 genes and low luminal epithelial gene expression. The basal-like and normal-like subtypes were associated with low expression of ESR1, PgR and HER2, and had significantly high expression of cytokeratins 5 and 17. Our results were similar to the TCGA breast cancer data and to published studies on breast cancer classification. Classifying breast tumors could add significant prognostic and predictive information to standard parameters and, moreover, identify marker genes for each subtype to find a better therapy for patients with breast cancer.
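PAM50-style subtype calls assign each tumor to the subtype whose centroid its expression profile correlates with best. A minimal nearest-centroid sketch; the three marker genes and centroid values are toy numbers, not the published PAM50 centroids:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length expression vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy centroids over three marker genes: ESR1, ERBB2, KRT5
centroids = {
    "LuminalA": [9.0, 4.0, 2.0],   # high ESR1
    "HER2":     [3.0, 9.5, 2.5],   # high ERBB2
    "Basal":    [1.5, 2.0, 8.5],   # high basal cytokeratin
}

def call_subtype(profile):
    """Assign the subtype whose centroid correlates best with the profile."""
    return max(centroids, key=lambda s: pearson(profile, centroids[s]))

print(call_subtype([8.5, 3.5, 1.8]))  # LuminalA
```

The real PAM50 classifier uses fifty genes and centroids trained on reference cohorts, but the assignment rule is this same nearest-centroid idea.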

Relevance:

30.00%

Publisher:

Abstract:

This paper re-examines the null of stationarity of the real exchange rate for a panel of seventeen developed OECD countries during the post-Bretton Woods era. Our analysis simultaneously considers both the presence of cross-section dependence and multiple structural breaks, which have not received much attention in previous panel methods for long-run PPP. Empirical results indicate that there is little evidence in favor of the PPP hypothesis when the analysis does not account for structural breaks. This conclusion is reversed when structural breaks are considered in the computation of the panel statistics. We also compute point estimates of the half-life separately for the idiosyncratic and common factor components and find that it is always below one year.
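Half-life estimates of this kind are conventionally derived from the AR(1) persistence of real-exchange-rate deviations: a deviation decays as rho**t, so the time to halve is ln(1/2)/ln(rho). The rho values below are illustrative, not estimates from the paper:

```python
from math import log

def half_life(rho):
    """Periods until an AR(1) deviation with persistence rho falls by half."""
    if not 0 < rho < 1:
        raise ValueError("rho must lie in (0, 1)")
    return log(0.5) / log(rho)

# With monthly data, rho = 0.94 implies a half-life of ~11.2 months,
# i.e. below one year.
print(round(half_life(0.94), 1))  # 11.2
```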

Relevance:

30.00%

Publisher:

Abstract:

Background: Computerised databases of primary care clinical records are widely used for epidemiological research. In Catalonia, the Information System for the Development of Research in Primary Care (SIDIAP) aims to promote the development of research based on high-quality validated data from primary care electronic medical records. Objective: The purpose of this study is to create and validate a scoring system (Registry Quality Score, RQS) that will enable primary care practices (PCPs) to be selected as providers of research-usable data based on the completeness of their registers. Methods: Diseases likely to be representative of common diagnoses seen in primary care were selected for the RQS calculations. The observed/expected cases ratio was calculated for each disease. Once an estimated value for this ratio had been obtained for each of the selected conditions, the ratios were added up to obtain a final RQS. Rate comparisons between observed and published prevalences of diseases not included in the RQS calculations (atrial fibrillation, diabetes, obesity, schizophrenia, stroke, urinary incontinence and Crohn's disease) were used to set the RQS cut-off that enables researchers to select PCPs with research-usable data. Results: Apart from Crohn's disease, all prevalences matched the published figures from the fourth RQS quintile (60th percentile) onwards. This RQS cut-off provided a total population of 1 936 443 (39.6% of the total SIDIAP population). Conclusions: SIDIAP is highly representative of the population of Catalonia in terms of geographical, age and sex distributions. We report the usefulness of rate comparison as a valid method to establish research-usable data within primary care electronic medical records.
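The score described above sums, over a set of sentinel conditions, the ratio of cases a practice has registered to the cases expected from reference prevalence. A sketch of that calculation; the prevalences and counts are invented for illustration:

```python
# Reference prevalences for the sentinel conditions (illustrative values)
expected_prevalence = {"hypertension": 0.20, "asthma": 0.06, "copd": 0.04}

def rqs(practice_population, observed_cases):
    """Registry Quality Score: sum of observed/expected ratios per condition."""
    score = 0.0
    for disease, prev in expected_prevalence.items():
        expected = prev * practice_population
        score += observed_cases.get(disease, 0) / expected
    return score

# A practice registering most expected cases scores close to the number
# of conditions, here len(expected_prevalence) = 3.
print(round(rqs(10_000, {"hypertension": 1_900, "asthma": 540, "copd": 360}), 2))  # 2.75
```

Practices above a chosen cut-off on this score would then be retained as providers of research-usable data.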

Relevance:

30.00%

Publisher:

Abstract:

Recent years have produced great advances in instrumentation technology. The amount of available data has been increasing due to the simplicity, speed and accuracy of current spectroscopic instruments. Most of these data are, however, meaningless without proper analysis. This has been one of the reasons for the growing success of multivariate handling of such data. Industrial data is commonly not designed data; in other words, there is no exact experimental design, but rather the data have been collected as a routine procedure during an industrial process. This places certain demands on the multivariate modeling, as the selection of samples and variables can have an enormous effect. Common approaches to the modeling of industrial data are PCA (principal component analysis) and PLS (projection to latent structures, or partial least squares), but there are also other methods that should be considered. The more advanced methods include multi-block modeling and nonlinear modeling. This thesis shows that the results of data analysis vary according to the modeling approach used, making the selection of the modeling approach dependent on the purpose of the model. If the model is intended to provide accurate predictions, the approach should differ from the case where the purpose of modeling is mostly to obtain information about the variables and the process. For industrial applicability it is essential that the methods are robust and sufficiently simple to apply. In this way the methods and the results can be compared and an approach selected that is suitable for the intended purpose. In this thesis, the data analysis methods are compared using data from different fields of industry. In the first two papers, the multi-block method is considered for data originating from the oil and fertilizer industries, and the results are compared to those from PLS and priority PLS.
The third paper considers the applicability of multivariate models to process control for a reactive crystallization process. In the fourth paper, nonlinear modeling is examined with a data set from the oil industry. The response has a nonlinear relation to the descriptor matrix, and the results are compared between linear modeling, polynomial PLS and nonlinear modeling using nonlinear score vectors.
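PCA, the first of the common approaches named above, projects the data onto orthogonal directions of maximal variance. A dependency-free sketch extracting the first principal component by power iteration on the covariance matrix; a real analysis would use an SVD from a numerical library:

```python
def first_pc(rows, iters=200):
    """First principal component of a list-of-lists data matrix."""
    n, p = len(rows), len(rows[0])
    # Mean-center each variable (column)
    means = [sum(r[j] for r in rows) / n for j in range(p)]
    x = [[r[j] - means[j] for j in range(p)] for r in rows]
    # Sample covariance matrix
    cov = [[sum(x[i][a] * x[i][b] for i in range(n)) / (n - 1)
            for b in range(p)] for a in range(p)]
    # Power iteration converges to the dominant eigenvector
    v = [1.0] * p
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(p)) for a in range(p)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return v

# Two perfectly correlated variables: the first PC points along the
# diagonal, ~[0.707, 0.707] up to sign.
data = [[1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [4.0, 4.0]]
pc = first_pc(data)
print([round(abs(c), 3) for c in pc])  # [0.707, 0.707]
```

PLS differs in that the directions are chosen to maximize covariance with a response variable rather than the variance of the descriptors alone.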

Relevance:

30.00%

Publisher:

Abstract:

Especially in global enterprises, key data is fragmented across multiple Enterprise Resource Planning (ERP) systems, leaving the data inconsistent, fragmented and redundant across the various systems. Master Data Management (MDM) is a concept that creates cross-references between customers, suppliers and business units, and enables corporate hierarchies and structures. The overall goal of MDM is the ability to create an enterprise-wide consistent data model, which enables analyzing and reporting customer and supplier data. The goal of the study was to define the properties and success factors of a master data system. The theoretical background was based on the literature, and the case consisted of enterprise-specific needs and demands. The theoretical part presents the concept, background and principles of MDM and then the phases of system planning and implementation projects. The case part consists of the background, a definition of the as-is situation, the project definition and evaluation criteria, and concludes with the key results of the thesis. The final chapter, Conclusions, combines common principles with the results of the case. The case part divided the important factors of the system into success factors, technical requirements and business benefits. To clarify the project and find funding for it, the business benefits have to be defined and their realization monitored. The thesis identified six success factors for the MDM system: a well-defined business case; data management and monitoring; defined and maintained data models and structures; customer and supplier data governance, delivery and quality; commitment; and continuous communication with the business. Technical requirements emerged several times during the thesis and therefore cannot be ignored in the project. The Conclusions chapter goes through these factors on a general level. The success factors and technical requirements are related to the essentials of MDM: governance, action and quality.
This chapter could be used as guidance in a master data management project.
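The cross-referencing that MDM creates can be pictured as a golden record that maps one master identifier to the local identifiers each source system uses. A minimal sketch; the system names and identifiers below are invented for illustration:

```python
# One master customer record with cross-references into source systems
master_customers = {
    "MDM-0001": {
        "name": "Acme Oy",
        "xref": {"erp_eu": "CUST-4711", "erp_us": "C0098", "crm": "A-77"},
    },
}

def local_id(master_id, system):
    """Resolve a master id to a source system's local identifier."""
    return master_customers[master_id]["xref"].get(system)

def master_of(system, local):
    """Reverse lookup: which master record owns this local id?"""
    for mid, rec in master_customers.items():
        if rec["xref"].get(system) == local:
            return mid
    return None

print(local_id("MDM-0001", "erp_us"))  # C0098
print(master_of("crm", "A-77"))        # MDM-0001
```

In a real MDM system these mappings live in a governed repository with matching, survivorship and stewardship rules rather than a hand-maintained table.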

Relevance:

30.00%

Publisher:

Abstract:

This thesis consists of three main theoretical themes: quality of data, success of information systems, and metadata in data warehousing. Loosely defined, metadata is descriptive data about data and, in this thesis, master data means reference data about customers, products, etc. The objective of the thesis is to contribute to the implementation of a metadata management solution for an industrial enterprise. The metadata system incorporates a repository, integration, delivery and access tools, as well as semantic rules and procedures for master data maintenance. It aims to improve the maintenance processes and the quality of hierarchical master data in the case company's information systems. That should benefit the whole organization through improved information quality, especially in cross-system data consistency, and through more efficient and effective data management processes. As the result of this thesis, the requirements for the metadata management solution in the case were compiled, and the success of the new information system and the implementation project was evaluated.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: In recent decades, early diagnosis of childhood cancer has taken an important place on the international agenda. The authors of this study evaluated a group of medical students in Recife, Brazil, regarding knowledge and practices related to the early diagnosis of common childhood cancers. METHODS: Cross-sectional study with a sample of 82 medical students, from a total of 86 eligible subjects. Data were collected using self-completed questionnaires. Subgroups were defined according to knowledge of the theme and students' perceptions of their own skills and interest in learning. RESULTS: 74.4% of the sample demonstrated a minimum level of knowledge. The group without minimum knowledge or self-perceived competence to identify suspected cases (23.3%) was in the worst position to perform early diagnosis. All subjects expressed interest in learning more about this topic. CONCLUSIONS: Despite acceptable levels of knowledge among these medical students, the definition of central aspects of the teaching and learning processes would be useful for training physicians with the skills for diagnosing and treating pediatric cancers.

Relevance:

30.00%

Publisher:

Abstract:

A company's shared business view refers to the organization's ability to understand the essential elements of its business and to ensure that employees and the company's customers have a positive and consistent image and experience of the organization. As the result of this Master's thesis, a measurement instrument was developed for assessing the state of the shared business view in a company. In addition, the thesis examines the role of knowledge management in the development of a shared business view. The research data were collected through an Internet survey that received 158 responses, and were analyzed with statistical methods. The results strongly suggest that knowledge sharing and networking have a statistically significant effect on the development of a shared business view. For this reason, companies should integrate knowledge management principles into their strategies and create a systematic model that encourages the organization to share knowledge and to network.

Relevance:

30.00%

Publisher:

Abstract:

After decades of mergers and acquisitions and successive technology trends such as CRM, ERP and DW, the data in enterprise systems is scattered and inconsistent. Global organizations face the challenge of addressing local uses of shared business entities, such as customer and material, while at the same time maintaining a consistent, unique and consolidated view of financial indicators. In addition, current enterprise systems do not accommodate the pace of organizational change, and immense efforts are required to maintain data. When it comes to systems integration, ERPs are considered "closed" and expensive. Data structures are complex, and the "out-of-the-box" integration options offered are not based on industry standards. Therefore, expensive and time-consuming projects are undertaken in order to have the required data flowing according to the needs of business processes. Master Data Management (MDM) emerges as a discipline focused on ensuring long-term data consistency. Presented as a technology-enabled business discipline, it emphasizes business process and governance to model and maintain the data related to key business entities. There are immense technical and organizational challenges in accomplishing the "single version of the truth" MDM mantra. Adding one central repository of master data might prove unfeasible in some scenarios, so an incremental approach is recommended, starting from the areas most critically affected by data issues. This research aims at understanding the current literature on MDM and contrasting it with views from professionals. The data collected from interviews revealed details of the complexities of data structures and data management practices in global organizations, reinforcing the call for more in-depth research on the organizational aspects of MDM.
The most difficult piece of master data to manage is the "local" part: the attributes related to the sourcing and storing of materials in one particular warehouse in the Netherlands, or a complex set of pricing rules for a subsidiary of a customer in Brazil. From a practical perspective, this research evaluates one MDM solution under development at a Finnish IT solution provider. By applying an existing assessment method, the research attempts to provide the company with one possible tool to evaluate its product from a vendor-agnostic perspective.

Relevance:

30.00%

Publisher:

Abstract:

This thesis describes the creation of a pipework data structure for design system integration. The work was carried out at a pulp and paper plant delivery company, with global engineering network operations in mind. A use case of process design feeding into 3D pipework design is introduced, including the influence of subcontracting engineering offices. The company's data element list was gathered through key-person interviews, and the results were processed into a pipework data element list. Inter-company co-operation took place in a standardization association, and a common standard for pipework data elements was found. As a result, an inter-company pipework data element list is introduced. Further use of the list, its development, and its relations to design software vendors are evaluated.
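A pipework data element list of the kind described can be pictured as a shared record schema that both process design and 3D pipework design tools populate. The fields below are illustrative guesses, not the elements agreed in the standardization association:

```python
from dataclasses import dataclass, asdict

@dataclass
class PipeworkElement:
    line_id: str           # pipeline identifier
    nominal_diameter: int  # DN, in millimetres
    pressure_class: str    # e.g. "PN16"
    material: str          # e.g. "AISI 316L"
    insulated: bool

# One element as it might be exchanged between design systems
element = PipeworkElement("P-1001", 100, "PN16", "AISI 316L", True)
print(asdict(element)["nominal_diameter"])  # 100
```

Agreeing on such a schema across companies is what lets subcontracting offices and software vendors exchange pipework data without per-project mapping work.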

Relevance:

30.00%

Publisher:

Abstract:

Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

30.00%

Publisher:

Abstract:

Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014