951 results for knowledge discovery
Abstract:
The traditional business models and development methods that were successful in the industrial era no longer satisfy the needs of modern IT companies. Due to the rapid nature of IT markets, the uncertainty surrounding the success of new innovations and the overwhelming competition from established companies, startups need to make quick decisions and eliminate wasted resources more effectively than ever before. There is a need for an empirical basis on which to build business models and to evaluate presumptions regarding value and profit. Less than ten years ago, the Lean software development principles and practices became widely known in academic circles. Those practices help startup entrepreneurs validate their learning, test their assumptions and become increasingly dynamic and flexible. What is special about today's software startups is that they are increasingly individual. Quantitative research studies on the details of Lean startups are available. Broad research with hundreds of companies presented in a few charts is informative, but a detailed study of fewer examples gives insight into the way software entrepreneurs see the Lean startup philosophy and how they describe it in their own words. This thesis focuses on the early phases of Lean software startups, namely Customer Discovery (discovering a valuable solution to a real problem) and Customer Validation (being in a good market with a product which satisfies that market). The thesis first offers a compact introduction to the Lean software startup concept for a reader who is not previously familiar with the term. The Lean startup philosophy is then put to a real-life test, based on interviews with four Finnish Lean software startup entrepreneurs.
The interviews reveal 1) whether the Lean startup philosophy is actually valuable to them, 2) how the theory can be practically implemented in real life and 3) whether theoretical Lean startup knowledge compensates for a lack of entrepreneurship experience. The reader becomes familiar with the key elements and tools of Lean startups, as well as their mutual connections. The thesis explains why Lean startups waste less time and money than many other startups. The thesis, especially its research sections, aims to provide data and analysis simultaneously.
Abstract:
The open access movement and the open source software movement play an important role in knowledge creation, knowledge management and knowledge dissemination. Scholarly communication and publishing are increasingly taking place in the electronic environment. With a growing proportion of the scholarly record now existing only in digital format, serious issues regarding access and preservation are being raised that are central to future scholarship. Institutional Repositories provide access to past, present and future scholarly literature and research documentation; ensure its preservation; assist users in discovery and use; and offer educational programs that enable users to develop lifelong literacy. This paper explores how the IR of Cochin University of Science & Technology supports the scientific community in knowledge creation, knowledge management and knowledge dissemination.
Abstract:
Abstract taken from the publication.
Abstract:
There has been a clear lack of common data exchange semantics for inter-organisational workflow management systems, where research has mainly focused on technical issues rather than language constructs. This paper presents the neutral data exchange semantics required for workflow integration within the AXAEDIS framework, and presents the mechanism for object discovery from the object repository where little or no knowledge about the object is available. The paper also presents a workflow-independent integration architecture with the AXAEDIS framework.
Abstract:
Routine computer tasks are often difficult for older adult computer users to learn and remember. People tend to learn new tasks by relating new concepts to existing knowledge. However, even for 'basic' computer tasks there is little, if any, existing knowledge on which older adults can base their learning. This paper investigates a custom file management interface that was designed to aid discovery and learnability by providing interface objects that are familiar to the user. A study was conducted which examined the differences between older and younger computer users when undertaking routine file management tasks using the standard Windows desktop as compared with the custom interface. Results showed that older adult computer users requested help more than ten times as often as younger users when using a standard Windows/mouse configuration, made more mistakes and also required significantly more confirmations than younger users. The custom interface showed improvements over the standard Windows/mouse configuration, with fewer confirmations and less help being required. Hence, there is potential for an interface that closely mimics the real world to improve computer accessibility for older adults, aiding self-discovery and learnability.
Abstract:
This paper presents a hierarchical clustering method for semantic Web service discovery. The method aims to improve the accuracy and efficiency of traditional service discovery based on the vector space model. Each Web service is converted into a standard vector representation from its service description document. With the help of WordNet, a semantic analysis is conducted to reduce the dimensionality of the term vector and to semantically expand the user's service request. The process and algorithm of hierarchical-clustering-based semantic Web service discovery are discussed, and the approach is validated on a dataset.
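As a rough illustration of the pipeline this abstract describes, the following sketch clusters toy service descriptions by cosine similarity over term-frequency vectors using single-link agglomerative clustering. The service descriptions, the threshold value and the plain whitespace tokenization are invented simplifications; the paper's actual method additionally uses WordNet for dimensionality reduction and query expansion, which is omitted here.

```python
# Minimal sketch: vector-space clustering of service descriptions
# (toy data and parameters; WordNet-based semantics omitted).
from collections import Counter
import math

def vectorize(doc):
    # term-frequency vector over lowercase whitespace tokens
    return Counter(doc.lower().split())

def cosine(a, b):
    # cosine similarity between two sparse term-frequency vectors
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def agglomerative(docs, threshold=0.25):
    # single-link agglomerative clustering: repeatedly merge the
    # closest pair of clusters until no pair exceeds the threshold
    vecs = [vectorize(d) for d in docs]
    clusters = [[i] for i in range(len(docs))]
    while len(clusters) > 1:
        best, pair = -1.0, None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                sim = max(cosine(vecs[a], vecs[b])
                          for a in clusters[i] for b in clusters[j])
                if sim > best:
                    best, pair = sim, (i, j)
        if best < threshold:
            break
        i, j = pair
        clusters[i] += clusters[j]
        del clusters[j]
    return clusters

services = [
    "weather forecast temperature service",
    "temperature and weather report",
    "currency exchange rate conversion",
]
print(agglomerative(services))  # the two weather services cluster together
```

The same skeleton extends naturally to TF-IDF weighting or to complete-link merging by changing the `sim` aggregation from `max` to `min`.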
Abstract:
One of the most pervasive classes of services needed to support e-Science applications is that responsible for the discovery of resources. We have developed a solution to the problem of service discovery in a Semantic Web/Grid setting. We do this in the context of bioinformatics, which is the use of computational and mathematical techniques to store, manage, and analyse the data from molecular biology in order to answer questions about biological phenomena. Our specific application is myGrid (www.mygrid.org.uk), which is developing open source, service-based middleware upon which bioinformatics applications can be built. myGrid is specifically targeted at developing open source, high-level service Grid middleware for bioinformatics.
Abstract:
Scientific workflows are becoming a valuable tool for scientists to capture and automate e-Science procedures. Their success brings the opportunity to publish, share, reuse and repurpose this explicitly captured knowledge. Within the myGrid project, we have identified key resources that can be shared, including complete workflows, fragments of workflows and constituent services. We have examined the alternative ways these can be described by their authors (and subsequent users), and developed a unified descriptive model to support their later discovery. By basing this model on existing standards, we have been able to extend existing Web Service and Semantic Web Service infrastructure whilst still supporting the specific needs of the e-Scientist. myGrid components enable a workflow life-cycle that extends beyond execution to include discovery of previous relevant designs, reuse of those designs, and subsequent publication. Experience with example groups of scientists indicates that this cycle is valuable. The growing number of workflows and services means more work is needed to support the user in effective ranking of search results and to support the repurposing process.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Sickle Cell Disease (SCD) is one of the most prevalent hematological diseases in the world. Despite the immense progress in molecular knowledge about SCD in recent years, few therapeutic options are currently available. Nowadays, treatment is performed mainly with drugs such as hydroxyurea or other fetal hemoglobin inducers and chelating agents. This review summarizes current knowledge about treatment and advancements in drug design aimed at discovering more effective and safe drugs. Patient monitoring methods in SCD are also discussed. © 2011 Bentham Science Publishers Ltd.
Abstract:
In the last decade, the reverse vaccinology approach shifted the paradigm of vaccine discovery from conventional culture-based methods to high-throughput genome-based approaches for the development of recombinant protein-based vaccines against pathogenic bacteria. Besides reaching its main goal of identifying new vaccine candidates, this new procedure also produced a huge amount of molecular knowledge related to them. In the present work, we explored this knowledge in a species-independent way and performed a systematic in silico molecular analysis of more than 100 protective antigens, looking at their sequence similarity, domain composition and protein architecture in order to identify possible common molecular features. This meta-analysis revealed that, despite low sequence similarity, most of the known bacterial protective antigens shared structural/functional Pfam domains as well as specific protein architectures. Based on this, we formulated the hypothesis that the occurrence of these molecular signatures can be predictive of possible protective properties of other proteins in different bacterial species. We tested this hypothesis in Streptococcus agalactiae and identified four new protective antigens. Moreover, in order to provide a second proof of concept for our approach, we used Staphylococcus aureus as a second pathogen and identified five new protective antigens. This new knowledge-driven selection process, named MetaVaccinology, represents the first in silico vaccine discovery tool based on conserved and predictive molecular and structural features of bacterial protective antigens, and it does not depend upon the prediction of their sub-cellular localization.
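A minimal sketch of the kind of signature-based candidate selection the abstract describes: flag proteins whose Pfam domain composition overlaps the domains observed in known protective antigens. All protein names and Pfam accessions below are invented placeholders; a real analysis would use actual Pfam annotations (e.g. from an annotation pipeline) and the curated antigen set from the paper.

```python
# Hypothetical toy data: Pfam domain sets for known protective antigens
# and for candidate proteins of a new bacterial species.
known_protective = {
    "antigenA": {"PF00001", "PF00002"},
    "antigenB": {"PF00003"},
}
candidates = {
    "cand1": {"PF00002", "PF00099"},  # shares a signature domain
    "cand2": {"PF00077"},             # no overlap with signatures
}

# Collect the "signature" domains seen across protective antigens
signatures = set().union(*known_protective.values())

def predicted_protective(candidates, signatures):
    # Flag a candidate if any of its domains matches a signature domain
    return sorted(name for name, domains in candidates.items()
                  if domains & signatures)

print(predicted_protective(candidates, signatures))  # only cand1 is flagged
```

In practice the paper's criterion also considers protein architecture (the ordered arrangement of domains), not just domain occurrence, so this set-intersection test is deliberately the simplest possible variant.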
Abstract:
This thesis investigates methods and software architectures for discovering the typical and frequently occurring structures used for organizing knowledge in the Web. We identify these structures as Knowledge Patterns (KPs). KP discovery needs to address two main research problems: the heterogeneity of sources, formats and semantics in the Web (i.e., the knowledge soup problem) and the difficulty of drawing a relevant boundary around data that captures the meaningful knowledge with respect to a certain context (i.e., the knowledge boundary problem). Hence, we introduce two methods that provide different solutions to these two problems by tackling KP discovery from two different perspectives: (i) the transformation of KP-like artifacts into KPs formalized as OWL2 ontologies; (ii) the bottom-up extraction of KPs by analyzing how data are organized in Linked Data. The two methods address the knowledge soup and boundary problems in different ways. The first method is based on a purely syntactic transformation step of the original source to RDF, followed by a refactoring step whose aim is to add semantics to the RDF by selecting meaningful RDF triples. The second method draws boundaries around RDF in Linked Data by analyzing type paths. A type path is a possible route through an RDF graph that takes into account the types associated with the nodes of the path. We then present K~ore, a software architecture conceived as the basis for developing KP discovery systems and designed according to two software architectural styles, i.e., Component-based and REST. Finally, we provide an example of KP reuse based on Aemoo, an exploratory search tool which exploits KPs for performing entity summarization.
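To make the "type path" idea concrete, the sketch below replaces each node on a one-step path through a tiny RDF graph with its rdf:type, so that paths over different entities can be grouped by their type-level shape. The triples, type assertions and vocabulary are invented toy data, and real Linked Data would of course use full IRIs and longer paths.

```python
# Hypothetical toy RDF data: (subject, predicate, object) triples
# plus rdf:type assertions for each node.
triples = [
    ("barack_obama", "bornIn", "honolulu"),
    ("honolulu", "locatedIn", "hawaii"),
    ("angela_merkel", "bornIn", "hamburg"),
]
types = {
    "barack_obama": "Person",
    "angela_merkel": "Person",
    "honolulu": "City",
    "hamburg": "City",
    "hawaii": "Region",
}

def type_paths(triples, types):
    # Abstract each triple into a type path (type(s), predicate, type(o));
    # distinct entities with the same types collapse into one path.
    return {(types[s], p, types[o]) for s, p, o in triples}

# Two different people born in two different cities yield a single
# (Person, bornIn, City) path, which is what makes the abstraction useful
# for drawing boundaries around recurring structures.
print(sorted(type_paths(triples, types)))
```

Longer type paths (chains of several predicates) can be built by composing these one-step abstractions along walks in the graph.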
Abstract:
A web service is a collection of industry standards that enable reusability of services and interoperability of heterogeneous applications. The UMLS Knowledge Source (UMLSKS) Server provides remote access to the UMLSKS and related resources. We propose a Web Services Architecture that encapsulates the UMLSKS-API and makes it available in distributed and heterogeneous environments. This is a first step towards intelligent and automatic discovery and invocation of UMLS services by computer systems in distributed environments such as the Web.
Abstract:
High-throughput discovery of ligand scaffolds for target proteins can enormously accelerate the development of leads and drug candidates. Here we describe an innovative workflow for the discovery of high-affinity ligands for the benzodiazepine-binding site on the as-yet uncrystallized mammalian GABAA receptors. The procedure includes chemical biology techniques that may be generally applied to other proteins. Prerequisites are a ligand that can be chemically modified with cysteine-reactive groups, knowledge of the amino acid residues contributing to the drug-binding pocket, and crystal structures either of proteins homologous to the target protein or, better, of the target itself. Part of the protocol is virtual screening, which without additional rounds of optimization often yields only low-affinity ligands, even when the target protein has been crystallized. Here we show how the integration of functional data into structure-based screening dramatically improves the performance of virtual screening. Thus, lead compounds with 14 different scaffolds were identified on the basis of an updated structural model of the diazepam-bound state of the GABAA receptor. Some of these compounds show considerable preference for the α3β2γ2 GABAA receptor subtype.
Abstract:
The next-generation neutrino observatory proposed by the LBNO collaboration will address fundamental questions in particle and astroparticle physics. The experiment consists of a far detector, in its first stage a 20 kt LAr double-phase TPC and a magnetised iron calorimeter situated 2300 km from CERN, and a near detector based on a high-pressure argon gas TPC. The long baseline provides a unique opportunity to study neutrino flavour oscillations over their 1st and 2nd oscillation maxima, exploring the L/E behaviour and distinguishing effects arising from δCP and matter. In this paper we re-evaluate the physics potential of this setup for determining the mass hierarchy (MH) and discovering CP violation (CPV), using a conventional neutrino beam from the CERN SPS with a power of 750 kW. We use conservative assumptions on the knowledge of oscillation parameter priors and systematic uncertainties. The impact of each systematic error and of the precision of the oscillation priors is shown. We demonstrate that the first stage of LBNO can unambiguously determine the MH at >5σ C.L. over the whole phase space. We show that the statistical treatment of the experiment is of very high importance, leading to the conclusion that LBNO has ~100% probability to determine the MH in at most 4-5 years of running. Since knowledge of the MH is indispensable for extracting δCP from the data, the first LBNO phase can convincingly give evidence for CPV at the 3σ C.L. using today's knowledge of the oscillation parameters and realistic assumptions on the systematic uncertainties.