990 results for Discovery Tools


Relevance:

30.00%

Abstract:

The emergence of mass spectrometry-based proteomics has revolutionized the study of proteins and their abundances, functions, interactions, and modifications. However, in a multicellular organism, it is difficult to monitor dynamic changes in protein synthesis in a specific cell type within its native environment. In this thesis, we describe methods that enable the metabolic labeling, purification, and analysis of proteins in specific cell types and during defined periods in live animals. We first engineered a eukaryotic phenylalanyl-tRNA synthetase (PheRS) to selectively recognize the unnatural L-phenylalanine analog p-azido-L-phenylalanine (Azf). Using Caenorhabditis elegans, we expressed the engineered PheRS in a cell type of choice (i.e., body wall muscles, intestinal epithelial cells, neurons, or pharyngeal muscles), permitting proteins in those cells -- and only those cells -- to be labeled with azides. Labeled proteins can then be subjected to "click" conjugation to cyclooctyne-functionalized affinity probes, separation from the rest of the protein pool, and identification by mass spectrometry. By coupling our methodology with heavy isotopic labeling, we successfully identified proteins -- including proteins with previously unknown expression patterns -- expressed in targeted subsets of cells. While cell types like body wall or pharyngeal muscles can be targeted with a single promoter, many cells cannot; spatiotemporal selectivity typically results from the combinatorial action of multiple regulators. To enhance spatiotemporal selectivity, we next developed a two-component system that drives overlapping -- but not identical -- patterns of expression of engineered PheRS, restricting labeling to cells that express both elements. Specifically, we developed a split-intein-based split-PheRS system for highly efficient PheRS reconstitution through protein splicing. Together, these tools represent a powerful approach for the unbiased discovery of proteins uniquely expressed in a subset of cells at specific developmental stages.
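The quantitative step in this kind of workflow lends itself to a small illustration. The following Python sketch is a toy example, not the authors' pipeline: it flags candidate cell-type-specific proteins by comparing enrichment in the labeled sample against a no-labeling control. The protein names are used purely as hypothetical placeholders, and the intensity values and 2-fold cutoff are invented.

# Toy illustration (not the authors' pipeline): after click-chemistry
# enrichment, proteins from the targeted cells should show high
# intensity in the labeled sample relative to a control without the
# engineered PheRS. All numbers and the cutoff below are invented.
protein_intensities = {        # protein -> (labeled sample, control)
    "myo-3": (8.4, 1.1),       # hypothetical muscle-enriched protein
    "unc-54": (6.2, 0.9),
    "rpl-1": (1.2, 1.0),       # hypothetical background protein
}

FOLD_CUTOFF = 2.0
for protein, (labeled, control) in protein_intensities.items():
    enrichment = labeled / control
    if enrichment >= FOLD_CUTOFF:
        print(f"{protein}: enriched {enrichment:.1f}-fold -> candidate "
              "cell-type-specific protein")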

Relevance:

30.00%

Abstract:

Decades of costly failures in translating drug candidates from preclinical disease models to human therapeutic use warrant reconsideration of the priority placed on animal models in biomedical research. In this consensus report, which follows an international workshop attended by experts from academia, government institutions, research funding bodies, and the corporate and nongovernmental organisation (NGO) sectors, we analyse five disease areas with major unmet needs for new treatments as case studies. In view of the scientifically driven transition towards a human pathway-based paradigm in toxicology, a similar paradigm shift appears justified in biomedical research. There is a pressing need for an approach that strategically implements advanced, human biology-based models and tools to understand disease pathways at multiple biological scales. We present recommendations to help achieve this.

Relevance:

20.00%

Abstract:

The majority of the world’s citizens now live in cities. Although urban planning can thus be thought of as a field with significant ramifications for the human condition, many practitioners feel that it has reached a crossroads in thought leadership between traditional practice and a new, more participatory and open approach. Conventional ways to engage people in participatory planning exercises are limited in reach and scope. At the same time, socio-cultural trends and technology innovation offer opportunities to re-think the status quo in urban planning. Neogeography introduces tools and services that allow non-geographers to use advanced geographical information systems. Similarly, is there potential for the emergence of a neo-planning paradigm in which urban planning is carried out through active civic engagement, aided by Web 2.0 and new media technologies, thus redefining the role of practicing planners? This paper traces a number of evolving links between urban planning, neogeography, and information and communication technology. Two significant trends – participation and visualisation – with direct implications for urban planning are discussed. Combining advanced participation and visualisation features, the popular virtual reality environment Second Life is then introduced as a test bed to explore a planning workshop and an integrated software event framework to assist narrative generation. We discuss an approach that harnesses and analyses narratives, using virtual reality logging to make transparent how users understand and interpret proposed urban designs.
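Because the abstract names the event-logging framework without detailing it, the Python sketch below is an assumed, minimal illustration of the general idea: raw virtual-world events are logged and then grouped into per-participant timelines that can serve as narratives for analysis. All event types, field names, and log entries are invented.

# Minimal sketch (assumed structure, not the paper's framework) of
# turning raw virtual-world event logs into per-participant narrative
# timelines for later analysis.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Event:
    timestamp: float    # seconds since the workshop started
    avatar: str         # participant's avatar name
    action: str         # e.g. "move", "chat", "inspect_model"
    detail: str         # free-text payload (chat line, object id)

log = [
    Event(12.0, "ava01", "move", "plaza"),
    Event(15.5, "ava01", "inspect_model", "tower_proposal_B"),
    Event(16.2, "ava01", "chat", "this tower blocks the river view"),
    Event(18.0, "ava02", "inspect_model", "tower_proposal_B"),
]

# Group events by avatar and order them in time: each sequence is a
# simple narrative of how one participant explored the proposed design.
narratives = defaultdict(list)
for e in sorted(log, key=lambda e: e.timestamp):
    narratives[e.avatar].append(f"[{e.timestamp:6.1f}s] {e.action}: {e.detail}")

for avatar, steps in narratives.items():
    print(avatar)
    for step in steps:
        print("  " + step)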

Relevance:

20.00%

Abstract:

With the advent of Service-Oriented Architecture, Web services have gained tremendous popularity. Given the large number of Web services available, finding an appropriate Web service that meets a user's requirements is a challenge, which warrants an effective and reliable process of Web service discovery. A considerable body of research has emerged to develop methods that improve the accuracy of Web service discovery and match the best service. The discovery process typically suggests many individual services that each partially fulfil the user's interest. Considering the semantic relationships of the words used to describe services, together with their input and output parameters, can lead to more accurate Web service discovery, and appropriately linking the individually matched services can then fully satisfy the user's requirements. This research integrates a semantic model and a data mining technique to enhance the accuracy of Web service discovery through a novel three-phase methodology.

The first phase performs match-making to find Web services that are semantically similar to a user query. To perform semantic analysis on the content of Web Service Description Language (WSDL) documents, a support-based latent semantic kernel is constructed, using an innovative concept of binning and merging, from a large collection of text documents covering diverse domains of knowledge. A generic latent semantic kernel built from a large number of terms helps uncover hidden meaning in the query terms that could not otherwise be found.

A single Web service is sometimes unable to fully satisfy the user's requirement; in such cases, a composition of multiple inter-related Web services is presented to the user. The second phase checks whether multiple Web services can be linked and, once feasibility is established, aims to provide the user with the best composition. In this link analysis phase, the Web services are modelled as nodes of a graph, and an all-pairs shortest-path algorithm is applied to find the optimum path at the minimum traversal cost.

The third phase, system integration, combines the results of the two preceding phases using an original fusion algorithm in the fusion engine. Finally, the recommendation engine, an integral part of the system integration phase, makes the final recommendations of individual and composite Web services to the user.

To evaluate the proposed method, extensive experimentation has been performed. Results of the proposed support-based semantic kernel method of Web service discovery are compared with those of a standard keyword-based information-retrieval method and a clustering-based machine-learning method; the proposed method outperforms both. Experimental results and statistical analysis also show that the best Web service compositions are obtained by considering 10 to 15 of the Web services found in phase I for linking, and that the fusion engine boosts the accuracy of Web service discovery by systematically combining the inputs from the semantic analysis (phase I) and the link analysis (phase II). Overall, the accuracy of Web service discovery with the proposed method shows a significant improvement over traditional discovery methods.
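Since the abstract describes the three phases only at a high level, the following Python sketch is a toy reconstruction under explicit assumptions: plain latent semantic analysis (TF-IDF plus truncated SVD) stands in for the thesis's support-based latent semantic kernel, Floyd-Warshall is used as one concrete all-pairs shortest-path algorithm, and a simple weighted sum replaces the original fusion algorithm. The service descriptions, composition costs, and weights are all invented for illustration.

# Toy three-phase discovery pipeline: LSA stands in for the
# support-based latent semantic kernel (phase I), Floyd-Warshall for
# the all-pairs shortest-path step (phase II), and a weighted sum for
# the fusion algorithm (phase III). All data below is made up.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

services = [                       # toy WSDL-style descriptions
    "book flight ticket airline reservation",
    "hotel room booking reservation",
    "currency exchange rate conversion",
    "weather forecast city temperature",
]
query = "book a flight and a hotel room for a trip"

# ---- Phase I: semantic match-making ----------------------------------
X = TfidfVectorizer().fit_transform(services + [query])
svd = TruncatedSVD(n_components=3, random_state=0)  # latent semantic space
Z = svd.fit_transform(X)
query_vec, service_vecs = Z[-1:], Z[:-1]
semantic_scores = cosine_similarity(query_vec, service_vecs).ravel()

# ---- Phase II: link analysis via all-pairs shortest paths ------------
# cost[i][j] = invented cost of chaining service i's outputs into
# service j's inputs; inf means the parameters are incompatible.
INF = float("inf")
cost = np.array([
    [0.0, 1.0, INF, 2.5],
    [1.0, 0.0, 2.0, INF],
    [INF, 2.0, 0.0, 1.5],
    [2.5, INF, 1.5, 0.0],
])
dist = cost.copy()
for k in range(len(services)):     # Floyd-Warshall relaxation
    dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])

# ---- Phase III: fusion and recommendation ----------------------------
# Reward services that are semantically relevant AND cheap to compose
# with the top semantic match (the 0.7/0.3 weights are arbitrary).
anchor = int(semantic_scores.argmax())
compose_scores = 1.0 / (1.0 + dist[anchor])
fused = 0.7 * semantic_scores + 0.3 * compose_scores
for i in np.argsort(-fused):
    print(f"{services[i][:30]:32s} fused={fused[i]:.3f}")

In a real system, the phase II edge costs would be derived from the input/output parameter compatibility between services, as the abstract describes, rather than hand-coded as above.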

Relevance:

20.00%

Abstract:

Single nucleotide polymorphisms (SNPs) are single-base genetic differences between individuals that contribute significantly to human variation, including physical characteristics such as height and appearance as well as less obvious traits such as personality, behaviour, and disease susceptibility. SNPs can also significantly influence responses to pharmacotherapy and whether drugs will produce adverse reactions. The development of new drugs can be made far cheaper and more rapid by selecting participants in drug trials based on their genetically determined response to drugs. Technology that can rapidly and inexpensively genotype thousands of samples for thousands of SNPs at a time is therefore in high demand. Since the completion of the Human Genome Project, about 12 million true SNPs have been identified, but most have not yet been associated with disease susceptibility or drug response. Testing a patient who requires treatment for the appropriate drug-response SNPs would enable individualised therapy, with the right drug and dose administered correctly the first time. Many pharmaceutical companies are also interested in identifying SNPs associated with polygenic traits so that novel therapeutic targets can be discovered. This review focuses on technologies that can be used for genotyping known SNPs as well as for the discovery of novel SNPs associated with drug response.
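As a toy illustration of the readout step that many genotyping platforms share, the Python sketch below calls a genotype from a pair of allele-specific signal intensities. The intensity values and fixed thresholds are invented; real platforms fit cluster models to many samples rather than using hard cutoffs.

# Toy sketch of a two-channel SNP readout: each sample yields two
# allele-specific intensities (allele A vs. allele B), and the genotype
# is called from their relative contribution. Thresholds are invented.
import math

def call_genotype(intensity_a: float, intensity_b: float) -> str:
    """Call AA/AB/BB from two allele-specific intensities."""
    theta = math.atan2(intensity_b, intensity_a) / (math.pi / 2)  # 0..1
    if theta < 0.33:
        return "AA"
    if theta > 0.67:
        return "BB"
    return "AB"

samples = {"s1": (950.0, 40.0), "s2": (480.0, 510.0), "s3": (35.0, 880.0)}
for sample, (a, b) in samples.items():
    print(sample, call_genotype(a, b))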