995 results for neighbor discovery


Relevance: 20.00%

Abstract:

The ultimate goal of this study has been to construct metabolically engineered microbial strains capable of fermenting glucose into the pentitols D-arabitol and, especially, xylitol. The path chosen to achieve this goal required the discovery, isolation and sequencing of at least two pentitol phosphate dehydrogenases of different specificity, followed by cloning and expression of their genes and characterization of the recombinant arabitol and xylitol phosphate dehydrogenases. An enzyme of previously unknown specificity, D-arabitol phosphate dehydrogenase (APDH), was discovered in Enterococcus avium. The enzyme was purified to homogeneity from E. avium strain ATCC 33665. SDS/PAGE revealed a molecular mass of 41 ± 2 kDa, whereas a molecular mass of 160 ± 5 kDa was observed under non-denaturing conditions, implying that APDH may exist as a tetramer of identical subunits. Purified APDH was found to have narrow substrate specificity, converting only D-arabitol 1-phosphate and D-arabitol 5-phosphate into D-xylulose 5-phosphate and D-ribulose 5-phosphate, respectively, in the oxidative reaction. Both NAD+ and NADP+ were accepted as co-factors. Based on the partial protein sequences, the gene encoding APDH was cloned. Homology comparisons place APDH within the medium-chain dehydrogenase family. Unlike most members of this family, APDH requires Mn2+ but no Zn2+ for enzymatic activity. The DNA sequence surrounding the gene suggests that it belongs to an operon that also contains several components of a phosphotransferase system (PTS). The apparent role of the enzyme is to participate in arabitol catabolism via an arabitol phosphate route similar to the previously described ribitol and xylitol catabolic routes. Xylitol phosphate dehydrogenase (XPDH) was isolated from Lactobacillus rhamnosus strain ATCC 15820. The enzyme was partially sequenced, and the amino acid sequences were used to isolate the gene encoding it. Homology comparisons of the deduced amino acid sequence of L. rhamnosus XPDH revealed several similar enzymes in the genomes of various species of Gram-positive bacteria. Two enzymes of Clostridium difficile and one enzyme of Bacillus halodurans were cloned, and their substrate specificities were compared with that of L. rhamnosus XPDH. One of the C. difficile XPDH enzymes and the L. rhamnosus XPDH showed the highest selectivity towards D-xylulose 5-phosphate. A known transketolase-deficient, D-ribose-producing mutant of Bacillus subtilis (ATCC 31094) was further modified by disrupting its rpi (D-ribose phosphate isomerase) gene to create a D-ribulose- and D-xylulose-producing strain. Expression of the E. avium APDH and of the L. rhamnosus and C. difficile XPDH in this D-ribulose- and D-xylulose-producing B. subtilis strain resulted in strains capable of converting D-glucose into D-arabitol and xylitol, respectively. The D-arabitol yield on D-glucose was 38% (w/w). Xylitol production was accompanied by co-production of ribitol, limiting the xylitol yield to 23%.
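
As a quick arithmetic check on the oligomeric state claimed above, the native-to-subunit mass ratio is close to four; a minimal sketch using only the masses reported in the abstract:

```python
# Oligomeric-state check from the masses reported in the abstract.
subunit_kda = 41.0   # SDS/PAGE (denaturing) mass, +/- 2 kDa
native_kda = 160.0   # mass under non-denaturing conditions, +/- 5 kDa

subunits = native_kda / subunit_kda
print(f"implied subunit count: {subunits:.1f}")  # ~3.9, consistent with a homotetramer
```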

Relevance: 20.00%

Abstract:

In the last decade, huge breakthroughs in genetics, driven by new technology and different statistical approaches, have resulted in a plethora of new disease genes identified for both common and rare diseases. Massively parallel sequencing, commonly known as next-generation sequencing, is the latest advance in genetics and has already facilitated the discovery of the molecular cause of many monogenic disorders. This article describes this new technology and reviews how the approach has been used successfully in patients with skeletal dysplasias. Moreover, it illustrates how the study of rare diseases can inform understanding and therapeutic developments for common diseases such as osteoporosis. © International Osteoporosis Foundation and National Osteoporosis Foundation 2013.

Relevance: 20.00%

Abstract:

Background: Adjuvants enhance or modify an immune response that is made to an antigen. An antagonist of the chemokine CCR4 receptor can display adjuvant-like properties by diminishing the ability of CD4+CD25+ regulatory T cells (Tregs) to down-regulate immune responses. Methodology: Here, we have used protein modelling to create a plausible chemokine receptor model with the aim of using virtual screening to identify potential small molecule chemokine antagonists. A combination of homology modelling and molecular docking was used to create a model of the CCR4 receptor in order to investigate potential lead compounds that display antagonistic properties. Three-dimensional structure-based virtual screening of the CCR4 receptor identified 116 small molecules that were calculated to have a high affinity for the receptor; these were tested experimentally for CCR4 antagonism. Fifteen of these small molecules were shown to inhibit specifically CCR4-mediated cellmigration, including that of CCR4(+) Tregs. Significance: Our CCR4 antagonists act as adjuvants augmenting human T cell proliferation in an in vitro immune response model and compound SP50 increases T cell and antibody responses in vivo when combined with vaccine antigens of Mycobacterium tuberculosis and Plasmodium yoelii in mice.
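
The abstract does not name the docking engine or scoring function used; the sketch below only illustrates the final rank-and-select step of a structure-based virtual screen, with hypothetical compound IDs and scores:

```python
# Illustrative final step of a structure-based virtual screen: rank
# candidates by docking score (lower = stronger predicted binding, a
# common convention) and carry the top N into experimental assays.
# Compound IDs and scores are hypothetical placeholders.
docking_scores = {"cmpd_001": -9.2, "cmpd_002": -6.1, "cmpd_003": -8.7}

TOP_N = 116  # the study carried 116 predicted binders into assays
ranked = sorted(docking_scores.items(), key=lambda kv: kv[1])
hits = [cid for cid, _ in ranked[:TOP_N]]
print(hits)  # ['cmpd_001', 'cmpd_003', 'cmpd_002']
```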

Relevance: 20.00%

Abstract:

Importance of the field: The shift in focus from ligand-based design approaches to target-based discovery over the last two to three decades has been a major milestone in drug discovery research. The field is currently witnessing another major paradigm shift, leaning towards holistic, systems-based approaches rather than reductionist, single-molecule-based methods. The effect of this new trend is likely to be felt strongly in terms of new strategies for therapeutic intervention, new targets individually and in combination, and the design of specific and safer drugs. Computational modeling and simulation form important constituents of new-age biology because they are essential to comprehend the large-scale data generated by high-throughput experiments and to generate hypotheses, which are typically iterated with experimental validation. Areas covered in this review: This review focuses on the repertoire of systems-level computational approaches currently available for target identification. It starts with a discussion of levels of abstraction of biological systems and describes the different modeling methodologies available for this purpose. It then focuses on how such modeling and simulation can be applied to drug target discovery. Finally, it discusses methods for studying other important issues, such as understanding targetability, identifying target combinations and predicting drug resistance, and for considering them during the target identification stage itself. What the reader will gain: The reader will get an account of the various approaches for target discovery and the need for systems approaches, followed by an overview of the different modeling and simulation approaches that have been developed, together with an idea of the promise and limitations of the various approaches and perspectives for future development. Take home message: Systems thinking has now come of age, enabling a 'bird's eye view' of the biological systems under study while at the same time allowing us to 'zoom in', where necessary, for a detailed description of individual components. A number of different methods available for computational modeling and simulation of biological systems can be used effectively for drug target discovery.
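
As a concrete, deliberately toy example of the dynamic (ODE) modeling such reviews survey, the sketch below simulates a one-step pathway and asks how inhibiting a candidate target propagates to pathway output; the species, rate constant and inhibition levels are all hypothetical:

```python
# Toy systems model for target analysis: a drug-inhibitable enzyme
# converts substrate S to product P. Varying the inhibition level shows
# how blocking the candidate target changes pathway output over time.
import numpy as np
from scipy.integrate import solve_ivp

def pathway(t, y, k_cat, inhibition):
    s, p = y
    rate = (1.0 - inhibition) * k_cat * s  # inhibition in [0, 1]
    return [-rate, rate]

for inhibition in (0.0, 0.9):
    sol = solve_ivp(pathway, (0, 10), [1.0, 0.0],
                    args=(0.5, inhibition), t_eval=np.linspace(0, 10, 5))
    print(f"inhibition={inhibition}: product over time {np.round(sol.y[1], 3)}")
```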

Relevance: 20.00%

Abstract:

In this paper we consider the task of prototype selection, whose primary goal is to reduce the storage and computational requirements of the nearest neighbor classifier while achieving better classification accuracy. We propose a solution to the prototype selection problem using techniques from cooperative game theory and demonstrate its efficacy experimentally.
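
The abstract does not spell out the game formulation; as one plausible reading, the sketch below values each training point by a Monte Carlo estimate of its Shapley value in a cooperative game whose payoff is 1-NN validation accuracy, and the highest-valued points would be kept as prototypes:

```python
# Generic cooperative-game valuation of training points (not the paper's
# exact formulation): a point's Shapley value is its average marginal
# contribution to 1-NN validation accuracy over random orderings.
import numpy as np

def nn_accuracy(X_sub, y_sub, X_val, y_val):
    """1-NN accuracy on the validation set using only the given subset."""
    if len(X_sub) == 0:
        return 0.0
    d = np.linalg.norm(X_val[:, None, :] - X_sub[None, :, :], axis=2)
    return float(np.mean(y_sub[np.argmin(d, axis=1)] == y_val))

def shapley_values(X, y, X_val, y_val, n_perm=100, seed=0):
    """Monte Carlo Shapley estimate over random permutations."""
    rng = np.random.default_rng(seed)
    n = len(X)
    phi = np.zeros(n)
    for _ in range(n_perm):
        order = rng.permutation(n)
        prev = 0.0
        for k in range(1, n + 1):
            acc = nn_accuracy(X[order[:k]], y[order[:k]], X_val, y_val)
            phi[order[k - 1]] += acc - prev  # marginal contribution
            prev = acc
    return phi / n_perm

# Prototypes = e.g. the top 20% of training points by Shapley value.
```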

Relevance: 20.00%

Abstract:

This paper addresses the problem of discovering business process models from event logs. Existing approaches to this problem strike various tradeoffs between accuracy and understandability of the discovered models. With respect to the second criterion, empirical studies have shown that block-structured process models are generally more understandable and less error-prone than unstructured ones. Accordingly, several automated process discovery methods generate block-structured models by construction. These approaches, however, intertwine the concern of producing accurate models with that of ensuring their structuredness, sometimes sacrificing the former to ensure the latter. In this paper we propose an alternative approach that separates these two concerns. Instead of directly discovering a structured process model, we first apply a well-known heuristic technique that discovers more accurate but sometimes unstructured (and even unsound) process models, and then transform the resulting model into a structured one. An experimental evaluation shows that our “discover and structure” approach outperforms traditional “discover structured” approaches with respect to a range of accuracy and complexity measures.
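
The two strategies contrasted above can be tried side by side with an off-the-shelf process mining library; the sketch below uses pm4py (our choice for illustration, not a tool named in the paper, and its API is assumed) to juxtapose a structured-by-construction miner with an accuracy-oriented one. The second-phase structuring transformation is not part of pm4py and is omitted:

```python
# "Discover structured" vs. "discover, then structure": the inductive
# miner returns a block-structured process tree by construction, while
# the heuristics miner favors accuracy and may return an unstructured
# (or even unsound) Petri net for a second structuring phase to fix.
import pm4py

log = pm4py.read_xes("log.xes")  # placeholder path to an event log

tree = pm4py.discover_process_tree_inductive(log)       # structured by construction
net, im, fm = pm4py.discover_petri_net_heuristics(log)  # accurate, possibly unstructured
```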

Relevance: 20.00%

Abstract:

Recommender systems aggregate individual user ratings into predictions of products or services that might interest visitors. The quality of this aggregation process crucially affects the user experience and hence the effectiveness of recommenders in e-commerce. We present a characterization of nearest-neighbor collaborative filtering that allows us to disaggregate global recommender performance measures into contributions made by each individual rating. In particular, we formulate three roles (scouts, promoters, and connectors) that capture how users receive recommendations, how items get recommended, and how ratings of these two types are themselves connected, respectively. These roles find direct uses in improving recommendations for users, in better targeting of items and, most importantly, in helping monitor the health of the system as a whole. For instance, they can be used to track the evolution of neighborhoods, to identify rating subspaces that do not contribute (or contribute negatively) to system performance, to enumerate users who are in danger of leaving, and to assess the susceptibility of the system to attacks such as shilling. We argue that the three rating roles presented here provide broad primitives to manage a recommender system and its community.
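
For readers who want the baseline over which these roles are defined, here is a minimal user-based nearest-neighbor collaborative filtering sketch with a toy, hypothetical rating matrix (cosine similarity, similarity-weighted averaging):

```python
# Minimal user-based nearest-neighbor collaborative filtering: predict a
# user's rating of an item as a similarity-weighted average of ratings by
# the most similar users who rated that item. Toy data: rows = users,
# columns = items, 0 = unrated.
import numpy as np

R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [1, 0, 5, 4]], dtype=float)

def predict(u, i, k=2):
    mask = R[:, i] > 0                      # users who rated item i
    mask[u] = False                         # exclude the target user
    sims = np.array([np.dot(R[u], R[v]) /
                     (np.linalg.norm(R[u]) * np.linalg.norm(R[v]) + 1e-9)
                     for v in range(len(R))])
    sims = sims * mask                      # zero out ineligible users
    neighbors = np.argsort(-sims)[:k]       # k most similar raters
    w = sims[neighbors]
    return float(np.dot(w, R[neighbors, i]) / (w.sum() + 1e-9))

print(predict(0, 2))  # predicted rating of user 0 for item 2
```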

Relevance: 20.00%

Abstract:

This thesis describes current and past n-in-one methods and presents three early experimental studies, using mass spectrometry and the triple quadrupole instrument, on the application of n-in-one in drug discovery. The n-in-one strategy pools and mixes samples in drug discovery prior to measurement or analysis. This allows the most promising compounds to be rapidly identified and then analysed. Nowadays, the properties of drugs are characterised earlier and in parallel with pharmacological efficacy. The studies presented here use in vitro methods such as Caco-2 cells and immobilized artificial membrane chromatography for drug absorption and lipophilicity measurements. The high sensitivity and selectivity of liquid chromatography-mass spectrometry are especially important for new analytical methods using n-in-one. In the first study, the fragmentation patterns of ten nitrophenoxy benzoate compounds, a homologous series, were characterised, and the presence of the compounds was determined in a combinatorial library. The influence of one or two nitro substituents and of alkyl chain lengths from methyl to pentyl on collision-induced fragmentation was studied, and interesting structure-fragmentation relationships were detected. Compounds with two nitro groups fragmented more than those with one, whereas less fragmentation was noted in molecules with a longer alkyl chain. The most abundant product ions were nitrophenoxy ions, which were also tested in precursor ion screening of the combinatorial library. In the second study, the immobilized artificial membrane chromatographic method was transferred from ultraviolet detection to mass spectrometric analysis and a new method was developed. Mass spectra were scanned and the chromatographic retention of compounds was analysed using extracted ion chromatograms. When detectors and buffers were changed and n-in-one was included in the method, the results showed good correlation. The results demonstrated that mass spectrometric detection with gradient elution can provide a rapid and convenient n-in-one method for ranking the lipophilic properties of several structurally diverse compounds simultaneously. In the final study, a new method was developed for Caco-2 samples. Compounds were separated by liquid chromatography and quantified by selected reaction monitoring using mass spectrometry. The method was applied to Caco-2 samples in which the absorption of ten chemically and physiologically different compounds was screened using both single-compound and n-in-one approaches. These three studies used mass spectrometry for compound identification, method transfer and quantitation in the area of mixture analysis, with a different mass spectrometric scanning mode of the triple quadrupole instrument in each method. Early drug discovery with n-in-one is an area where mass spectrometric analysis, its possibilities and proper use, is especially important.

Relevance: 20.00%

Abstract:

Market microstructure is “the study of the trading mechanisms used for financial securities” (Hasbrouck, 2007). It seeks to understand the sources of value and reasons for trade in a setting with different types of traders and different private and public information sets. The actual mechanisms of trade are a continually changing object of study. These include continuous markets, auctions, limit order books and dealer markets, or combinations of these operating as a hybrid market. Microstructure also has to allow for the possibility of multiple prices. At any given time an investor may be faced with a multitude of different prices, depending on whether he or she is buying or selling, the quantity he or she wishes to trade, and the required speed for the trade. The price may also depend on the relationship that the trader has with potential counterparties. In this research, I touch upon all of the above issues. I do so by studying three specific areas, all of which have both practical and policy implications. First, I study the role of information in trading and pricing securities in markets with a heterogeneous population of traders, some of whom are informed and some not, and who trade for different private or public reasons. Second, I study the price discovery of stocks in a setting where they are simultaneously traded in more than one market. Third, I make a contribution to the ongoing discussion about market design, i.e. the question of which trading systems and ways of organizing trading are most efficient. A common characteristic throughout my thesis is the use of high-frequency datasets, i.e. tick data. These datasets include all trades and quotes in a given security, rather than just the daily closing prices used in the traditional asset pricing literature. This thesis consists of four separate essays. In the first essay I study price discovery for European companies cross-listed in the United States, as well as explanatory variables for differences in price discovery. In my second essay I contribute to earlier research on two issues of broad interest in market microstructure: market transparency and informed trading. I examine the effects of a change to an anonymous market at the OMX Helsinki Stock Exchange. I broaden my focus slightly in the third essay to include releases of macroeconomic data in the United States, and I analyze the effect of these releases on European cross-listed stocks. The fourth and last essay examines standard methodologies of price discovery analysis in a novel way: specifically, I study price discovery within one market, between local and foreign traders.

Relevance: 20.00%

Abstract:

The purpose of this thesis is to examine the role of trade durations in price discovery. The motivation for using trade durations in the study of price discovery is that durations are robust to many microstructure effects that bias the measurement of returns volatility. A further motivation is that it is difficult to think of economic variables that are genuinely useful in determining the source of volatility at arbitrarily high frequencies. The dissertation contains three essays. In the first essay, the role of trade durations in price discovery is examined with respect to the volatility pattern of stock returns. The theory of volatility is associated with the theory of the information content of trade, central to market microstructure theory. The first essay documents that volatility per transaction is related to the intensity of trade, and that there is a strong relationship between the stochastic process of trade durations and trading variables. In the second essay, the role of trade durations in price discovery is examined with respect to the quantification of risk due to a trading volume of a certain size. The theory of volume is intrinsically associated with the stock volatility pattern. The essay documents that volatility generally increases when traders choose to trade with large transactions. In the third essay, the role of trade durations in price discovery is examined with respect to the information content of a trade, which is associated with the theory of the rate of price revisions in the market. The essay documents that short durations are associated with information; thus, traders are compensated for responding quickly to information.

Relevance: 20.00%

Abstract:

A two-stage iterative algorithm for selecting a subset of a training set of samples for use in a condensed nearest neighbor (CNN) decision rule is introduced. The proposed method uses the concept of mutual nearest neighborhood to select samples close to the decision boundary. The efficacy of the algorithm is demonstrated by means of an example.
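
The mutual-nearest-neighborhood selection itself is not detailed in the abstract; for context, here is a minimal sketch of the classic condensing step of Hart's CNN rule, the decision rule such selected subsets feed into (numpy arrays `X` and `y` assumed):

```python
# Hart's condensing step for the condensed nearest neighbor (CNN) rule:
# grow a subset S of the training set, adding any sample that the current
# S misclassifies under 1-NN, until a full pass makes no additions.
# The paper's mutual-nearest-neighborhood stage is not reproduced here.
import numpy as np

def condense(X, y):
    S = [0]                                   # seed with the first sample
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            d = np.linalg.norm(X[S] - X[i], axis=1)
            if y[S][np.argmin(d)] != y[i]:    # 1-NN over S misclassifies i
                S.append(i)
                changed = True
    return np.array(S)
```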

Relevance: 20.00%

Abstract:

Enzymes offer many advantages in industrial processes, such as high specificity, mild treatment conditions and low energy requirements. The industry has therefore exploited them in many sectors, including food processing. Enzymes can modify food properties by acting on small molecules or on polymers such as carbohydrates or proteins. Crosslinking enzymes such as tyrosinases and sulfhydryl oxidases catalyse the formation of novel covalent bonds between specific residues in proteins and/or peptides, thus forming or modifying the protein network of food. In this study, novel secreted fungal proteins with sequence features typical of tyrosinases and sulfhydryl oxidases were identified through a genome mining study. Representatives of both of these enzyme families were selected for heterologous production in the filamentous fungus Trichoderma reesei and for biochemical characterisation. Firstly, a novel family of putative tyrosinases carrying a shorter sequence than the previously characterised tyrosinases was discovered. These proteins lacked the whole linker and C-terminal domain that possibly play a role in cofactor incorporation, folding or protein activity. One of these proteins, AoCO4 from Aspergillus oryzae, was produced in T. reesei at a level of about 1.5 g/l. The enzyme AoCO4 was correctly folded and bound the copper cofactors with a type-3 copper centre. However, the enzyme had only a low level of activity with the phenolic substrates tested; the highest activity was obtained with 4-tert-butylcatechol. Since tyrosine was not a substrate for AoCO4, the enzyme was classified as a catechol oxidase. Secondly, the genome analysis for secreted proteins with sequence features typical of flavin-dependent sulfhydryl oxidases pinpointed two previously uncharacterised proteins, AoSOX1 and AoSOX2, from A. oryzae. These two novel sulfhydryl oxidases were produced in T. reesei at levels of 70 and 180 mg/l, respectively, in shake flask cultivations. AoSOX1 and AoSOX2 were FAD-dependent enzymes with a dimeric tertiary structure; both showed activity on small sulfhydryl compounds such as glutathione and dithiothreitol, and both were drastically inhibited by zinc sulphate. AoSOX2 showed good stability to thermal and chemical denaturation, being superior to AoSOX1 in this respect. Thirdly, the suitability of AoSOX1 as a possible baking improver was elucidated. The effect of AoSOX1, alone and in combination with the widely used improver ascorbic acid, was tested on yeasted wheat dough, both fresh and frozen, and on fresh water-flour dough. In all cases, AoSOX1 had no effect on the fermentation properties of fresh yeasted dough. AoSOX1 negatively affected the fermentation properties of frozen doughs and accelerated the damaging effects of frozen storage, giving a softer dough with poorer gas retention than the control. In combination with ascorbic acid, AoSOX1 gave harder doughs. In accordance, rheological studies on yeast-free dough showed that AoSOX1 alone resulted in weaker and more extensible dough, whereas a dough with the opposite properties was obtained if ascorbic acid was also used. Doughs containing ascorbic acid and increasing amounts of AoSOX1 were harder in a dose-dependent manner. Sulfhydryl oxidase AoSOX1 thus had an enhancing effect on the dough-hardening mechanism of ascorbic acid. This was ascribed mainly to the production of hydrogen peroxide in the SOX reaction, which is able to convert ascorbic acid to the actual improver, dehydroascorbic acid. In addition, AoSOX1 could possibly oxidise the free glutathione in the dough and thus prevent the loss of dough strength caused by the spontaneous reduction of the disulfide bonds constituting the dough protein network. Sulfhydryl oxidase AoSOX1 is therefore able to enhance the action of ascorbic acid in wheat dough and could potentially be applied in wheat dough baking.

Relevance: 20.00%

Abstract:

Let $n$ points be placed independently in $d$-dimensional space according to the density $f(x) = A_d e^{-\lambda \|x\|^{\alpha}}$, $\lambda, \alpha > 0$, $x \in \mathbb{R}^d$, $d \geq 2$. Let $d_n$ be the longest edge length of the nearest-neighbor graph on these points. We show that $(\lambda^{-1} \log n)^{1 - 1/\alpha}\, d_n - b_n$ converges weakly to the Gumbel distribution, where $b_n \sim ((d-1)/(\lambda \alpha)) \log\log n$. We also prove the following strong law for the normalized nearest-neighbor distance $\tilde{d}_n = (\lambda^{-1} \log n)^{1 - 1/\alpha}\, d_n / \log\log n$: $(d-1)/(\alpha\lambda) \leq \liminf_{n \to \infty} \tilde{d}_n \leq \limsup_{n \to \infty} \tilde{d}_n \leq d/(\alpha\lambda)$ almost surely. Thus, the exponential rate of decay $\alpha = 1$ is critical, in the sense that, for $\alpha > 1$, $d_n \to 0$, whereas, for $\alpha \leq 1$, $d_n \to \infty$ almost surely as $n \to \infty$.
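
A small Monte Carlo sketch of the statement above: sample $n$ points from the radially symmetric density (the radius can be drawn via a Gamma variable, since $\|x\|^{\alpha}$ has a Gamma-type law under $f$), compute the longest nearest-neighbor edge, and compare the normalized statistic with the centering term $b_n$; convergence is at $\log\log n$ scale, so agreement is only rough:

```python
# Longest nearest-neighbor edge under f(x) ~ exp(-lam * ||x||^alpha).
import numpy as np

rng = np.random.default_rng(0)
n, d, lam, alpha = 2000, 2, 1.0, 2.0

# Radial sampling: ||x||^alpha ~ Gamma(d/alpha, scale=1/lam); uniform directions.
r = rng.gamma(shape=d / alpha, scale=1.0 / lam, size=n) ** (1.0 / alpha)
u = rng.normal(size=(n, d))
x = r[:, None] * u / np.linalg.norm(u, axis=1, keepdims=True)

# Longest nearest-neighbor distance d_n (O(n^2), fine for a sketch).
dist = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=2)
np.fill_diagonal(dist, np.inf)
d_n = dist.min(axis=1).max()

stat = (np.log(n) / lam) ** (1 - 1 / alpha) * d_n
b_n = (d - 1) / (lam * alpha) * np.log(np.log(n))
print(stat, b_n)  # the normalized statistic should hover around b_n
```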

Relevance: 20.00%

Abstract:

The management and coordination of business-process collaboration is changing because of globalization, specialization, and innovation. Service-oriented computing (SOC) is a means towards business-process automation, and recently many industry standards have emerged to become part of the service-oriented architecture (SOA) stack. In a globalized world, organizations face new challenges in setting up and carrying out collaborations in semi-automated ecosystems for business services. To be efficient and effective, many companies express their services electronically as what we term business-process as a service (BPaaS). Companies then source BPaaS on the fly from third parties if they are not able to create all service value in-house, for reasons such as lack of resources, lack of know-how, or cost- and time-reduction needs. Thus, a need emerges for BPaaS-HUBs that not only store service offers and requests together with information about their issuing organizations and assigned owners, but that also allow an evaluation of trust and reputation in an anonymized electronic service marketplace. In this paper, we analyze the requirements, design architecture and system behavior of such a BPaaS-HUB to enable fast setup and enactment of business-process collaboration. Moving into a cloud-computing setting, the results of this paper allow system designers to quickly evaluate which services they need for instantiating the BPaaS-HUB architecture. Furthermore, the results show the protocol of a backbone service bus that allows communication between the services implementing the BPaaS-HUB. Finally, the paper analyzes where an instantiation must assign additional computing resources to avoid performance bottlenecks.
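
To make the hub idea concrete, here is an illustrative sketch (not the paper's design; all names and the scoring rule are hypothetical simplifications) of a BPaaS-HUB core that stores offers, matches requests to anonymized handles, and tracks provider reputation:

```python
# Hypothetical BPaaS-HUB core: a registry of service offers with
# anonymized matching and a simple average-based reputation score.
from dataclasses import dataclass, field

@dataclass
class BPaaSHub:
    offers: dict = field(default_factory=dict)      # service name -> provider id
    reputation: dict = field(default_factory=dict)  # provider id -> list of ratings

    def publish(self, service, provider):
        self.offers[service] = provider

    def source(self, service):
        """Match a request to an offer; return an anonymized handle."""
        provider = self.offers.get(service)
        return None if provider is None else f"handle:{hash(provider) & 0xffff:04x}"

    def rate(self, provider, score):
        self.reputation.setdefault(provider, []).append(score)

    def trust(self, provider):
        r = self.reputation.get(provider, [])
        return sum(r) / len(r) if r else 0.5  # neutral prior for unknowns
```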