917 results for MODEL SEARCH
Abstract:
The Two State model describes how drugs activate receptors by inducing or supporting a conformational change in the receptor from "off" to "on". The beta 2 adrenergic receptor system is the model system that was used to formalize the concept of two states, and the mechanism of hormone agonist stimulation of this receptor is similar to ligand activation of other seven-transmembrane receptors. Hormone binding to beta 2 adrenergic receptors stimulates the intracellular production of cyclic adenosine monophosphate (cAMP), which is mediated through the stimulatory guanine nucleotide binding protein (Gs) interacting with the membrane-bound enzyme adenylyl cyclase (AC). The effects of cAMP include protein phosphorylation, metabolic regulation and transcriptional regulation.

The beta 2 adrenergic receptor system is the best known of its family of G protein coupled receptors. Ligands have been scrutinized extensively in the search for more effective therapeutic agents at this receptor as well as for insight into the biochemical mechanism of receptor activation. Hormone binding to the receptor is thought to induce a conformational change in the receptor that increases its affinity for inactive Gs, catalyzes the release of GDP and the subsequent binding of GTP, and activates Gs.

However, some beta 2 ligands are more efficient at this transformation than others, and the underlying mechanism for this drug specificity is not fully understood. The central problem in pharmacology is the characterization of drugs by their effects on physiological systems; consequently, the search for a rational scale of drug effectiveness has occupied many investigators, and it continues to the present as models are proposed, tested and modified.
The major results of this thesis show that for many beta 2 adrenergic ligands the Two State model is quite adequate to explain their activity, but dobutamine ((+/−)-3,4-dihydroxy-N-[3-(4-hydroxyphenyl)-1-methylpropyl]-beta-phenethylamine) fails to conform to the predictions of the Two State model. It is a weak partial agonist, yet it forms a large amount of high affinity complexes, and it forms these complexes far more readily at low concentrations than at higher concentrations. Finally, dobutamine causes the beta 2 adrenergic receptor to form high affinity complexes at a much faster rate than can be accounted for by its low efficiency in activating AC. Because the Two State model fails to predict the activity of dobutamine in three different ways, it has been disproven in its strictest form.
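The Two State mechanism summarized above lends itself to a compact numerical sketch. The fragment below is an illustration only, not the thesis's model: the equilibrium constants J and K and the state-selectivity ratio alpha are hypothetical values, chosen solely to show how a ligand that prefers the active state shifts receptors "on".

```python
# Minimal sketch of the Two State receptor model (hypothetical parameters).
# R (off) <-> R* (on) with equilibrium constant J = [R*]/[R].
# A ligand A binds R with dissociation constant K and binds R* with K/alpha,
# so alpha > 1 biases the equilibrium toward the active state (agonism).

def active_fraction(a, J=1e-3, K=1e-6, alpha=1000.0):
    """Fraction of receptors in the active states (R* + AR*)
    at free ligand concentration a (molar)."""
    r = 1.0                        # inactive, unbound (reference species)
    ar = a / K                     # inactive, bound
    r_star = J                     # active, unbound (constitutive activity)
    ar_star = J * alpha * a / K    # active, bound
    total = r + ar + r_star + ar_star
    return (r_star + ar_star) / total
```

Under this scheme a ligand with alpha > 1 behaves as an agonist, alpha = 1 as a neutral antagonist, and alpha < 1 as an inverse agonist; dobutamine's behavior above is precisely what such a one-parameter efficacy scale cannot capture.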
Abstract:
In many developing countries, clusters of small shops are the typical marketplace. We investigate an economic model in which a circular causality between buyers and sellers in a marketplace, including the search process, produces agglomeration forces, with the initial location of the marketplace given exogenously in a linear city. We conclude that the initial number of buyers and sellers is important in forming a large marketplace.
Abstract:
This article presents the model of a multi-agent system (SMAF) whose objectives are the input of fuzzy incidents, as human experts express them with different degrees of severity, and the subsequent search for and suggestion of solutions. The solutions are later confirmed or rejected by the users. This model was designed, implemented and tested in the telecommunications field, with heterogeneous agents in a cooperative model. In the design, different abstraction levels were considered, according to the agents' objectives, the ways they carry them out, and the environment in which they act. Each agent is modeled with a different spectrum of the knowledge base.
Abstract:
The competence evaluation promoted by the European Higher Education Area entails a very important methodological change that requires guiding support to help teachers carry out this new and complex task. In this regard, the Technical University of Madrid (UPM, by its Spanish acronym) has financed a series of coordinated projects with a two-fold objective: a) to develop a model for teaching and evaluating core competences that is useful and easily applicable to its different degrees, and b) to provide support to teachers by creating an area within the Website for Educational Innovation where they can search for information on the model corresponding to each core competence approved by UPM. Information available on each competence includes its definition, the formulation of indicators providing evidence on the level of acquisition, the recommended teaching and evaluation methodology, examples of evaluation rules for the different levels of competence acquisition, and descriptions of best practices. These best practices correspond to pilot tests applied to several of the academic subjects conducted at UPM in order to validate the model. This work describes the general procedure that was used and presents the model developed specifically for the problem-solving competence. Some of the pilot experiences are also summarised and their results analysed.
Abstract:
Purpose: A fully three-dimensional (3D) massively parallelizable list-mode ordered-subsets expectation-maximization (LM-OSEM) reconstruction algorithm has been developed for high-resolution PET cameras. System response probabilities are calculated online from a set of parameters derived from Monte Carlo simulations. The shape of a system response for a given line of response (LOR) has been shown to be asymmetrical around the LOR. This work has focused on the development of efficient region-search techniques to sample the system response probabilities, which are suitable for asymmetric kernel models, including elliptical Gaussian models that allow for high accuracy and high parallelization efficiency. The novel region-search scheme using variable kernel models is applied in the proposed PET reconstruction algorithm. Methods: A novel region-search technique has been used to sample the probability density function over a small dynamic subset of the field of view that constitutes the region of response (ROR). The ROR is identified around the LOR by searching for any voxel within a dynamically calculated contour. The contour condition is currently defined as a fixed threshold over the posterior probability, and arbitrary kernel models can be applied using a numerical approach. The processing of the LORs is distributed in batches among the available computing devices; individual LORs are then processed within different processing units. In this way, both multicore and multiple many-core processing units can be efficiently exploited. Tests have been conducted with probability models that take into account the noncollinearity, positron range, and crystal penetration effects, which produced tubes of response with varying elliptical sections whose axes were a function of the crystal's thickness and the angle of incidence of the given LOR.
The algorithm treats the probability model as a 3D scalar field defined within a reference system aligned with the ideal LOR. Results: This new technique provides superior image quality in terms of signal-to-noise ratio as compared with the histogram-mode method based on precomputed system matrices available for a commercial small-animal scanner. Reconstruction times can be kept low with the use of multicore, many-core architectures, including multiple graphics processing units. Conclusions: A highly parallelizable LM reconstruction method has been proposed based on Monte Carlo simulations and new parallelization techniques aimed at improving the reconstruction speed and the image signal-to-noise ratio of a given OSEM algorithm. The method has been validated using simulated and real phantoms. A special advantage of the new method is the possibility of dynamically defining the cut-off threshold over the calculated probabilities, thus allowing for direct control of the trade-off between speed and quality during the reconstruction.
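The ROR selection step described in the Methods can be sketched as a threshold over a kernel evaluated on voxel-to-LOR distances. This is an illustrative fragment only, assuming an isotropic Gaussian kernel (the paper's kernels are asymmetric and elliptical); the sigma and cutoff values are hypothetical.

```python
import numpy as np

# Illustrative sketch (not the authors' implementation): select the region
# of response (ROR) around a line of response (LOR) by thresholding a
# Gaussian kernel on the distance from each voxel center to the LOR line.

def region_of_response(p0, p1, grid, sigma=1.0, cutoff=0.05):
    """Boolean mask of voxels whose kernel value exceeds cutoff.

    p0, p1 : endpoints of the LOR, arrays of shape (3,)
    grid   : voxel center coordinates, shape (..., 3)
    """
    d = p1 - p0
    d = d / np.linalg.norm(d)                 # unit direction of the LOR
    v = grid - p0
    dist = np.linalg.norm(np.cross(v, d), axis=-1)  # point-to-line distance
    prob = np.exp(-0.5 * (dist / sigma) ** 2)
    return prob > cutoff
```

Lowering the cutoff enlarges the ROR (more voxels sampled, slower but more accurate), which mirrors the speed/quality trade-off noted in the Conclusions.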
Abstract:
Evaluating and measuring the pedagogical quality of Learning Objects is essential for achieving successful web-based education. On the one hand, teachers need some assurance of the quality of teaching resources before making them part of the curriculum. On the other hand, Learning Object Repositories need to include quality information in the ranking metrics used by their search engines in order to save users time when searching. For these reasons, several models such as LORI (Learning Object Review Instrument) have been proposed to evaluate Learning Object quality from a pedagogical perspective. However, not much effort has been put into defining and evaluating quality metrics based on those models. This paper proposes and evaluates a set of pedagogical quality metrics based on LORI. The work presented here shows that these metrics can be used effectively and reliably to provide quality-based sorting of search results. Moreover, it provides strong evidence that evaluating Learning Objects from a pedagogical perspective can notably enhance Learning Object search if suitable evaluation models and quality metrics are used. An evaluation of the LORI model is also described. Finally, all the presented metrics are compared and a discussion of their weaknesses and strengths is provided.
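Quality-based sorting of search results of the kind evaluated above can be sketched as follows. The item names, the averaging metric, and the blending weight are hypothetical illustrations, not LORI's actual instrument or the paper's fitted metrics.

```python
# Hypothetical sketch: each learning object carries LORI-style item ratings
# (1-5, possibly missing); a simple quality metric averages the rated items,
# and the final ranking blends textual relevance with normalized quality.

def quality_score(ratings):
    """Mean of the available 1-5 item ratings (None means not rated)."""
    rated = [r for r in ratings.values() if r is not None]
    return sum(rated) / len(rated) if rated else 0.0

def rank(results, w_quality=0.3):
    """Sort results by blending relevance in [0, 1] with quality_score/5."""
    def key(r):
        return ((1 - w_quality) * r["relevance"]
                + w_quality * quality_score(r["ratings"]) / 5.0)
    return sorted(results, key=key, reverse=True)
```

With a nonzero quality weight, a slightly less relevant but well-rated object can outrank a more relevant but poorly rated one, which is the behavior the paper's sorting experiments examine.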
Abstract:
This paper describes the GTH-UPM system for the Albayzin 2014 Search on Speech Evaluation. The evaluation task consists of searching for a list of terms/queries in audio files. The GTH-UPM system we present is based on an LVCSR (Large Vocabulary Continuous Speech Recognition) system. We have used the MAVIR corpus and the Spanish partition of the EPPS (European Parliament Plenary Sessions) database for training both acoustic and language models. The main effort has been focused on lexicon preparation and text selection for language model construction. The system makes use of different lexicons and language models depending on the task being performed. For the best configuration of the system on the development set, we obtained a FOM of 75.27 for the keyword spotting task.
Abstract:
Acknowledgments. Financial support: HERU and HSRU receive a core grant from the Chief Scientist's Office of the Scottish Government Health and Social Care Directorates, and the Centre for Clinical Epidemiology & Evaluation is funded by the Vancouver Coastal Health Authority. The model used for the illustrative case study in this paper was developed as part of an NHS Technology Assessment Review, funded by the National Institute for Health Research (NIHR) Health Technology Assessment Program (project number 09/146/01). The views and opinions expressed in this paper are those of the authors and do not necessarily reflect those of the Scottish Government, the NHS, Vancouver Coastal Health, the NIHR HTA Program or the Department of Health. The authors wish to thank Kathleen Boyd and members of the audience at the UK Health Economists' Study Group for comments received on an earlier version of this paper. We also wish to thank Cynthia Fraser (University of Aberdeen) for literature searches undertaken to inform the manuscript, and Mohsen Sadatsafavi (University of British Columbia) for comments on an earlier draft.
Abstract:
To initiate homologous recombination, sequence similarity between two DNA molecules must be searched for and homology recognized. How the search for and recognition of homology occurs remains unproven. We have examined the influences of DNA topology and the polarity of RecA–single-stranded (ss)DNA filaments on the formation of synaptic complexes promoted by RecA. Using two complementary methods and various ssDNA and duplex DNA molecules as substrates, we demonstrate that topological constraints on a small circular RecA–ssDNA filament prevent it from interwinding with its duplex DNA target at the homologous region. We were unable to detect homologous pairing between a circular RecA–ssDNA filament and its relaxed or supercoiled circular duplex DNA targets. However, the formation of synaptic complexes between an invading linear RecA–ssDNA filament and covalently closed circular duplex DNAs is promoted by supercoiling of the duplex DNA. The results imply that a triplex structure formed by non-Watson–Crick hydrogen bonding is unlikely to be an intermediate in homology searching promoted by RecA. Rather, a model in which RecA-mediated homology searching requires unwinding of the duplex DNA coupled with local strand exchange is the likely mechanism. Furthermore, we show that polarity of the invading RecA–ssDNA does not affect its ability to pair and interwind with its circular target duplex DNA.
Abstract:
We introduce a computational method to optimize the in vitro evolution of proteins. Simulating evolution with a simple model that statistically describes the fitness landscape, we find that beneficial mutations tend to occur at amino acid positions that are tolerant to substitutions, in the limit of small libraries and low mutation rates. We transform this observation into a design strategy by applying mean-field theory to a structure-based computational model to calculate each residue's structural tolerance. Thermostabilizing and activity-increasing mutations accumulated during the experimental directed evolution of subtilisin E and T4 lysozyme are strongly directed to sites identified by using this computational approach. This method can be used to predict positions where mutations are likely to lead to improvement of specific protein properties.
Abstract:
In optimal foraging theory, search time is a key variable defining the value of a prey type. But the sensory-perceptual processes that constrain the search for food have rarely been considered. Here we evaluate the flight behavior of bumblebees (Bombus terrestris) searching for artificial flowers of various sizes and colors. When flowers were large, search times correlated well with the color contrast of the targets with their green foliage-type background, as predicted by a model of color opponent coding using inputs from the bees' UV, blue, and green receptors. Targets that made poor color contrast with their backdrop, such as white, UV-reflecting ones, or red flowers, took longest to detect, even though brightness contrast with the background was pronounced. When searching for small targets, bees changed their strategy in several ways. They flew significantly slower and closer to the ground, so increasing the minimum detectable area subtended by an object on the ground. In addition, they used a different neuronal channel for flower detection. Instead of color contrast, they used only the green receptor signal for detection. We relate these findings to temporal and spatial limitations of different neuronal channels involved in stimulus detection and recognition. Thus, foraging speed may not be limited only by factors such as prey density, flight energetics, and scramble competition. Our results show that understanding the behavioral ecology of foraging can substantially gain from knowledge about mechanisms of visual information processing.
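The two detection channels described above (chromatic contrast for large targets, green-receptor contrast for small ones) can be sketched as follows. The opponent-coding coefficients here are hypothetical placeholders, not the fitted bee model.

```python
import math

# Illustrative sketch of the two detection channels: large targets are
# detected by chromatic contrast of the (UV, blue, green) receptor signals
# against the background; small targets by the green (achromatic) signal
# alone. Opponent axes and weights below are invented for illustration.

def chromatic_contrast(target, background):
    """Distance in a simple two-axis color-opponent space.
    target/background: (uv, blue, green) receptor excitations in [0, 1]."""
    def opponent(sig):
        uv, b, g = sig
        return (b - 0.5 * (uv + g),   # blue vs. UV+green axis
                uv - g)               # UV vs. green axis
    (x1, y1), (x2, y2) = opponent(target), opponent(background)
    return math.hypot(x1 - x2, y1 - y2)

def green_contrast(target, background):
    """Achromatic green-receptor signal used for small targets."""
    return abs(target[2] - background[2])
```

A white, UV-reflecting flower can excite all three receptors much as foliage does along the opponent axes, giving near-zero chromatic contrast despite high brightness, consistent with the long search times reported above.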
Abstract:
The folding mechanism of a 125-bead heteropolymer model for proteins is investigated with Monte Carlo simulations on a cubic lattice. Sequences that do and do not fold in a reasonable time are compared. The overall folding behavior is found to be more complex than that of models for smaller proteins. Folding begins with a rapid collapse followed by a slow search through the semi-compact globule for a sequence-dependent stable core with about 30 out of 176 native contacts which serves as the transition state for folding to a near-native structure. Efficient search for the core is dependent on structural features of the native state. Sequences that fold have large amounts of stable, cooperative structure that is accessible through short-range initiation sites, such as those in anti-parallel sheets connected by turns. Before folding is completed, the system can encounter a second bottleneck, involving the condensation and rearrangement of surface residues. Overly stable local structure of the surface residues slows this stage of the folding process. The relation of the results from the 125-mer model studies to the folding of real proteins is discussed.
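The cubic-lattice bookkeeping such simulations rely on can be sketched briefly. This is an illustrative fragment (native-style contact counting and a Metropolis acceptance test), not the 125-mer model's actual move set or energy function.

```python
import math
import random

# Illustrative cubic-lattice helpers: a "contact" is a pair of beads that
# are not chain neighbors but sit on adjacent lattice sites; moves are
# accepted by the standard Metropolis criterion.

def contacts(conf):
    """Set of nonbonded bead pairs (i, j) on adjacent lattice sites.
    conf: list of integer (x, y, z) lattice coordinates along the chain."""
    pairs = set()
    for i, a in enumerate(conf):
        for j in range(i + 2, len(conf)):      # skip covalent neighbors
            b = conf[j]
            if sum(abs(x - y) for x, y in zip(a, b)) == 1:
                pairs.add((i, j))
    return pairs

def metropolis_accept(dE, T=1.0):
    """Accept a proposed move with probability min(1, exp(-dE/T))."""
    return dE <= 0 or random.random() < math.exp(-dE / T)
```

Progress toward the native state in such simulations is typically tracked as the number of native contacts recovered, e.g. `len(contacts(conf) & native_contacts)`, which is how a "stable core with about 30 out of 176 native contacts" is counted.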
Abstract:
Speech recognition involves three processes: extraction of acoustic indices from the speech signal, estimation of the probability that the observed index string was caused by a hypothesized utterance segment, and determination of the recognized utterance via a search among hypothesized alternatives. This paper is not concerned with the first process. Estimation of the probability of an index string involves a model of index production by any given utterance segment (e.g., a word). Hidden Markov models (HMMs) are used for this purpose [Makhoul, J. & Schwartz, R. (1995) Proc. Natl. Acad. Sci. USA 92, 9956-9963]. Their parameters are state transition probabilities and output probability distributions associated with the transitions. The Baum algorithm that obtains the values of these parameters from speech data via their successive reestimation will be described in this paper. The recognizer wishes to find the most probable utterance that could have caused the observed acoustic index string. That probability is the product of two factors: the probability that the utterance will produce the string and the probability that the speaker will wish to produce the utterance (the language model probability). Even if the vocabulary size is moderate, it is impossible to search for the utterance exhaustively. One practical algorithm is described [Viterbi, A. J. (1967) IEEE Trans. Inf. Theory IT-13, 260-267] that, given the index string, has a high likelihood of finding the most probable utterance.
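The Viterbi search mentioned above admits a short sketch: given the HMM's transition and emission probabilities, it finds the single most probable state sequence for an observed index string. The probabilities in the test below are illustrative; in a real recognizer they come from the trained acoustic and language models.

```python
# Sketch of the Viterbi dynamic-programming search over an HMM.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return (probability, best state path) for the observation sequence."""
    # Initialize with the start distribution times the first emission.
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        layer = {}
        for s in states:
            # Best predecessor for state s at this time step.
            p, path = max(
                (V[-1][r][0] * trans_p[r][s], V[-1][r][1]) for r in states
            )
            layer[s] = (p * emit_p[s][o], path + [s])
        V.append(layer)
    return max(V[-1].values())
```

Because the best path to each state at each step is kept and the rest discarded, the search is linear in the utterance length rather than exponential, which is what makes the exhaustive search unnecessary.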
Abstract:
Data from the HEGRA air shower array are used to set an upper limit of 2.5 (7.1) × 10^-13 cm^-2 s^-1 on the emission of gamma-radiation above 25 (18) TeV from the direction of the radio-bright region DR4 within the SNR G78.2+2.1. The shock front of SNR G78.2+2.1 probably recently overtook the molecular cloud Gong 8, which then acts as a target for the cosmic rays produced within the SNR, thus leading to the expectation of enhanced gamma-radiation. Using a model of Drury, Aharonian and Völk, which assumes that SNRs are the sources of galactic cosmic rays via first-order Fermi acceleration, we calculated a theoretical prediction for the gamma-ray flux from the DR4 region and compared it with our experimental flux limit. Our 'best estimate' for the predicted flux lies a factor of about 18 above the upper limit for gamma-ray energies above 25 TeV. Possible reasons for this discrepancy are discussed.