979 results for MOST PROBABLE NUMBER
Abstract:
In this second part of our study on the mechanism of perceived brightness, we explore the effects of manipulating three-dimensional geometry. The additional scenes portrayed here demonstrate that the same luminance profile can elicit different sensations of brightness as a function of how the objects in the scene are arranged in space. This further evidence confirms the implication of the scenes presented in the accompanying paper, namely that sensations of relative brightness—including standard demonstrations of simultaneous brightness contrast—cannot arise by computations of local contrast. The most plausible explanation of the full range of perceptual phenomena we have described is an empirical strategy that links the luminance profile in a visual stimulus with an association (the percept) that represents the profile’s most probable real-world source.
Abstract:
A “most probable state” equilibrium statistical theory for random distributions of hetons in a closed basin is developed here in the context of two-layer quasigeostrophic models for the spreading phase of open-ocean convection. The theory depends only on bulk conserved quantities such as energy, circulation, and the range of values of potential vorticity in each layer. The simplest theory is formulated for a uniform cooling event over the entire basin that triggers a homogeneous random distribution of convective towers. For a small Rossby deformation radius typical for open-ocean convection sites, the most probable states that arise from this theory strongly resemble the saturated baroclinic states of the spreading phase of convection, with a stabilizing barotropic rim current and localized temperature anomaly.
Abstract:
A de novo sequencing program for proteins is described that uses tandem MS data from electron capture dissociation and collisionally activated dissociation of electrosprayed protein ions. Computer automation is used to convert the fragment ion mass values derived from these spectra into the most probable protein sequence, without distinguishing Leu/Ile. Minimum human input is necessary for the data reduction and interpretation. No extra chemistry is necessary to distinguish N- and C-terminal fragments in the mass spectra, as this is determined from the electron capture dissociation data. With parts-per-million mass accuracy (now available by using higher field Fourier transform MS instruments), the complete sequences of ubiquitin (8.6 kDa) and melittin (2.8 kDa) were predicted correctly by the program. The data available also provided 91% of the cytochrome c (12.4 kDa) sequence (essentially complete except for the tandem MS-resistant region K13–V20 that contains the cyclic heme). Uncorrected mass values from a 6-T instrument still gave 86% of the sequence for ubiquitin, except for distinguishing Gln/Lys. Extensive sequencing of larger proteins should be possible by applying the algorithm to pieces of ≈10-kDa size, such as products of limited proteolysis.
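As a hedged illustration of the kind of step such a program performs (not the published algorithm itself), the sketch below calls residues from a single N-terminal fragment-ion mass ladder by matching consecutive mass differences to monoisotopic residue masses within a ppm tolerance; Leu/Ile share a mass and are reported jointly, as in the abstract. The function name and the tolerance are illustrative assumptions.

```python
# Illustrative sketch only: match consecutive fragment-ion mass
# differences to monoisotopic residue masses within a ppm tolerance.
RESIDUE_MASS = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
    "V": 99.06841, "T": 101.04768, "L/I": 113.08406, "N": 114.04293,
    "D": 115.02694, "Q": 128.05858, "K": 128.09496, "E": 129.04259,
    "M": 131.04049, "F": 147.06841, "R": 156.10111, "W": 186.07931,
}

def call_residues(ladder, tol_ppm=10.0):
    """Infer residues from an ascending fragment-mass ladder."""
    seq = []
    for lo, hi in zip(ladder, ladder[1:]):
        delta = hi - lo
        # pick the residue mass closest to the observed difference
        best, err = None, float("inf")
        for aa, mass in RESIDUE_MASS.items():
            if abs(delta - mass) < err:
                best, err = aa, abs(delta - mass)
        # reject calls outside the mass-accuracy window
        seq.append(best if err / hi * 1e6 <= tol_ppm else "?")
    return seq
```

With exact mass differences this yields unambiguous calls except for the Leu/Ile (and, at lower mass accuracy, Gln/Lys) degeneracies discussed in the abstract.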
Abstract:
The availability of complete genome sequences and mRNA expression data for all genes creates new opportunities and challenges for identifying DNA sequence motifs that control gene expression. An algorithm, “MobyDick,” is presented that decomposes a set of DNA sequences into the most probable dictionary of motifs or words. This method is applicable to any set of DNA sequences: for example, all upstream regions in a genome or all genes expressed under certain conditions. Identification of words is based on a probabilistic segmentation model in which the significance of longer words is deduced from the frequency of shorter ones of various lengths, eliminating the need for a separate set of reference data to define probabilities. We have built a dictionary with 1,200 words for the 6,000 upstream regulatory regions in the yeast genome; the 500 most significant words (some with as few as 10 copies in all of the upstream regions) match 114 of 443 experimentally determined sites (a significance level of 18 standard deviations). When analyzing all of the genes up-regulated during sporulation as a group, we find many motifs in addition to the few previously identified by analyzing the expression subclusters individually. Applying MobyDick to the genes derepressed when the general repressor Tup1 is deleted, we find known as well as putative binding sites for its regulatory partners.
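The core idea of judging a candidate word by how strongly it is over-represented relative to what shorter constituents predict can be caricatured in a few lines. This toy score uses only single-letter frequencies as the background model, whereas MobyDick scores words against all partitions into shorter dictionary words; the function name and the z-score form are illustrative assumptions.

```python
from collections import Counter
from math import sqrt

def overrepresentation(seqs, word):
    """Toy z-score for a candidate word against a background model of
    single-letter frequencies (MobyDick's model is richer: it considers
    all partitions of the word into shorter dictionary words)."""
    text = "".join(seqs)
    n = len(text)
    base = Counter(text)
    p = 1.0
    for c in word:
        p *= base[c] / n  # background probability of the word
    windows = n - len(word) + 1
    observed = sum(text[i:i + len(word)] == word for i in range(windows))
    expected = p * windows
    sd = sqrt(expected * (1.0 - p)) or 1.0
    return (observed - expected) / sd
```

A word occurring far more often than its letter composition predicts (a motif candidate) receives a large positive score; words explained by the background stay near zero.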
Abstract:
Speech recognition involves three processes: extraction of acoustic indices from the speech signal, estimation of the probability that the observed index string was caused by a hypothesized utterance segment, and determination of the recognized utterance via a search among hypothesized alternatives. This paper is not concerned with the first process. Estimation of the probability of an index string involves a model of index production by any given utterance segment (e.g., a word). Hidden Markov models (HMMs) are used for this purpose [Makhoul, J. & Schwartz, R. (1995) Proc. Natl. Acad. Sci. USA 92, 9956-9963]. Their parameters are state transition probabilities and output probability distributions associated with the transitions. The Baum algorithm that obtains the values of these parameters from speech data via their successive reestimation will be described in this paper. The recognizer wishes to find the most probable utterance that could have caused the observed acoustic index string. That probability is the product of two factors: the probability that the utterance will produce the string and the probability that the speaker will wish to produce the utterance (the language model probability). Even if the vocabulary size is moderate, it is impossible to search for the utterance exhaustively. One practical algorithm is described [Viterbi, A. J. (1967) IEEE Trans. Inf. Theory IT-13, 260-267] that, given the index string, has a high likelihood of finding the most probable utterance.
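The search step cited here is the classical Viterbi dynamic programme. A minimal sketch (toy dictionaries for the transition and output probabilities, not a speech system) that returns the most probable state path for an observation sequence:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most probable hidden-state path for an observation sequence.
    V[t][s] = (best probability of any path ending in s at time t,
               predecessor state on that path)."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        V.append({s: max(
            (V[-1][r][0] * trans_p[r][s] * emit_p[s][o], r)
            for r in states) for s in states})
    # backtrack from the best final state
    path = [max(states, key=lambda s: V[-1][s][0])]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return path[::-1]
```

The recursion considers every predecessor exactly once per time step, which is why the search avoids enumerating all utterance hypotheses exhaustively.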
Abstract:
Background: Acetylation and deacetylation at specific lysine (K) residues are mediated by histone acetylases (HATs) and deacetylases (HDACs), respectively. HATs and HDACs act on both histone and non-histone proteins, regulating various processes, including cardiac impulse propagation. The aim of the present work was to establish whether the function of the Ca2+ ATPase SERCA2, one of the major players in Ca2+ reuptake during excitation-contraction coupling in cardiac myocytes (CMs), could be modulated by direct K acetylation. Materials and methods: HL-1 atrial mouse cells (donated by Prof. Claycomb), zebrafish and Streptozotocin-induced diabetic rat CMs were treated with suberanilohydroxamic acid (SAHA), a pan-inhibitor of class I and II HDACs, for 1.5 hours. SERCA2 acetylation was evaluated by co-immunoprecipitation. SERCA2 activity was measured on microsomes by a pyruvate/NADH coupled reaction assay. SERCA2 mutants were obtained by cloning wild-type and mutated sequences into the pCDNA3 vector and transfecting them into HEK cells. Ca2+ transients in CMs (loaded with Fluo3-AM, field stimulation, 0.5 Hz) and in transfected HEK cells (loaded with FLUO-4, caffeine pulse) were recorded. Results: Co-immunoprecipitation experiments performed on HL-1 cells demonstrated a significant increase in the acetylation of SERCA2 after SAHA treatment (2.5 µM, n=3). This was associated with an increase in SERCA2 activity in microsomes obtained from HL-1 cells after SAHA exposure (n=5). Accordingly, SAHA treatment significantly shortened the Ca2+ reuptake time of adult zebrafish CMs. Further, SAHA (2.5 nM) restored the recovery time of Ca2+ transient decay in diabetic rat CMs to control values. HDAC inhibition also improved contraction parameters, such as fraction of shortening, and increased pump activity in microsomes isolated from diabetic CMs (n=4).
Notably, K464, identified by bioinformatic tools as the most probable acetylation site on human SERCA2a, was mutated into glutamine (Q) or arginine (R), mimicking acetylation and deacetylation, respectively. Measurements of Ca2+ transients in HEK cells revealed that the substitution of K464 with R significantly delayed the transient recovery time, indicating that deacetylation has a negative impact on SERCA2 function. Conclusions: Our results indicate that SERCA2 function can be improved by pro-acetylation interventions and that this mechanism of regulation is conserved among species. The present work therefore provides a basis for the search for novel pharmacological tools able to specifically improve SERCA2 activity in diseases where its expression and/or function is impaired, such as diabetic cardiomyopathy.
Abstract:
Context. Galaxies, which often contain ionised gas, sometimes also exhibit a so-called low-ionisation nuclear emission-line region (LINER). For 30 years, this was attributed to a central mass-accreting supermassive black hole (an active galactic nucleus, AGN) of low luminosity, making LINER galaxies the largest AGN sub-population, dominating in numbers over the higher-luminosity Seyfert galaxies and quasars. This, however, poses a serious problem: while the inferred energy balance is plausible, many LINERs clearly do not contain any other independent signatures of an AGN. Aims. Using integral field spectroscopic data from the CALIFA survey, we compare the observed radial surface brightness profiles with what is expected from illumination by an AGN. Methods. Essential for this analysis is a proper extraction of emission lines, especially weak lines, such as the Balmer H beta line, which are superposed on an absorption trough. To accomplish this, we use the GANDALF code, which simultaneously fits the underlying stellar continuum and the emission lines. Results. For 48 galaxies with LINER-like emission, we show that the radial emission-line surface brightness profiles are inconsistent with ionisation by a central point source and hence cannot be due to an AGN alone. Conclusions. The most probable explanation for the excess LINER-like emission is ionisation by evolved stars during the short but very hot and energetic phase known as post-AGB. This leads us to an entirely new interpretation. Post-AGB stars are ubiquitous, and their ionising effect should be potentially observable in every galaxy with gas present and with stars older than ~1 Gyr, unless a stronger radiation field from young hot stars or an AGN outshines them. This means that galaxies with LINER-like emission are a class defined not by the presence of a property but by its absence.
It also explains why LINER emission is observed mostly in massive galaxies with old stars and little star formation.
Abstract:
Aims. We investigated in detail the system WDS 19312+3607, whose primary is an active M4.5Ve star previously inferred to be young (τ ~ 300–500 Ma) based on its high X-ray luminosity. Methods. We collected intermediate- and low-resolution optical spectra taken with 2 m-class telescopes, photometric data from the B to 8 μm bands, and data for eleven astrometric epochs with a time baseline of over 56 years for the two components in the system, G 125–15 and G 125–14. Results. We derived the M4.5V spectral types of both stars, confirmed their common proper motion, estimated their heliocentric distance and projected physical separation, determined their Galactocentric space velocities, and deduced a most-probable age of older than 600 Ma. We discovered that the primary, G 125–15, is an inflated, double-lined, spectroscopic binary with a short period of photometric variability of 1.6 d, which we associated with orbital synchronisation. The observed X-ray and Hα emissions, photometric variability, and abnormal radius and effective temperature of G 125–15 AB are indicative of strong magnetic activity, possibly because of the rapid rotation. In addition, the estimated projected physical separation between G 125–15 AB and G 125–14 of about 1200 AU ensures that WDS 19312+3607 is one of the widest systems with intermediate M-type primaries. Conclusions. G 125–15 AB is a nearby (d ≈ 26 pc), bright (J ≈ 9.6 mag), active spectroscopic binary with a single proper-motion companion of the same spectral type at a wide separation. They are thus ideal targets for specific follow-ups to investigate wide and close multiplicity or stellar expansion and surface cooling because of the lower convective efficiency.
Abstract:
Environmental systems have a limited recycling capacity, and the growing use of agro-industrial residues, especially in agriculture, can lead to pollution of the soil and other environmental compartments. The productivity of agricultural and natural ecosystems depends on the transformation of organic matter and, therefore, on the soil microbial biomass, which is responsible for the decomposition and mineralisation of residues in the soil. Because soil microorganisms constantly change and adapt, they are a sensitive indicator of the changes produced by different agricultural management practices. Understanding these alterations and their effects is therefore essential for identifying suitable management strategies and appropriate application techniques. The objective of this work was to evaluate the quality of an agricultural soil cultivated with three sugarcane (Saccharum spp.) varieties, comparing mineral fertilisation with the use of composted organic fertiliser at the end of tiller formation (120 days after planting). Sugarcane (plant cane) was grown under field conditions using the varieties RB 867515, RB 962869 and RB 855453, each cultivated in three ways: control (CT), with no fertiliser inputs; organic (OG), with application of organic fertiliser; and conventional (CV), with mineral fertilisation following the recommendations derived from an initial chemical analysis of the local soil. Each plot measured 37 m2, with five 5.0 m furrows spaced 1.5 m apart, the three central furrows forming the usable area. According to variety and fertilisation type, nine treatments were formed: T1 86CT, T2 96CT, T3 85CT, T4 86OG, T5 96OG, T6 85OG, T7 86CV, T8 96CV and T9 85CV, in a randomised block design with four replicates.
The soil chemical parameters analysed were macro- and micronutrients; the microbiological parameters were microbial biomass carbon (MBC), basal soil respiration (BSR), metabolic quotient (qCO2) and the most probable number (MPN) of soil fungi and bacteria; finally, agricultural yield (t/ha) was recorded. The results were subjected to analysis of variance (ANOVA) and comparison of means by Tukey's test (10%). Pearson cophenetic correlation analysis was also performed to build dendrograms. For the period studied, regarded as the critical phase of cane-field establishment, the chemical parameters showing changes in the soil were pH and the macronutrients Mg, Al and SB, with the organic treatments equivalent to or better than the conventional ones. Among the microbiological parameters, the MPN of fungi showed the highest values in the conventional and control treatments. Agricultural yield was not affected by the different treatments and inputs, regardless of the sugarcane variety. Finally, positive correlations were observed between CEC and the metabolic quotient (qCO2), indicating a potential improvement in soil quality with the use of organic inputs.
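The most probable number (NMP/MPN) counts used among the microbiological parameters above are maximum-likelihood estimates from a serial-dilution tube assay. A minimal sketch (illustrative, not the software used in the study) solves the standard MPN likelihood equation by bisection:

```python
import math

def mpn(volumes, tubes, positives, lo=1e-9, hi=1e4, iters=200):
    """Maximum-likelihood most probable number (organisms per unit
    volume).  For dilution i with tubes[i] tubes of volume volumes[i]
    and positives[i] positive tubes, solve
        sum_i p_i * v_i / (1 - exp(-x * v_i)) = sum_i n_i * v_i
    for the density x by bisection (a finite, non-zero estimate needs
    at least one positive and one negative tube)."""
    total = sum(n * v for n, v in zip(tubes, volumes))

    def f(x):
        return sum(p * v / (1.0 - math.exp(-x * v))
                   for p, v in zip(positives, volumes)) - total

    for _ in range(iters):  # f is decreasing in x
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For the classic five-tube series with 10, 1 and 0.1 ml portions and 5, 2 and 0 positive tubes, this returns about 0.49 organisms/ml, matching the familiar tabulated value of 49 MPN/100 ml.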
Abstract:
In the Bayesian framework, predictions for a regression problem are expressed in terms of a distribution of output values. The mode of this distribution corresponds to the most probable output, while the uncertainty associated with the predictions can conveniently be expressed in terms of error bars. In this paper we consider the evaluation of error bars in the context of the class of generalized linear regression models. We provide insights into the dependence of the error bars on the location of the data points and we derive an upper bound on the true error bars in terms of the contributions from individual data points which are themselves easily evaluated.
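For the standard generalized linear (basis-function) regression setting, such error bars have a closed form: with prior precision alpha and noise precision beta, the predictive variance at x is 1/beta + phi(x)^T A^(-1) phi(x), where A = alpha*I + beta*Phi^T Phi. A brief sketch under these standard assumptions (the symbols alpha and beta are conventions of the sketch, not taken from the paper):

```python
import numpy as np

def error_bars(Phi, alpha, beta, phi_new):
    """Predictive standard deviation for bayesian linear regression:
    Phi is the design matrix (rows = basis vectors of training points),
    alpha the prior precision, beta the noise precision.
        A = alpha * I + beta * Phi^T Phi
        var(x) = 1/beta + phi(x)^T A^{-1} phi(x)
    """
    A = alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi
    var = 1.0 / beta + phi_new @ np.linalg.solve(A, phi_new)
    return float(np.sqrt(var))
```

The quadratic term accumulates contributions from the individual data points through A, which is why the bars widen with distance from the training data, in line with the dependence on data-point location discussed above.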
Abstract:
We explore the dependence of performance measures, such as the generalization error and generalization consistency, on the structure and the parameterization of the prior on `rules', instanced here by the noisy linear perceptron. Using a statistical mechanics framework, we show how one may assign values to the parameters of a model for a `rule' on the basis of data instancing the rule. Information about the data, such as the input distribution, noise distribution and other `rule' characteristics, may be embedded in the form of general gaussian priors for improving net performance. We examine explicitly two types of general gaussian priors which are useful in some simple cases. We calculate the optimal values for the parameters of these priors and show their effect in modifying the most probable (MAP) values for the rules.
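In the simplest case, a zero-mean isotropic gaussian prior on the weights of a linear rule, the MAP values mentioned here reduce to ridge regression, with the prior precision controlling how strongly the estimate is shrunk toward zero. A minimal sketch of that special case (not the paper's general gaussian priors):

```python
import numpy as np

def map_weights(X, y, prior_precision, noise_precision=1.0):
    """MAP weights under a zero-mean isotropic gaussian prior:
    minimises noise_precision * ||y - X w||^2 + prior_precision * ||w||^2,
    i.e. ridge regression with lambda = prior_precision / noise_precision."""
    lam = prior_precision / noise_precision
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```

A weak prior recovers the least-squares rule; a strong prior pulls the MAP weights toward the prior mean, which is the kind of modification the abstract describes.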
Abstract:
This work introduces a new variational Bayes data assimilation method for the stochastic estimation of precipitation dynamics using radar observations for short-term probabilistic forecasting (nowcasting). A previously developed spatial rainfall model, based on the decomposition of the observed precipitation field using a basis function expansion, captures the precipitation intensity from radar images as a set of ‘rain cells’. The prior distributions for the basis function parameters are carefully chosen to have a conjugate structure for the precipitation field model, allowing a novel variational Bayes method to estimate the posterior distributions in closed form by solving an optimisation problem, in a spirit similar to 3D VAR analysis but seeking approximations to the posterior distribution rather than simply the most probable state. A hierarchical Kalman filter is used to estimate the advection field based on the assimilated precipitation fields at two times. The model is applied to tracking precipitation dynamics in a realistic setting, using UK Met Office radar data from both a summer convective event and a winter frontal event. The performance of the model is assessed both traditionally and using probabilistic measures of fit based on ROC curves. The model is shown to provide very good assimilation characteristics and promising forecast skill. Improvements to the forecasting scheme are discussed.
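The benefit of the conjugate prior structure can be seen in one dimension: a gaussian prior combined with gaussian observations gives a gaussian posterior in closed form, by precision weighting. A toy sketch of that update (an illustration of conjugacy, not the paper's rainfall model):

```python
def gaussian_posterior(prior_mean, prior_var, obs, obs_var):
    """Conjugate update: gaussian prior x gaussian likelihood gives a
    gaussian posterior; precisions add and the mean is precision-weighted."""
    post_precision = 1.0 / prior_var + len(obs) / obs_var
    post_mean = (prior_mean / prior_var + sum(obs) / obs_var) / post_precision
    return post_mean, 1.0 / post_precision
```

Because the posterior stays in the same family, no sampling or numerical integration is needed, which is what makes closed-form variational updates possible.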
Abstract:
Plasmid constitutions of Aeromonas salmonicida isolates were characterised by flat-bed and pulsed field gel electrophoresis. Resolution of plasmids by pulsed field gel electrophoresis was greater and more consistent than that achieved by flat-bed gel electrophoresis. The number of plasmids separated by pulsed field gel electrophoresis varied between A. salmonicida isolates, with five being the most common number present in the isolates used in this study. Plasmid profiles were diverse and the reproducibility of the distances migrated facilitated the use of principal components analysis for the characterisation of the isolates. Isolates were grouped according to the number of plasmids supported. Further principal components analysis of groups of isolates supporting five and seven plasmids showed a spatial separation of plasmids based upon distance migrated. Principal components analysis of plasmid profiles and antimicrobial minimum inhibitory concentrations could not be correlated suggesting that resistance to antimicrobial agents is not associated with either one plasmid or a particular plasmid constitution.
Abstract:
The thesis is divided into four chapters: introduction, experimental, results and discussion of the free ligands, and results and discussion of the complexes. The First Chapter is a general introduction to the study of solid-state reactions. The Second Chapter is devoted to the materials and experimental methods used to carry out the experiments. The Third Chapter is concerned with the characterisation of the free ligands (picolinic acid, nicotinic acid, and isonicotinic acid) using elemental analysis, IR spectra, X-ray diffraction, and mass spectra. Additionally, the thermal behaviour of the free ligands in air has been studied by means of thermogravimetry (TG), derivative thermogravimetry (DTG), and differential scanning calorimetry (DSC) measurements. The thermal decomposition behaviour of the three free ligands was not identical. Finally, a computer program has been used for kinetic evaluation of non-isothermal differential scanning calorimetry data according to composite and single heating rate methods, in comparison with the methods of Ozawa and Kissinger. The most probable reaction mechanism for the free ligands was the Avrami-Erofeev equation (A), which describes a solid-state nucleation-growth mechanism. The activation parameters of the decomposition reaction of the free ligands were calculated, and the results of the different methods of data analysis were compared and discussed. The Fourth Chapter deals with the preparation of cobalt, nickel, and copper complexes with mono-pyridine carboxylic acids in aqueous solution. The prepared complexes have been characterised by elemental analyses, IR spectra, X-ray diffraction, magnetic moments, and electronic spectra. The stoichiometry of these compounds was ML2·xH2O (where M = metal ion, L = organic ligand, and x = number of water molecules).
The environments of the cobalt, nickel, and copper nicotinates and of the cobalt and nickel picolinates were octahedral, whereas the environment of copper picolinate [Cu(PA)2] was tetragonal. The cobalt, nickel, and copper isonicotinates, however, formed polymeric octahedral structures. The morphological changes that occurred throughout the decomposition were followed by SEM observation. The thermal behaviour of the prepared complexes in air was studied by TG, DTG, and DSC measurements. During the degradation of the hydrated complexes, the water molecules of crystallisation were lost in one or two steps, followed by loss of the organic ligands, leaving the metal oxides. Comparison between the DTG temperatures of the first and second dehydration steps suggested that the water of crystallisation was more strongly bonded to the anion in the Ni(II) complexes than in the complexes of Co(II) and Cu(II). The intermediate products of decomposition were not identified. The most probable reaction mechanism for the prepared complexes was also the Avrami-Erofeev equation (A), characteristic of a solid-state nucleation-growth mechanism. The temperature dependence of the direct-current conductivity was determined for the cobalt, nickel, and copper isonicotinates, and the corresponding activation energies (ΔΕ) were calculated. The temperature and frequency dependence of the conductivity, the frequency dependence of the dielectric constant, and the dielectric loss for nickel isonicotinate were determined using alternating current. The value of the s parameter and the density of states [N(Ef)] were calculated. Keywords: thermal decomposition, kinetics, electrical conduction, pyridine monocarboxylic acid, complex, transition metal complex.
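Two of the quantities named above have compact forms: the Avrami-Erofeev integral model g(alpha) = (-ln(1 - alpha))^(1/n), and the Kissinger estimate of activation energy from the shift of the DSC peak temperature with heating rate, ln(b/Tp^2) = -E/(R*Tp) + const. A minimal sketch of both (synthetic inputs; not the thesis software):

```python
import math

def avrami_erofeev(alpha, n=2):
    """Integral Avrami-Erofeev model g(alpha) = (-ln(1 - alpha))**(1/n),
    describing solid-state nucleation and growth."""
    return (-math.log(1.0 - alpha)) ** (1.0 / n)

def kissinger_activation_energy(heating_rates, peak_temps):
    """Least-squares Kissinger estimate of E (J/mol):
    ln(b / Tp**2) is linear in 1/Tp with slope -E/R."""
    R = 8.314
    xs = [1.0 / T for T in peak_temps]
    ys = [math.log(b / T ** 2) for b, T in zip(heating_rates, peak_temps)]
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope * R
```

Given peak temperatures measured at several heating rates, the fitted slope recovers the activation energy directly; the composite and single heating rate methods mentioned above generalise the same idea across conversion levels.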
Abstract:
Background - Vaccine development in the post-genomic era often begins with the in silico screening of genome information, with the most probable protective antigens being predicted rather than requiring causative microorganisms to be grown. Despite the obvious advantages of this approach – such as speed and cost efficiency – its success remains dependent on the accuracy of antigen prediction. Most approaches use sequence alignment to identify antigens. This is problematic for several reasons. Some proteins lack obvious sequence similarity, although they may share similar structures and biological properties. The antigenicity of a sequence may be encoded in a subtle and recondite manner not amenable to direct identification by sequence alignment. The discovery of truly novel antigens will be frustrated by their lack of similarity to antigens of known provenance. To overcome the limitations of alignment-dependent methods, we propose a new alignment-free approach for antigen prediction, which is based on auto cross covariance (ACC) transformation of protein sequences into uniform vectors of principal amino acid properties. Results - Bacterial, viral and tumour protein datasets were used to derive models for prediction of whole protein antigenicity. Each set consisted of 100 known antigens and 100 non-antigens. The derived models were tested by internal leave-one-out cross-validation and external validation using test sets. An additional five training sets for each class of antigens were used to test the stability of the discrimination between antigens and non-antigens. The models performed well in both validations, showing prediction accuracy of 70% to 89%. The models were implemented in a server, which we call VaxiJen. Conclusion - VaxiJen is the first server for alignment-independent prediction of protective antigens. It was developed to allow antigen classification solely based on the physicochemical properties of proteins without recourse to sequence alignment.
The server can be used on its own or in combination with alignment-based prediction methods.
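The ACC transformation described above maps a variable-length protein sequence to a fixed-length vector of lagged covariances between amino acid property scales, so that sequences of any length become comparable inputs for a classifier. A minimal sketch with toy property tables (VaxiJen uses principal-property z-scale descriptors; the values and names below are illustrative):

```python
def acc_transform(seq, props, max_lag=2):
    """Auto cross covariance: map a residue sequence to a fixed-length
    vector of lagged (co)variances between per-residue property tracks."""
    n = len(seq)
    tracks = {}
    for name, table in props.items():
        vals = [table[aa] for aa in seq]
        mean = sum(vals) / n
        tracks[name] = [v - mean for v in vals]  # centre each track
    vec = []
    for a in sorted(props):          # property j
        for b in sorted(props):      # property k (a == b: auto-covariance)
            for lag in range(1, max_lag + 1):
                vec.append(sum(tracks[a][i] * tracks[b][i + lag]
                               for i in range(n - lag)) / (n - lag))
    return vec
```

The output length is (number of properties)^2 x max_lag, independent of sequence length, which is precisely what makes the method alignment-free.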