987 results for Mathematical structure


Relevance:

70.00%

Publisher:

Abstract:

Extensions of the standard model with N Higgs doublets are simple extensions presenting a rich mathematical structure. An underlying Minkowski structure emerges from the study of both variable space and parameter space. The former can be completely parametrized in terms of two future lightlike Minkowski vectors with spatial parts forming an angle whose cosine is -(N-1)^(-1). For the parameter space, the Minkowski parametrization enables one to impose sufficient conditions for bounded-below potentials, characterize certain classes of local minima, and distinguish charge-breaking vacua from neutral vacua. A particular class of neutral minima presents a degenerate mass spectrum for the physical charged Higgs bosons.
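As a quick sanity check on the quoted cosine (my own restatement of the abstract's formula, not the paper's notation), the angle between the spatial parts interpolates from antiparallel to orthogonal as the number of doublets grows:

```latex
\cos\theta = -\frac{1}{N-1},
\qquad
N=2:\ \cos\theta = -1 \ \text{(antiparallel)},
\qquad
N\to\infty:\ \cos\theta \to 0 \ \text{(orthogonal)}.
```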

Relevance:

70.00%

Publisher:

Abstract:

The statistical properties of trajectories of eigenvalues of Gaussian complex matrices whose Hermitian condition is progressively broken are investigated. It is shown how the ordering on the real axis of the real eigenvalues is reflected in the structure of the trajectories and also in the final distribution of the eigenvalues in the complex plane.
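The setup can be sketched numerically. This is my own minimal illustration (interpolating a Hermitian Gaussian matrix toward a complex Ginibre matrix), not the paper's precise ensemble or breaking scheme:

```python
# Sketch: eigenvalue trajectories as the Hermitian condition of a
# Gaussian matrix is progressively broken, here by interpolating
# between a Hermitian matrix H and a complex Ginibre matrix G.
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2                                    # Hermitian: real spectrum
G = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))  # fully non-Hermitian

for t in np.linspace(0.0, 1.0, 5):
    M = (1 - t) * H + t * G          # Hermiticity broken progressively with t
    ev = np.linalg.eigvals(M)
    print(f"t={t:.2f}  max |Im lambda| = {np.abs(ev.imag).max():.3f}")
```

At t = 0 the eigenvalues sit on the real axis; as t grows they leave it, which is the regime in which their trajectories and ordering can be tracked.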

Relevance:

70.00%

Publisher:

Abstract:

The objective of this work is to characterize the genome of chromosome 1 of A. thaliana, a small flowering plant used as a model organism in studies of biology and genetics, on the basis of a recent mathematical model of the genetic code. I analyze and compare different portions of the genome: genes, exons, coding sequences (CDS), introns, long introns, intergenes, untranslated regions (UTR) and regulatory sequences. In order to accomplish the task, I transformed nucleotide sequences into binary sequences based on the definition of the three different dichotomic classes. The descriptive analysis of the binary strings indicates the presence of regularities in each portion of the genome considered. In particular, there are remarkable differences between coding sequences (CDS and exons) and non-coding sequences, suggesting that the frame is important only for coding sequences and that dichotomic classes can be useful to recognize them. Then, I assessed the existence of short-range dependence between binary sequences computed on the basis of the different dichotomic classes. I used three different measures of dependence: the well-known chi-squared test and two indices derived from the concept of entropy, i.e., Mutual Information (MI) and Sρ, a normalized version of the Bhattacharyya-Hellinger-Matusita distance. The results show that there is a significant short-range dependence structure only for the coding sequences, whose existence is a clue to an underlying error detection and correction mechanism. No doubt, further studies are needed in order to assess how the information carried by dichotomic classes could discriminate between coding and non-coding sequences and, therefore, contribute to unveiling the role of the mathematical structure in error detection and correction mechanisms. Still, I have shown the potential of the approach presented for understanding the management of genetic information.
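As a sketch of one of the dependence measures named above, here is a minimal mutual-information computation between binary sequences. The toy sequences are mine, not a dichotomic-class encoding of real genome data:

```python
# Mutual information (in bits) between two equal-length binary sequences.
import math
from collections import Counter

def mutual_information(x, y):
    """MI in bits between two equal-length discrete sequences."""
    n = len(x)
    pxy = Counter(zip(x, y))   # joint counts
    px = Counter(x)            # marginal counts of x
    py = Counter(y)            # marginal counts of y
    mi = 0.0
    for (a, b), c in pxy.items():
        pab = c / n
        mi += pab * math.log2(pab / ((px[a] / n) * (py[b] / n)))
    return mi

x = [0, 1, 0, 1, 0, 1, 0, 1]
print(mutual_information(x, x))        # → 1.0 (identical sequences share 1 bit)
print(mutual_information(x, [0] * 8))  # → 0.0 (a constant carries no information)
```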

Relevance:

70.00%

Publisher:

Abstract:

The study of operations on representations of objects is well documented in the realm of spatial engineering. However, the mathematical structure and formal proof of these operational phenomena are not thoroughly explored. Other works have often focused on query-based models that seek to order classes and instances of objects in the form of semantic hierarchies or graphs. In some models, nodes of graphs represent objects and are connected by edges that represent different types of coarsening operators. This work, however, studies how the coarsening operator "simplification" can manipulate partitions of finite sets, independent of objects and their attributes. Partitions that are "simplified" first have a collection of elements filtered (removed), and then the remaining partition is amalgamated (some sub-collections are unified). Simplification has many interesting mathematical properties. A finite composition of simplifications can also be accomplished with a single simplification. Also, if one partition is a simplification of the other, the simplified partition is defined to be less than the other partition according to the simp relation. This relation is shown to be a partial-order relation based on simplification. Collections of partitions can not only be proven to have a partial-order structure, but also have a lattice structure and are complete. In regard to a geographic information system (GIS), partitions related to subsets of attribute domains for objects are called views. Objects belong to different views based on whether or not their attribute values lie in the underlying view domain. Given a particular view, objects with their attribute n-tuple codings contained in the view are part of the actualization set on views, and objects are labeled according to the particular subset of the view in which their coding lies.
Though the scope of the work does not focus mainly on queries related directly to geographic objects, it provides verification for the existence of particular views in a system with this underlying structure. Given a finite attribute domain, one can say with mathematical certainty that different views of objects are partially ordered by simplification, and every collection of views has a greatest lower bound and least upper bound, which provides the validity for exploring queries in this regard.
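A minimal sketch of the filter-then-amalgamate reading of "simplification" described above. The representation (sets of frozensets) and the parameter names are my own choices, not the paper's formalism:

```python
# "Simplification" of a partition: filter out some elements, then
# amalgamate (merge) some of the remaining blocks.

def simplify(partition, keep, merge_groups):
    """Restrict each block to `keep`, drop emptied blocks, then union
    the blocks named (by index) in each group of `merge_groups`."""
    filtered = [b & keep for b in partition]
    filtered = [b for b in filtered if b]          # drop emptied blocks
    merged, used = [], set()
    for group in merge_groups:                     # amalgamation step
        merged.append(frozenset().union(*(filtered[i] for i in group)))
        used.update(group)
    merged += [b for i, b in enumerate(filtered) if i not in used]
    return set(merged)

P = [frozenset({1, 2}), frozenset({3}), frozenset({4, 5})]
Q = simplify(P, keep=frozenset({1, 2, 3, 4}), merge_groups=[[0, 1]])
print(Q)   # two blocks: {1, 2, 3} and {4}
```

In the paper's terms, Q would then sit below P under the simp relation, since Q is a simplification of P.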

Relevance:

60.00%

Publisher:

Abstract:

One of the great challenges of the scientific community on theories of genetic information, genetic communication and genetic coding is to determine a mathematical structure related to DNA sequences. In this paper we propose a model of an intra-cellular transmission system of genetic information, similar to a model of a power- and bandwidth-efficient digital communication system, in order to identify a mathematical structure in DNA sequences where such sequences are biologically relevant. The model of a transmission system of genetic information is concerned with the identification, reproduction and mathematical classification of the nucleotide sequence of single-stranded DNA by the genetic encoder. Hence, a genetic encoder is devised where labelings and cyclic codes are established. The establishment of the algebraic structure of the corresponding code alphabets, mappings, labelings, primitive polynomials (p(x)) and code generator polynomials (g(x)) is quite important in characterizing error-correcting code subclasses of G-linear codes. These latter codes are useful for the identification, reproduction and mathematical classification of DNA sequences. The characterization of this model may contribute to the development of a methodology that can be applied in the analysis of mutations and polymorphisms, production of new drugs and genetic improvement, among other things, resulting in the reduction of time and laboratory costs.
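As a toy sketch of the labeling-plus-cyclic-code ingredients (the specific labeling below is one arbitrary choice among the possible ones, not the paper's), nucleotides are mapped onto Z4 and a set of words is checked for the defining property of a cyclic code, closure under cyclic shifts:

```python
# Toy labeling of nucleotides over Z4 and a cyclic-closure check.

LABEL = {"A": 0, "C": 1, "G": 2, "T": 3}   # one of the 24 possible labelings

def encode(seq):
    """Map a nucleotide string to a tuple over Z4."""
    return tuple(LABEL[s] for s in seq)

def cyclic_shifts(word):
    """All cyclic rotations of a word."""
    return {word[i:] + word[:i] for i in range(len(word))}

# A toy "code": all cyclic shifts of one encoded word, plus the zero word.
word = encode("ACGT")
code = cyclic_shifts(word) | {(0, 0, 0, 0)}
assert all(cyclic_shifts(w) <= code for w in code)  # cyclic by construction
print(sorted(code))
```

A real construction would instead take codewords from a generator polynomial g(x) dividing x^n - 1 over the chosen alphabet; this sketch only illustrates the closure property such codes satisfy.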

Relevance:

60.00%

Publisher:

Abstract:

An important consideration in the development of mathematical models for dynamic simulation is the identification of the appropriate mathematical structure. By building models with an efficient structure devoid of redundancy, it is possible to create simple, accurate and functional models. This leads not only to efficient simulation, but to a deeper understanding of the important dynamic relationships within the process. In this paper, a method is proposed for systematic model development for startup and shutdown simulation which is based on the identification of the essential process structure. The key tool in this analysis is the method of nonlinear perturbations for structural identification and model reduction. Starting from a detailed mathematical process description, both singular and regular structural perturbations are detected. These techniques are then used to give insight into the system structure and, where appropriate, to eliminate superfluous model equations or reduce them to other forms. This process retains the ability to interpret the reduced-order model in terms of the physico-chemical phenomena. Using this model reduction technique it is possible to attribute observable dynamics to particular unit operations within the process. This relationship then highlights the unit operations which must be accurately modelled in order to develop a robust plant model. The technique generates detailed insight into the dynamic structure of the models, providing a basis for system re-design and dynamic analysis. The technique is illustrated on the modelling of an evaporator startup. Copyright (C) 1996 Elsevier Science Ltd

Relevance:

60.00%

Publisher:

Abstract:

In this second paper, the three structural measures which have been developed are used in the modelling of a three-stage centrifugal synthesis gas compressor. The goal of this case study is to determine the essential mathematical structure which must be incorporated into the compressor model to accurately model the shutdown of this system. A simple, accurate and functional model of the system is created via the three structural measures. It was found that the model can be correctly reduced into its basic modes and that the order of the differential system can be reduced from 51st to 20th. Of the 31 differential equations, 21 reduce to algebraic relations, 8 become constants and 2 can be deleted, thereby increasing the algebraic set from 70 to 91 equations. An interpretation is also obtained as to which physical phenomena are dominating the dynamics of the compressor and whether the compressor will enter surge during the shutdown. Comparisons of the reduced model performance against the full model are given, showing the accuracy and applicability of the approach. Copyright (C) 1996 Elsevier Science Ltd

Relevance:

60.00%

Publisher:

Abstract:

This thesis investigated the inferential logic of actions and their significations in situations that mobilize the notions of probabilistic composition and chance, as well as the role of signification models in the cognitive functioning of adults. Participants were 12 young adult working-class students, volunteers of both sexes, from a technical course integrated with the secondary-level Youth and Adult Education program. Three individual sessions were held, recorded in audio and in a spreadsheet, using two games, Likid Gaz and Lucky Cassino, from the software Missão Cognição (Haddad-Zubel, Pinkas & Pécaut, 2006), and the game Soma dos Dados (Silva, Rossetti & Cristo, 2012). The task procedures were adapted from Silva and Frezza (2011): 1) presentation of the game; 2) playing the game; 3) semi-structured interview; 4) application of three problem situations with intervention according to the Clinical Method; 5) a new round of the game; and 6) two further problem situations without Clinical Method intervention. Levels of heuristic analysis, game comprehension and signification models were elaborated from the identification of particular procedures and significations in the games. The first study examined the implications of signification models and prior representations for adult thought, considering that the subject organizes prior representations or schemes relative to an object in the form of signification models according to the degree of complexity and novelty of the task and its logical-mathematical structure, which evolve through the process of equilibration, for which the subject requires the demand to signify this aspect of reality. The second study investigated the notion of deducible combination evidenced in the game Likid Gaz, identifying the role of signification models in the choice of procedures, which implied the rejection of systematization or enumeration strategies. The initial levels of heuristic analysis of the game predominated.
The third study examined the notion of probability observed in the game Lucky Cassino, in which most participants reached an intermediate level of game comprehension, with greater diversity of signification models than in the other games, though the most elementary ones predominated. The synthesis of the notions of combination, probability and chance was explored in the fourth study through the game Soma dos Dados (Silva, Rossetti & Cristo, 2012), identifying as a limitation to an adequate understanding of the links interwoven in these notions the significant implication: if random A, then indeterminate D (notation A → D), with the construction of pseudo-necessities and pseudo-obligations, or even local necessities generalized inappropriately. The resistance or obstacles of the object should provoke perturbations, but the cognitive structure, the social environment and cultural models, and affectivity can interfere in this process.

Relevance:

60.00%

Publisher:

Abstract:

Personalization is a key aspect of effective human-computer interaction. In an era with an abundance of information and so many people interacting with it in so many ways, the ability to adjust to its users is crucial for any modern system. Building adaptive systems is a rather complex domain that requires very specific methods to succeed. However, to this day there is still no standard model or architecture for modern adaptive systems. The main motivation of this thesis is to propose a user-modeling architecture able to incorporate the different modules needed to create a system with scalable intelligence through modeling techniques. The modules cooperate to analyze users and characterize their behavior, using that information to provide a customized system experience that increases not only the system's usability but also the user's productivity and knowledge. The proposed architecture consists of three components: a user-information unit, a mathematical structure capable of classifying users, and the technique to use when adapting the content. The user-information unit is responsible for knowing the various types of individuals who may use the system and for capturing every relevant detail of the interactions between the system and its users, and it also contains the database that stores this information. The mathematical structure is the user classifier, whose task is to analyze users and classify them into one of three profiles: beginner, intermediate or advanced. Both Bayesian networks and neural networks are used, and an explanation is given of how to prepare and train them to handle user information. Once the user profile is defined, a technique is needed to adapt the system's content.
In this proposal, a mixed-initiative approach is presented, based on the freedom of both the user and the system to control the communication between them. The proposed architecture was developed as an integral part of the ADSyS project, a dynamic scheduling system used to solve scheduling problems subject to dynamic events. The system is highly complex even for frequent users, hence the need to adapt its content in order to increase its usability. To evaluate the contributions of this work, a computational study on user recognition was carried out, based on two usability-evaluation sessions with distinct groups of users. It was possible to conclude on the benefits of using user-modeling techniques with the proposed architecture.
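A minimal sketch of the three-profile user-classifier component (beginner / intermediate / advanced). The interaction features and thresholds below are my own invented illustration, not ADSyS values; a real implementation would use the Bayesian-network or neural-network classifiers the work describes:

```python
# Toy rule-based stand-in for the user classifier: score three assumed
# interaction features and map the score to one of three profiles.

def classify_user(error_rate, avg_task_seconds, help_requests):
    """Return "beginner", "intermediate" or "advanced" from toy features."""
    score = 0
    score += error_rate < 0.1          # few mistakes
    score += avg_task_seconds < 30     # fast task completion
    score += help_requests == 0        # no help needed
    if score == 3:
        return "advanced"
    if score == 2:
        return "intermediate"
    return "beginner"

print(classify_user(0.05, 20, 0))   # → advanced
print(classify_user(0.20, 25, 0))   # → intermediate
print(classify_user(0.30, 60, 2))   # → beginner
```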

Relevance:

60.00%

Publisher:

Abstract:

We propose to analyze shapes as "compositions" of distances in Aitchison geometry as an alternate and complementary tool to classical shape analysis, especially when size is non-informative. Shapes are typically described by the location of user-chosen landmarks. However the shape – considered as invariant under scaling, translation, mirroring and rotation – does not uniquely define the location of landmarks. A simple approach is to use distances of landmarks instead of the locations of the landmarks themselves. Distances are positive numbers defined up to joint scaling, a mathematical structure quite similar to compositions. The shape fixes only ratios of distances. Perturbations correspond to relative changes of the size of subshapes and of aspect ratios. The power transform increases the expression of the shape by increasing distance ratios. In analogy to subcompositional consistency, results should not depend too much on the choice of distances, because different subsets of the pairwise distances of landmarks uniquely define the shape. Various compositional analysis tools can be applied to sets of distances directly or after minor modifications concerning the singularity of the covariance matrix, and yield results with direct interpretations in terms of shape changes. The remaining problem is that not all sets of distances correspond to a valid shape. Nevertheless, interpolated or predicted shapes can be back-transformed by multidimensional scaling (when all pairwise distances are used) or free geodetic adjustment (when sufficiently many distances are used).
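One compositional tool can be sketched directly: the centred log-ratio (clr) of a set of pairwise distances is invariant under joint rescaling, so size drops out. A minimal illustration of my own (the triangle distances are an invented example):

```python
# clr of pairwise distances: jointly rescaling all distances shifts
# every log by the same constant, which the centring removes.
import numpy as np

def clr(distances):
    """Centred log-ratio transform of a vector of positive distances."""
    logs = np.log(np.asarray(distances, dtype=float))
    return logs - logs.mean()

d = [3.0, 4.0, 5.0]                                   # distances of a triangle
print(np.allclose(clr(d), clr([2 * x for x in d])))   # → True: size dropped out
```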

Relevance:

60.00%

Publisher:

Abstract:

In this paper we explore the sectoral and aggregate implications of some endogeneization rules (i.e. on value-added and final demand) which have been common in the Leontief model and have been recently proposed in the Ghosh model. We detect that these rules may give rise in both models to some allegedly pathological behavior in the sense that sectoral or aggregate output, very often, may not follow the logical and economically expected direct relationship with some underlying endogenous variables—namely, output and value-added in the Ghosh model and output and consumption in the Leontief model. Because of the common mathematical structure, whatever is or seems to be pathological in the Ghosh model also has a symmetric counterpart in the Leontief model. These would not be good news for the inner consistency of these linear models. To avoid such possible inconsistencies, we propose new and simple endogeneization rules that have a sound economic interpretation.
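For reference, the two quantity models share the linear-algebraic form the abstract exploits. A minimal sketch with invented coefficient matrices, using the textbook Leontief and Ghosh relations rather than the paper's proposed endogeneization rules:

```python
# Leontief: x = (I - A)^-1 f   (output driven by final demand)
# Ghosh:    x' = v'(I - B)^-1  (output driven by value added)
import numpy as np

A = np.array([[0.2, 0.3],
              [0.1, 0.4]])      # technical (input) coefficients
f = np.array([10.0, 20.0])      # final demand
x = np.linalg.solve(np.eye(2) - A, f)       # Leontief output
print(x)

B = np.array([[0.2, 0.1],
              [0.3, 0.4]])      # allocation (output) coefficients
v = np.array([5.0, 8.0])        # value added
xg = np.linalg.solve(np.eye(2) - B.T, v)    # Ghosh output
print(xg)
```

The symmetry the abstract relies on is visible here: both solutions invert a matrix of the form (I - M), so any qualitative behavior of one model has a mirror in the other.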

Relevance:

60.00%

Publisher:

Abstract:

Background: Microarray data is frequently used to characterize the expression profile of a whole genome and to compare the characteristics of that genome under several conditions. Geneset analysis methods have been described previously to analyze the expression values of several genes related by known biological criteria (metabolic pathway, pathology signature, co-regulation by a common factor, etc.) at the same time; the low cost of these methods allows for the use of more values to help discover the underlying biological mechanisms. Results: As several methods assume different null hypotheses, we propose to reformulate the main question that biologists seek to answer. To determine which genesets are associated with expression values that differ between two experiments, we focused on three ad hoc criteria: expression levels, the direction of individual gene expression changes (up- or down-regulation), and correlations between genes. We introduce the FAERI methodology, tailored from a two-way ANOVA to examine these criteria. The significance of the results was evaluated according to the self-contained null hypothesis, using label sampling or by inferring the null distribution from normally distributed random data. Evaluations performed on simulated data revealed that FAERI outperforms currently available methods for each type of set tested. We then applied the FAERI method to analyze three real-world datasets on hypoxia response. FAERI was able to detect more genesets than other methodologies, and the genesets selected were coherent with current knowledge of cellular response to hypoxia. Moreover, the genesets selected by FAERI were confirmed when the analysis was repeated on two additional related datasets. Conclusions: The expression values of genesets are associated with several biological effects. The underlying mathematical structure of the genesets allows for analysis of data from several genes at the same time.
Focusing on expression levels, the direction of the expression changes, and correlations, we showed that two-step data reduction allowed us to significantly improve the performance of geneset analysis using a modified two-way ANOVA procedure, and to detect genesets that current methods fail to detect.
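The label-sampling evaluation mentioned above amounts to a permutation test. Here is a generic sketch on a difference of group means (my simplification; FAERI's actual statistic is the modified two-way ANOVA, which this does not reproduce):

```python
# Permutation ("label sampling") p-value for a difference of means.
import random

def permutation_pvalue(group_a, group_b, n_perm=2000, seed=0):
    """Fraction of label shuffles whose |mean difference| is at least
    as extreme as the observed one."""
    rng = random.Random(seed)
    pooled = group_a + group_b
    k = len(group_a)
    observed = abs(sum(group_a) / k - sum(group_b) / len(group_b))
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                    # resample the labels
        diff = abs(sum(pooled[:k]) / k
                   - sum(pooled[k:]) / (len(pooled) - k))
        hits += diff >= observed
    return hits / n_perm

p = permutation_pvalue([5.1, 4.9, 5.3, 5.0], [1.0, 1.2, 0.9, 1.1])
print(p)   # small: the two invented groups are clearly separated
```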

Relevance:

60.00%

Publisher:

Abstract:

One of the disadvantages of old age is that there is more past than future: this, however, may be turned into an advantage if the wealth of experience and, hopefully, wisdom gained in the past can be reflected upon and throw some light on possible future trends. To an extent, then, this talk is necessarily personal, certainly nostalgic, but also self-critical and inquisitive about our understanding of the discipline of statistics. A number of almost philosophical themes will run through the talk: search for appropriate modelling in relation to the real problem envisaged, emphasis on sensible balances between simplicity and complexity, the relative roles of theory and practice, the nature of communication of inferential ideas to the statistical layman, the inter-related roles of teaching, consultation and research. A list of keywords might be: identification of sample space and its mathematical structure, choices between transform and stay, the role of parametric modelling, the role of a sample space metric, the underused hypothesis lattice, the nature of compositional change, particularly in relation to the modelling of processes. While the main theme will be relevance to compositional data analysis, we shall point to substantial implications for general multivariate analysis arising from experience of the development of compositional data analysis…

Relevance:

60.00%

Publisher:

Abstract:

It has been convincingly argued that computer simulation modeling differs from traditional science. If we understand simulation modeling as a new way of doing science, the manner in which scientists learn about the world through models must also be considered differently. This article examines how researchers learn about environmental processes through computer simulation modeling. Suggesting a conceptual framework anchored in a performative philosophical approach, we examine two modeling projects undertaken by research teams in England, both aiming to inform flood risk management. One of the modeling teams operated in the research wing of a consultancy firm; the other comprised university scientists taking part in an interdisciplinary project experimenting with public engagement. We found that in the first context the use of standardized software was critical to the process of improvisation; the obstacles that emerged concerned data and were resolved by exploiting affordances for generating, organizing, and combining scientific information in new ways. In the second context, an environmental competency group, obstacles were related to the computer program, and affordances emerged in the combination of experience-based knowledge with the scientists' skill, enabling a reconfiguration of the mathematical structure of the model and allowing the group to learn about local flooding.

Relevance:

60.00%

Publisher:

Abstract:

Background: Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Results: Based on the GMA canonical representation, we have developed in previous works a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization on the recast GMA model can be performed. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models that extend the power-law formalism to deal with saturation and cooperativity. Conclusions: Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps overcome some of the numerical difficulties that arise during the global optimization task.
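The recasting idea can be seen in a toy example of my own (not one of the paper's models): a saturating Michaelis-Menten rate becomes an exact product of power laws, GMA-style, once an auxiliary variable z = Km + S is introduced (with dz/dt = dS/dt, so the recast system stays closed):

```python
# Recasting a Michaelis-Menten rate v = Vmax*S/(Km+S) into a power-law
# (GMA-style) term Vmax * S * z**-1 via the auxiliary variable z = Km + S.
Vmax, Km = 2.0, 0.5   # invented kinetic constants

def mm_rate(S):
    """Original saturating kinetic rate."""
    return Vmax * S / (Km + S)

def gma_rate(S, z):
    """Recast rate: a product of power laws in the variables (S, z)."""
    return Vmax * S * z ** -1.0

for S in (0.1, 1.0, 10.0):
    assert abs(mm_rate(S) - gma_rate(S, Km + S)) < 1e-12
print("recast rate matches the original kinetic rate")
```

The recast form is exactly equivalent, not an approximation, which is what makes global optimization on the GMA version transposable back to the original problem.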