972 results for Propositional calculus.


Relevance: 10.00%

Publisher:

Abstract:

Architectural (bad) smells are design decisions found in software architectures that degrade the ability of systems to evolve. This paper presents an approach to verifying that a software architecture is smell-free, using the Archery architectural description language. The language provides a core for modelling software architectures and an extension for specifying constraints. The approach consists of precisely specifying architectural smells as constraints, and then verifying that software architectures do not satisfy any of them. The constraint language is based on a propositional modal logic with recursion that includes: a converse operator for relations among architectural concepts, graded modalities for describing the cardinality of such relations, and nominals referencing architectural elements. Four architectural smells illustrate the approach.
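The verification idea, specifying each smell as a constraint and then checking that no architectural element satisfies it, can be sketched outside Archery as a plain predicate check over a toy architecture graph. All names below are hypothetical; the paper's real constraint language is the modal logic described above:

```python
# Sketch only: a smell as a predicate over a toy architecture graph.
# The "ambiguous interface" smell here is a graded-modality-style
# constraint: "more than max_clients successors via attachment".

# architecture: element -> set of elements it is attached to
attachments = {
    "hub": {"a", "b", "c", "d"},
    "a": {"hub"}, "b": {"hub"}, "c": {"hub"}, "d": {"hub"},
}

def ambiguous_interface(elem, max_clients=3):
    """Smell check: does this element mediate too many others?"""
    return len(attachments.get(elem, ())) > max_clients

def smell_free(smells):
    """An architecture is smell-free if no element satisfies any smell."""
    return not any(smell(e) for smell in smells for e in attachments)

print(smell_free([ambiguous_interface]))  # hub has 4 clients -> False
```

The same shape generalizes: each smell becomes one constraint, and verification is the conjunction of their negations over all elements.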

Relevance: 10.00%

Publisher:

Abstract:

Hybrid logics, which add to the modal description of transition structures the ability to refer to specific states, offer a generic framework to approach the specification and design of reconfigurable systems, i.e., systems with reconfiguration mechanisms governing the dynamic evolution of their execution configurations in response to both external stimuli and internal performance measures. A formal representation of such systems is through transition structures whose states correspond to the different configurations they may adopt. Each node is therefore endowed with, for example, an algebra, or a first-order structure, to precisely characterise the semantics of the services provided in the corresponding configuration. This paper characterises equivalence and refinement for these sorts of models in a way which is independent of (or parametric on) whatever logic (propositional, equational, fuzzy, etc.) is found appropriate to describe the local configurations. A Hennessy–Milner-like theorem is proved for hybridised logics.
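The parametric notion of equivalence can be illustrated with a toy bisimulation check that takes the local-configuration comparison as a function argument. This is a sketch under assumed names, not the paper's formal definition (a real checker would use partition refinement and handle cycles properly):

```python
# Naive bisimulation check, parametric on the local equivalence used to
# compare the structures attached to each state (hypothetical example).

def bisimilar(s, t, succ, local_eq, visited=None):
    """s and t agree locally and can match each other's transitions."""
    visited = visited or set()
    if (s, t) in visited:
        return True
    if not local_eq(s, t):
        return False
    visited = visited | {(s, t)}
    forth = all(any(bisimilar(s2, t2, succ, local_eq, visited)
                    for t2 in succ[t]) for s2 in succ[s])
    back = all(any(bisimilar(s2, t2, succ, local_eq, visited)
                   for s2 in succ[s]) for t2 in succ[t])
    return forth and back

# two configurations, locally compared via an assigned "service" label
succ = {"p0": ["p1"], "p1": [], "q0": ["q1"], "q1": []}
service = {"p0": "idle", "p1": "busy", "q0": "idle", "q1": "busy"}
local_eq = lambda a, b: service[a] == service[b]
print(bisimilar("p0", "q0", succ, local_eq))  # True
```

Swapping `local_eq` for satisfaction of equational, fuzzy, or first-order conditions is exactly the parametricity the abstract describes.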

Relevance: 10.00%

Publisher:

Abstract:

The main object of the present paper consists in giving formulas and methods which enable us to determine the minimum number of repetitions or of individuals necessary to guarantee to some extent the success of an experiment. The theoretical basis of all processes consists essentially in the following. Knowing the frequency p of the desired and q of the undesired events, we may calculate the frequency of all possible combinations to be expected in n repetitions by expanding the binomial $(p+q)^n$. Determining which of these combinations we want to avoid, we calculate their total frequency, selecting the value of the exponent n of the binomial in such a way that this total frequency is equal to or smaller than the accepted limit of precision:

$$ n!\,p^n \left[ \frac{1}{n!}\left(\frac{q}{p}\right)^{n} + \frac{1}{1!\,(n-1)!}\left(\frac{q}{p}\right)^{n-1} + \frac{1}{2!\,(n-2)!}\left(\frac{q}{p}\right)^{n-2} + \frac{1}{3!\,(n-3)!}\left(\frac{q}{p}\right)^{n-3} + \cdots \right] \le P_{\lim} \quad (1b) $$

There does not exist an absolute limit of precision, since its value depends not only upon psychological factors in our judgement but is at the same time a function of the number of repetitions. For this reason I have proposed (1, 56) two relative values, one equal to 1/(5n) as the lowest value of probability and the other equal to 1/(10n) as the highest value of improbability, leaving between them what may be called the "region of doubt". However, these formulas cannot be applied in our case, since the number n is just the unknown quantity. Thus we have to use, instead of the more exact values of these two formulas, the conventional limits P_lim equal to 0.05 (precision 5%), 0.01 (precision 1%) and 0.001 (precision 0.1%). The binomial formula as explained above (cf. formula 1, pg. 85), however, is of rather limited applicability owing to the excessive calculation necessary, and we thus have to procure approximations as substitutes.
We may use, without loss of precision, the following approximations: a) the normal or Gaussian distribution when the expected frequency p has any value between 0.1 and 0.9 and when n is at least greater than ten; b) the Poisson distribution when the expected frequency p is smaller than 0.1. Tables V to VII show for some special cases that these approximations are very satisfactory. The practical solution of the problems stated in the introduction can now be given:

A) What is the minimum number of repetitions necessary in order to avoid that any one of a treatments, varieties, etc. may accidentally be always the best, the best and second best, the first, second and third best, or finally one of the m best treatments, varieties, etc.? Using the first term of the binomial, we have the following equation for n:

$$ n = \frac{\log P_{\lim}}{\log(m:a)} = \frac{\log P_{\lim}}{\log m - \log a} \quad (5) $$

B) What is the minimum number of individuals necessary in order that a certain type, expected with the frequency p, may appear in at least one, two, three, or a = m + 1 individuals?

1) For p between 0.1 and 0.9, using the Gaussian approximation, we have:

$$ b = \delta\sqrt{\frac{1-p}{p}}, \qquad c = \frac{m}{p} \quad (7) $$

$$ n = \left(\frac{b + \sqrt{b^2 + 4c}}{2}\right)^{2}, \qquad n' = \frac{1}{p}, \qquad n_{cor} = n + n' \quad (8) $$

We have to use the correction n' when p has a value between 0.25 and 0.75. The Greek letter δ represents in the present case the unilateral limits of the Gaussian distribution for the three conventional limits of precision: 1.64, 2.33 and 3.09 respectively. If we are only interested in having at least one individual, m becomes equal to zero (a = 1) and the formula reduces to:

$$ c = 0, \qquad n = b^{2} = \delta^{2}\,\frac{1-p}{p}, \qquad n' = \frac{1}{p}, \qquad n_{cor} = n + n' \quad (9) $$

2) If p is smaller than 0.1 we may use Table 1 in order to find the mean m of a Poisson distribution and determine n = m/p.

C) What is the minimum number of individuals necessary for distinguishing two frequencies p1 and p2?
1) When p1 and p2 are values between 0.1 and 0.9 we have:

$$ n = \left[\frac{\delta\left(\sqrt{p_1(1-p_1)} + \sqrt{p_2(1-p_2)}\right)}{p_1 - p_2}\right]^{2}, \qquad n' = \frac{1}{p_1 - p_2}, \qquad n_{cor} = n + n' \quad (13) $$

We have again to use the unilateral limits of the Gaussian distribution. The correction n' should be used if at least one of the values p1 or p2 lies between 0.25 and 0.75. A more complicated formula may be used in cases where we want to increase the precision:

$$ b = \frac{\delta\left(\sqrt{p_1(1-p_1)} + \sqrt{p_2(1-p_2)}\right)}{p_1 - p_2}, \qquad c = \frac{m}{p_1 - p_2}, \qquad n = \left(\frac{b + \sqrt{b^2 + 4c}}{2}\right)^{2}, \qquad n' = \frac{1}{p_1 - p_2} \quad (14) $$

2) When both p1 and p2 are smaller than 0.1, we determine the quotient p1 : p2 and procure the corresponding number m2 of a Poisson distribution in Table 2. The value of n is found by the equation:

$$ n = \frac{m_2}{p_2} \quad (15) $$

D) What is the minimum number necessary for distinguishing three or more frequencies p1, p2, p3?

1) If the frequencies p1, p2, p3 are values between 0.1 and 0.9, we have to solve the individual equations and use the highest value of n thus determined:

$$ n_{1.2} = \left[\frac{\delta\left(\sqrt{p_1(1-p_1)} + \sqrt{p_2(1-p_2)}\right)}{p_1 - p_2}\right]^{2} \quad (16) $$

δ now represents the bilateral limits of the Gaussian distribution: 1.96, 2.58 and 3.29.

2) No table was prepared for the relatively rare cases of a comparison of three or more frequencies below 0.1; in such cases extremely high numbers would be required.

E) A process is given which serves to solve two problems of an informatory nature: a) if a special type appears among n individuals with a frequency p(obs), what may be the corresponding ideal value of p(exp); or b) if we study samples of n individuals and expect a certain type with a frequency p(exp), what may be the extreme limits of p(obs) in individual families?

1) If we are dealing with values between 0.1 and 0.9 we may use Table 3.
To solve the first question, we select the respective horizontal line for p(obs), determine which column corresponds to our value of n, and find the respective value of p(exp) by interpolating between columns. In order to solve the second problem, we start with the respective column for p(exp) and find the horizontal line for the given value of n, either directly or by interpolation.

2) For frequencies smaller than 0.1 we have to use Table 4 and transform the fractions p(exp) and p(obs) into numbers of the Poisson series by multiplying by n. In order to solve the first problem, we verify in which line the lower Poisson limit is equal to m(obs) and transform the corresponding value of m into the frequency p(exp) by dividing by n. The observed frequency may thus be a chance deviate of any value between 0.0... and the value given by dividing the value of m in the table by n. In the second case we first transform the expectation p(exp) into a value of m and procure in the horizontal line corresponding to m(exp) the extreme values of m, which must then be transformed, by dividing by n, into values of p(obs).

F) Partial and progressive tests may be recommended in all cases where there is a lack of material or where the loss of time is less important than the cost of large-scale experiments, since in many cases the minimum number necessary to guarantee the results within the limits of precision is rather large. One should not forget that the minimum number really represents at the same time a maximum number, necessary only if one takes into consideration essentially the unfavourable variations; smaller numbers may frequently give already satisfactory results. For instance, by definition, we know that a frequency of p means that we expect one individual in every total of 1/p. If there were no chance variations, this number 1/p would be sufficient, and if there were favourable variations a still smaller number might yield one individual of the desired type.
Thus, trusting to luck, one may start the experiment with numbers smaller than the minimum calculated according to the formulas given above, and increase the total until the desired result is obtained; this may well be before the "minimum number" is reached. Some concrete examples of this partial or progressive procedure are given from our genetical experiments with maize.
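The simplest case of problem B, at least one individual of a type with frequency p, can be worked through numerically: the first term of the binomial expansion, q^n, is the probability of complete failure, and the minimum n is the smallest exponent driving it below the conventional P_lim. This is an illustrative sketch, not a value taken from the paper's tables:

```python
# Minimum number of individuals so that a type of frequency p appears
# at least once, with failure probability at most p_lim.
import math

def min_n_at_least_one(p, p_lim=0.05):
    """Smallest n with q**n <= p_lim, where q = 1 - p is the frequency
    of the undesired event (first term of the binomial (p+q)**n)."""
    q = 1.0 - p
    return math.ceil(math.log(p_lim) / math.log(q))

n = min_n_at_least_one(0.25, 0.05)
print(n, (1 - 0.25) ** n)  # n individuals; failure probability <= 0.05
```

For p = 0.25 and the 5% precision limit this gives n = 11, since 0.75^10 ≈ 0.056 still exceeds 0.05 while 0.75^11 ≈ 0.042 does not.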

Relevance: 10.00%

Publisher:

Abstract:

The present thesis is a contribution to the debate on the applicability of mathematics; it examines the interplay between mathematics and the world, using historical case studies. The first part of the thesis consists of four small case studies. In chapter 1, I criticize "ante rem structuralism", proposed by Stewart Shapiro, by showing that his so-called "finite cardinal structures" are in conflict with mathematical practice. In chapter 2, I discuss Leonhard Euler's solution to the Königsberg bridges problem. I propose interpreting Euler's solution both as an explanation within mathematics and as a scientific explanation. I put the insights from the historical case to work against recent philosophical accounts of the Königsberg case. In chapter 3, I analyze the predator-prey model proposed by Lotka and Volterra. I extract some interesting philosophical lessons from Volterra's original account of the model, such as: Volterra's remarks on mathematical methodology; the relation between mathematics and idealization in the construction of the model; some relevant details in the derivation of the Third Law; and notions of intervention that are motivated by one of Volterra's main mathematical tools, phase spaces. In chapter 4, I discuss scientific and mathematical attempts to explain the structure of the bee's honeycomb. In the first part, I discuss a candidate explanation, based on the mathematical Honeycomb Conjecture, presented in Lyon and Colyvan (2008). I argue that this explanation is not scientifically adequate. In the second part, I discuss other mathematical, physical and biological studies that could contribute to an explanation of the bee's honeycomb. The upshot is that most of the relevant mathematics is not yet sufficiently understood, and there is also an ongoing debate as to the biological details of the construction of the bee's honeycomb. The second part of the thesis is a larger case study from physics: the genesis of general relativity.
Chapter 5 is a short introduction to the history, physics and mathematics relevant to the genesis of general relativity (GR). Chapter 6 discusses the historical question of what Marcel Grossmann contributed to the genesis of GR. I examine the so-called "Entwurf" paper, an important joint publication by Einstein and Grossmann containing the first tensorial formulation of GR. By comparing Grossmann's part with the mathematical theories he used, we can gain a better understanding of what is involved in the first steps of assimilating a mathematical theory to a physical question. In chapter 7, I introduce and discuss a recent account of the applicability of mathematics to the world, the Inferential Conception (IC), proposed by Bueno and Colyvan (2011). I give a short exposition of the IC, offer some critical remarks on the account, discuss potential philosophical objections, and propose some extensions of the IC. In chapter 8, I put the Inferential Conception to work in the historical case study: the genesis of GR. I analyze three historical episodes, using the conceptual apparatus provided by the IC. In episode one, I investigate how the starting point of the application process, the "assumed structure", is chosen. Then I analyze two small application cycles that led to revisions of the initial assumed structure. In episode two, I examine how the application of "new" mathematics, the application of the Absolute Differential Calculus (ADC) to gravitational theory, meshes with the IC. In episode three, I take a closer look at two of Einstein's failed attempts to find a suitable differential operator for the field equations, and apply the conceptual tools provided by the IC so as to better understand why he erroneously rejected both the Ricci tensor and the November tensor in the Zurich Notebook.

Relevance: 10.00%

Publisher:

Abstract:

In this paper we summarise some of our recent work on consumer behaviour, drawing on recent developments in behavioural economics, in which consumers are embedded in a social context, so their behaviour is shaped by their interactions with other consumers. For the purpose of this paper we also allow consumption to cause environmental damage. Analysing the social context of consumption naturally lends itself to the use of game theoretic tools, and indicates that we seek to develop links between economics and sociology rather than economics and psychology, which has been the more predominant field for work in behavioural economics. We shall be concerned with three sets of issues: conspicuous consumption, consumption norms and altruistic behaviour. Our aim is to show that building links between sociological and economic approaches to the study of consumer behaviour can lead to significant and surprising implications for conventional economic policy prescriptions, especially with respect to environmental policy.

Relevance: 10.00%

Publisher:

Abstract:

The most recent development of arch bridges has led to a new typology: "spatial arch bridges". A spatial arch bridge is understood as any arch bridge in which, owing to its geometric and structural configuration, gravity loads generate internal forces not contained in the plane of the arch. On the one hand, they appear in order to satisfy functional needs, when arch structures turn out to be the most suitable for supporting curved decks and thus avoiding intermediate supports. From an aesthetic point of view, they arise as a demand of new bridges in urban environments, seeking not only a careful form but also aiming to become emblems of the city through originality and innovation. Their design and construction are possible thanks to the great possibilities offered by new computer-based calculation and drawing methods, in which, through increases in memory and speed, ever more complete programs and new models, closer to reality, are employed. No less important is the development of auxiliary construction means and of CAD/CAM tools, which makes previously unthinkable forms manufacturable by numerical control. This opens up endless possibilities for design and structure. However, the design and construction of these new typologies have not been accompanied by advances in the state of knowledge grounded in research, since only a few studies have been carried out that partially explain the structural response of these bridges. There is therefore a need to deepen the state of knowledge and clarify their structural response, as well as, finally, to propose design criteria that support the conception and design phases of these new typologies.

Relevance: 10.00%

Publisher:

Abstract:

In this paper, we establish lower and upper Gaussian bounds for the probability density of the mild solution to the stochastic heat equation with multiplicative noise and in any space dimension. The driving perturbation is a Gaussian noise which is white in time with some spatially homogeneous covariance. These estimates are obtained using tools of the Malliavin calculus. The most challenging part is the lower bound, which is obtained by adapting a general method developed by Kohatsu-Higa to the underlying spatially homogeneous Gaussian setting. Both lower and upper estimates have the same form: a Gaussian density with a variance which is equal to that of the mild solution of the corresponding linear equation with additive noise.
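Schematically, the two-sided estimate described above has the following shape: a Gaussian density whose variance is that of the mild solution of the corresponding linear equation with additive noise. The constants c1, C1, c2, C2, the centering term and the notation are illustrative assumptions here, not the paper's exact statement:

```latex
% p_{u(t,x)}(y): density of the mild solution u(t,x) at the point y
% \sigma_t^2   : variance of the mild solution of the corresponding
%                linear equation with additive noise
% m_{t,x}      : a centering term (e.g. the deterministic part)
\frac{c_1}{\sigma_t}\exp\!\left(-\frac{C_1\,|y - m_{t,x}|^2}{\sigma_t^2}\right)
\;\le\; p_{u(t,x)}(y) \;\le\;
\frac{c_2}{\sigma_t}\exp\!\left(-\frac{C_2\,|y - m_{t,x}|^2}{\sigma_t^2}\right)
```

The point of the abstract is that both sides share the same variance functional; only the constants differ between the lower and upper bounds.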

Relevance: 10.00%

Publisher:

Abstract:

Research project carried out during a stay at the Università degli studi di Siena, Italy, between 2007 and 2009. The project consisted of a study of the logical formalization of reasoning in the presence of vagueness, using the methods of Algebraic Logic and Proof Theory. Work proceeded in four complementary directions. First, a new approach, more abstract than the hitherto dominant paradigm, was proposed for the study of fuzzy logic systems. Until now, the study of these systems had focused essentially on obtaining semantics based on continuous (or at least left-continuous) t-norms. At a first level of greater abstraction, we studied the completeness properties of fuzzy logics (both propositional and first-order) with respect to semantics defined over any chain of truth values, not necessarily only over the unit interval of the real numbers. Then, at a still more abstract level, the so-called Leibniz hierarchy of Abstract Algebraic Logic, which classifies all logical systems with good algebraic behaviour, was taken and expanded into a new hierarchy (which we call implicational) that allows the definition of new classes of fuzzy logics containing almost all those known so far. Second, a line of research begun in recent years was continued, consisting of the study of partial truth as a syntactic notion (that is, as explicit truth constants in the proof systems of fuzzy logics). For the first time, rational semantics were considered for the propositional logics, and real and rational semantics for the first-order logics expanded with constants.
Third, the more fundamental problem of the meaning and usefulness of fuzzy logics as models of (part of) the phenomena of vagueness was addressed in a final article of a more philosophical and expository character, and in a more technical one in which we defend the necessity of, and present the state of the art in, the study of the algebraic structures associated with fuzzy logics. Finally, the last part of the project was devoted to the study of the arithmetical complexity of first-order fuzzy logics.
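The t-norm-based semantics mentioned above can be shown in miniature: three standard t-norms on [0, 1], together with their residua (the truth functions of implication). This is an illustrative sketch, not part of the project's results; the closed forms are standard:

```python
# Three standard t-norms on [0, 1] and a brute-force residuum
# (the residuum x -> y is the largest z with tnorm(x, z) <= y).
godel = min
product = lambda x, y: x * y
lukasiewicz = lambda x, y: max(0.0, x + y - 1.0)

def residuum(tnorm, x, y, steps=10001):
    """Residuum computed on a grid over [0, 1] (illustration only)."""
    return max(k / (steps - 1) for k in range(steps)
               if tnorm(x, k / (steps - 1)) <= y + 1e-12)

# The Lukasiewicz residuum has the closed form min(1, 1 - x + y):
print(residuum(lukasiewicz, 0.8, 0.3))  # ~0.5
```

Replacing the unit interval by an arbitrary chain of truth values, as the project does, keeps this algebraic shape while dropping the reliance on the reals.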

Relevance: 10.00%

Publisher:

Abstract:

While mobile technologies can provide great personalized services for mobile users, they also threaten their privacy. This personalization-privacy paradox is particularly salient for mobile applications based on context-aware technology, where a user's behaviors, movements and habits can be associated with a consumer's personal identity. In this thesis, I studied privacy issues in the mobile context, focusing particularly on the design of an adaptive privacy management system for context-aware mobile devices, and explored the role of personalization and control over users' personal data. This allowed me to make multiple contributions, both theoretical and practical. On the theoretical side, I propose and prototype an adaptive Single Sign-On solution that uses the user's context information to protect private information on smartphones. To validate this solution, I first showed that a user's context is a unique user identifier and that context-awareness technology can increase the user's perceived ease of use of the system and the service provider's authentication security. I then followed a design-science research paradigm and implemented this solution in a mobile application called "Privacy Manager". I evaluated its utility through several focus-group interviews; overall, the proposed solution fulfilled the expected functions and users expressed their intention to use the application. To better understand the personalization-privacy paradox, I built on the theoretical foundations of privacy calculus and the technology acceptance model to conceptualize a theory of users' mobile privacy management. I also examined the roles of personalization and control in my model and how these two elements interact with privacy calculus and the mobile technology model. In the practical realm, this thesis contributes to the understanding of the tradeoff between the benefits of personalized services and the privacy concerns they may cause. By pointing out new opportunities to rethink how a user's context information can protect private data, it also suggests new elements for privacy-related business models.

Relevance: 10.00%

Publisher:

Abstract:

Nowadays the use of cryptography has become completely widespread, both in processes of secure transmission and exchange of information and in the secret storage of data. It is a discipline whose theoretical foundations lie in Algebra and in Probability Calculus. The graphical interfaces were programmed in Java, with some, albeit very elementary, manipulation of XML documents.

Relevance: 10.00%

Publisher:

Abstract:

This is a programme of psycho-pedagogical counselling and intervention in a school for working on mental arithmetic in primary education, specifically in the middle cycle but also extensible to other cycles, based on the adaptation of different board games through the creation of mathematics workshops and on a practice grounded in constructivism.

Relevance: 10.00%

Publisher:

Abstract:

The general expansion of operators is defined as a linear combination of projectors, and its generalized application to the calculation of molecular integrals is presented. As a numerical example, it is applied to the calculation of electron-repulsion integrals between four s-type functions centred at different points; both the results of the calculation and the definition of scaling with respect to a reference value are shown, which will facilitate the optimization of the expansion for arbitrary parameters. Results adjusted to the exact value are given.

Relevance: 10.00%

Publisher:

Abstract:

The development of differential and integral calculus as a scientific discipline in Europe during the 18th century is not a new topic. But this formation has usually been viewed from the "centre" and through major figures such as Isaac Newton or Gottfried Wilhelm Leibniz. The protagonist of the present work is perhaps not, for many, a first-rank figure: Tomàs Cerdà, a teacher in Barcelona and Madrid during the second half of the 18th century, who "translated" English authors into Spanish but who, through his practice, was actually introducing the new calculus into Spain and, in fact, giving his disciples an orientation to this new discipline. How and why Cerdà decided who would be his guide in the introduction of differential and integral calculus, and what his own contributions to this endeavour were, are the central topics of our work. Our task has thus been to understand better the process of dissemination of scientific knowledge, viewing it at all times as an active part of the very process of construction of that knowledge.

Relevance: 10.00%

Publisher:

Abstract:

A select-divide-and-conquer variational method to approximate configuration interaction (CI) is presented. Given an orthonormal set made up of occupied orbitals (Hartree-Fock or similar) and suitable correlation orbitals (natural or localized orbitals), a large N-electron target space S is split into subspaces S0, S1, S2, ..., SR. S0, of dimension d0, contains all configurations K with attributes (energy contributions, etc.) above a set of thresholds T0, one per attribute; the CI coefficients in S0 always remain free to vary. S1 accommodates configurations K with attributes above T1 ≤ T0. An eigenproblem of dimension d0 + d1 for S0 + S1 is solved first, after which the last d1 rows and columns are contracted into a single row and column, thus freezing the last d1 CI coefficients hereinafter. The process is repeated with successive Sj (j ≥ 2) chosen so that the corresponding CI matrices fit random-access memory (RAM). Davidson's eigensolver is used R times. The final energy eigenvalue (lowest or excited one) is always above the corresponding exact eigenvalue in S. The threshold values {Tj; j = 0, 1, 2, ..., R} regulate accuracy; for large-dimensional S, high accuracy requires S0 + S1 to be solved outside RAM. From there on, however, usually only a few Davidson iterations in RAM are needed for each step, so that Hamiltonian matrix-element evaluation becomes rate-determining. One μhartree accuracy is achieved for an eigenproblem of order 24 × 10^6, involving 1.2 × 10^12 nonzero matrix elements and 8.4 × 10^9 Slater determinants.

Relevance: 10.00%

Publisher:

Abstract:

The vibrational configuration interaction method used to obtain static vibrational (hyper)polarizabilities is extended to dynamic nonlinear optical properties in the infinite optical frequency approximation. Illustrative calculations are carried out on H2O and NH3. The former molecule is weakly anharmonic, while the latter contains a strongly anharmonic umbrella mode. The effects on vibrational (hyper)polarizabilities due to various truncations of the potential energy and property surfaces involved in the calculation are examined.