783 results for Fundamentals of computing theory
Abstract:
This article presents the principal results of the doctoral thesis “Isomerism as internal symmetry of molecules” by Valentin Vankov Iliev (Institute of Mathematics and Informatics), successfully defended before the Specialised Academic Council for Informatics and Mathematical Modelling on 15 December 2008.
Abstract:
The problem of efficiently computing the affine vector operations (addition of two vectors and multiplication of a vector by a scalar over GF(q)), as well as the weight of a given vector, is important for many problems in coding theory, cryptography, VLSI technology, etc. In this paper we propose a new way of representing vectors over GF(3) and GF(4) and describe an efficient implementation of these affine operations. Computing the weights of binary vectors is also discussed.
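The abstract does not spell out the representation, but the flavor of such bit-sliced vector arithmetic can be sketched as follows (a minimal illustration over GF(4) with hypothetical names; it is not the paper's actual encoding):

    # A vector of up to 64 coordinates over GF(4) = {0, 1, w, w+1}, with
    # w^2 = w + 1, stored bit-sliced as two integers (hi, lo): coordinate
    # i holds the element hi_i*w + lo_i.

    def gf4_add(a, b):
        # Addition in characteristic 2 is a coordinatewise XOR per slice.
        return (a[0] ^ b[0], a[1] ^ b[1])

    def gf4_scale(a, s):
        # Multiply every coordinate by a scalar s in {0, 1, 2, 3},
        # where s = 2 encodes w and s = 3 encodes w + 1.
        hi, lo = a
        if s == 0:
            return (0, 0)
        if s == 1:
            return (hi, lo)
        if s == 2:                # w*(hi*w + lo) = (hi^lo)*w + hi
            return (hi ^ lo, hi)
        return (lo, hi ^ lo)      # (w+1)*(hi*w + lo) = lo*w + (hi^lo)

    def gf4_weight(a):
        # A coordinate is nonzero iff either slice bit is set.
        return bin(a[0] | a[1]).count("1")

With this layout a vector addition costs two word-level XORs and the weight a single population count, which is the kind of saving the paper aims at.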
Abstract:
Since the 1950s, the theory of deterministic and nondeterministic finite automata (DFAs and NFAs, respectively) has been a cornerstone of theoretical computer science. In this dissertation, our main object of study is minimal NFAs. In contrast with minimal DFAs, minimal NFAs are computationally challenging: first, there can be more than one minimal NFA recognizing a given language; second, the problem of converting an NFA to a minimal equivalent NFA is NP-hard, even for NFAs over a unary alphabet. Our study is based on the development of two main theories, inductive bases and partials, which in combination form the foundation for an incremental algorithm, ibas, to find minimal NFAs. An inductive basis is a collection of languages with the property that it can generate (through union) each of the left quotients of its elements. We prove a fundamental characterization theorem which says that a language can be recognized by an n-state NFA if and only if it can be generated by an n-element inductive basis. A partial is an incompletely-specified language. We say that an NFA recognizes a partial if its language extends the partial, meaning that the NFA’s behavior is unconstrained on unspecified strings; it follows that a minimal NFA for a partial is also minimal for its language. We therefore direct our attention to minimal NFAs recognizing a given partial. Combining inductive bases and partials, we generalize our characterization theorem, showing that a partial can be recognized by an n-state NFA if and only if it can be generated by an n-element partial inductive basis. We apply our theory to develop and implement ibas, an incremental algorithm that finds minimal partial inductive bases generating a given partial. In the case of unary languages, ibas can often find minimal NFAs of up to 10 states in about an hour of computing time; with brute-force search this would require many trillions of years.
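The inductive-basis characterization can be made concrete on finite language fragments. The toy check below (hypothetical names, finite sets of strings only; it is not the dissertation's ibas algorithm) tests whether every left quotient of every basis element is a union of basis elements:

    from itertools import combinations

    def left_quotient(lang, a):
        # a^{-1}L = { w : aw in L }, computed on a finite set of strings.
        return frozenset(w[1:] for w in lang if w[:1] == a)

    def union_generated(target, basis):
        # Is target exactly a union of basis elements (the empty union included)?
        for r in range(len(basis) + 1):
            for combo in combinations(basis, r):
                if frozenset().union(*combo) == target:
                    return True
        return False

    def is_inductive_basis(basis, alphabet):
        # Every left quotient of every element must itself be generated.
        return all(union_generated(left_quotient(b, a), basis)
                   for b in basis for a in alphabet)

    # Finite example over a unary alphabet: {"a"} and {""} form a
    # 2-element inductive basis, matching a 2-state NFA for the language {"a"}.
    print(is_inductive_basis([frozenset({"a"}), frozenset({""})], ("a",)))  # True

The brute-force search over unions is exponential in the basis size, which is precisely the blow-up the incremental ibas algorithm is designed to avoid.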
Abstract:
Purpose: The purpose of the research described in this paper is to disentangle the rhetoric from the reality in relation to supply chain management (SCM) adoption in practice. There is significant evidence of a divergence between theory and practice in the field of SCM. Research Approach: The authors’ review of the extant SCM literature highlighted a lack of replication studies in SCM, leading to the concept of refined replication being developed. The authors conducted a refined replication of the work of Sweeney et al. (2015), in which a new SCM definitional construct – the Four Fundamentals – was proposed. The work presented in this article refines the previous study but adopts the same three-phase approach: focussed interviews, a questionnaire survey, and focus groups. This article covers the second phase of the refined replication study and describes an integrated design for a questionnaire survey to be undertaken in Britain. Findings and Originality: The article presents an integrated design for questionnaire research, with emphasis on the refined replication of the previous work of Sweeney et al. (2015) carried out in Ireland and its adaptation to the British context. Research Impact: The authors introduce the concept of refined replication in SCM research. This allows previous research to be built upon in order to test understanding of SCM theory and its practical implementation - based on the Four Fundamentals construct - among SCM professionals in Britain. Practical Impact: The article presents an integrated design for questionnaire research that may be used in similar studies.
Abstract:
This dissertation develops an explanation of damage and reliability of critical components and structures within the second law of thermodynamics. The approach relies on the fundamentals of irreversible thermodynamics, specifically the concept of entropy generation due to material degradation as an index of damage. All failure mechanisms that cause degradation, damage accumulation and ultimate failure share a common feature, namely energy dissipation. Energy dissipation, as a fundamental measure of irreversibility in a thermodynamic treatment of non-equilibrium processes, leads to and can be expressed in terms of entropy generation. The dissertation proposes a theory of damage that relates entropy generation to energy dissipation via generalized thermodynamic forces and thermodynamic fluxes, and formally describes the resulting damage. Following the proposed theory of entropic damage, an approach to reliability and integrity characterization based on thermodynamic entropy is discussed. It is shown that the variability in the amount of thermodynamic damage, together with uncertainties about the parameters of a distribution model describing that variability, leads to a more consistent and broader definition of the well-known time-to-failure distribution in reliability engineering. In particular, the reliability function can be derived from the thermodynamic laws rather than estimated from observed failure histories. Furthermore, exploiting the advantages of entropy generation and accumulation as a damage index over common observable markers of damage such as crack size, a method is proposed to cast prognostics and health management (PHM) in terms of entropic damage. The proposed entropic damage approach to reliability and integrity is then demonstrated through experimental validation: the corrosion-fatigue entropy generation function is derived, evaluated and employed for structural integrity, reliability assessment and remaining useful life (RUL) prediction of tested Aluminum 7075-T651 specimens.
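In the standard irreversible-thermodynamics bookkeeping the abstract builds on, the entropy generation rate is the force-flux bilinear form (a generic sketch of the usual relations, not the dissertation's exact derivation):

    \[ \dot{S}_g \;=\; \sum_i X_i\, J_i \;\geq\; 0, \]

with generalized thermodynamic forces \(X_i\) and fluxes \(J_i\). Normalizing the accumulated entropy by its value at failure gives a damage index and a thermodynamically derived reliability function,

    \[ D(t) \;=\; \frac{\int_0^t \dot{S}_g \, d\tau}{S_{g,f}}, \qquad R(t) \;=\; \Pr\!\left[\, S_g(t) < S_{g,f} \,\right], \]

where the randomness of the failure threshold \(S_{g,f}\) and of the process parameters carries the variability the abstract refers to.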
Abstract:
As many countries move toward water sector reforms, practical issues of how water management institutions can better effect allocation, regulation, and enforcement of water rights have emerged. The problem of non-availability of water to tail-enders on an irrigation system in developing countries, due to unlicensed upstream diversions, is well documented. The reliability of access, or equivalently the uncertainty associated with water availability at their diversion point, becomes a parameter that is likely to influence users' applications for water licenses, as well as their willingness to pay for licensed use. The ability of a water agency to reduce this uncertainty through effective water rights enforcement is related to the fiscal ability of the agency to monitor and enforce licensed use. In this paper, this interplay between the users and the agency is explored, considering the hydraulic structure or sequence of water use and the parameters that define the users' and the agency's economics. The potential for free-rider behavior by the users, as well as their proposals for licensed use, are derived conditional on this setting. The analyses presented are developed in the framework of the theory of "Law and Economics," with user interactions modeled as a game-theoretic enterprise. The state of Ceará, Brazil, is used loosely as an example setting, with parameter values for the experiments indexed to be approximately those relevant for current decisions. The potential for using these ideas in participatory decision making is discussed. This paper is an initial attempt to develop a conceptual framework for analyzing such situations, with a focus on water rights enforcement in reservoir-canal systems.
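The free-rider trade-off at the heart of the model can be caricatured in a few lines (an illustrative toy with hypothetical parameter names, not the paper's game-theoretic formulation):

    def prefers_unlicensed(benefit, fee, fine, p_detect):
        # An upstream user free-rides when the expected penalty under
        # imperfect monitoring is cheaper than buying the license.
        licensed = benefit - fee
        unlicensed = benefit - p_detect * fine
        return unlicensed > licensed

    # Monitoring intensity p_detect is bounded by the agency's budget,
    # so compliance is ultimately a fiscal variable.
    print(prefers_unlicensed(benefit=100.0, fee=20.0, fine=80.0, p_detect=0.1))  # True

Raising p_detect above fee/fine (0.25 in this toy) flips the decision, which is the enforcement lever the paper studies.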
Abstract:
This paper presents a personal view of the interaction between the analysis of choice under uncertainty and the analysis of production under uncertainty. Interest in the foundations of the theory of choice under uncertainty was stimulated by applications of expected utility theory such as the Sandmo model of production under uncertainty. This interest led to the development of generalized models including rank-dependent expected utility theory. In turn, the development of generalized expected utility models raised the question of whether such models could be used in the analysis of applied problems such as those involving production under uncertainty. Finally, the revival of the state-contingent approach led to the recognition of a fundamental duality between choice problems and production problems.
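For concreteness, the two families of preference functionals being contrasted can be written in their standard textbook forms (notation not taken from the paper): for outcomes \(x_1 \le \dots \le x_n\) with probabilities \(p_i\), cumulative distribution \(F\), and the convention \(F(x_0) \equiv 0\),

    \[ V_{\mathrm{EU}}(X) \;=\; \sum_{i=1}^{n} p_i\, u(x_i), \qquad V_{\mathrm{RDEU}}(X) \;=\; \sum_{i=1}^{n} \bigl[ w(F(x_i)) - w(F(x_{i-1})) \bigr]\, u(x_i), \]

where \(w\) is a probability-weighting function with \(w(0)=0\) and \(w(1)=1\). Rank-dependent expected utility reduces to expected utility when \(w\) is the identity, which is the sense in which it generalizes the Sandmo-style models.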
Abstract:
This paper analyses the applicability of the main enterprise internationalization theories to the entry of multinational corporations into Brazil throughout five phases of the Brazilian economy, from 1850 to the present. It seeks to verify the explanatory power of each theory over the FDI flows into Brazil. It concludes that there is a contingent relation between the theories and the phases of the economy, and it shows this relationship in a table. In addition, it concludes that the most powerful theory over the researched period was Dunning's eclectic paradigm, mainly due to its Localization considerations. Theoretical propositions are put forward as a contribution to future research.
Abstract:
Discussions opposing the Theory of the Firm to the Theory of Stakeholders are contemporary and polemical. One focal point of such debates is which objective function companies should choose, whether that of the shareholders or that of the stakeholders, and whether it is possible to opt for both simultaneously. Several empirical studies have attempted to test a possible correlation between both functions, and there has been no consensus so far. The objective of the present research is to examine a gap in such discussions: is there (or not) a subordination of the stakeholders' objective function to that of the shareholders? The research is empirical and analytical and employs quantitative methods. Hypotheses were tested and data analyzed using non-parametric (chi-square test) and parametric procedures (frequency, correlation coefficient). Secondary data was collected from the Economática database and from the Brazilian Institute of Social and Economic Analyses (IBASE) website, relative to public companies that published their Social Balance Statements following the IBASE model from 1999 to 2006; the sample amounted to 65 companies. To assess the shareholders' objective function, a proxy was created based on three indices: ROE (return on equity), Enterprise Value, and Tobin's Q. To assess the stakeholders' objective function, a proxy was created employing the following IBASE social balance indices: internal (ISI), external (ISE), and environmental (IAM). The results showed no evidence of subordination of the stakeholders' objective function to that of the shareholders in the analyzed companies, negating initial expectations and calling for deeper investigation of the results. The main conclusion, that the hypothesized subordination does not take place, is limited to the sample investigated here and calls for ongoing research aiming at improvements which may lead to sample enlargement and, as a consequence, may make feasible the application of other statistical techniques allowing a more thorough analysis of the studied phenomenon.
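A minimal sketch of the kind of non-parametric check described (illustrative synthetic counts, not the IBASE data):

    from scipy.stats import chi2_contingency

    # Rows: companies binned by the shareholder proxy (low/high);
    # columns: binned by the stakeholder proxy (low/high).
    table = [[18, 14],
             [15, 18]]
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2={chi2:.2f}, p={p:.3f}")  # a large p gives no evidence of dependence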
Abstract:
Using a random sample of university students to test general strain theory (GST), this study expanded on previous tests of strain theory in two ways. First, situational anger was measured, a construct that had not been used thus far in assessments of general strain. In addition, this research examined the role of social support networks as a conditioning influence on the effects of strain and anger on intentions to commit three types of criminal behavior (serious assault, shoplifting, and driving under the influence of alcohol [DUI]). The results provided mixed support for GST. While the link between anger and crime was confirmed, the nature of that relationship in some cases ran counter to the theory. Moreover, the evidence indicated that the role of social support networks was complex, and varied as a conditioning influence on intentions to engage in criminal activities. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
The quasi mode theory of macroscopic quantization in quantum optics and cavity QED developed by Dalton, Barnett and Knight is generalized. This generalization allows for cases in which two or more quasi permittivities, along with their associated mode functions, are needed to describe the classical optics device. It brings problems such as reflection and refraction at a dielectric boundary, the linear coupler, and the coupling of two optical cavities within the scope of the theory. For the most part, the results that are obtained here are simple generalizations of those obtained in previous work. However the coupling constants, which are of great importance in applications of the theory, are shown to contain significant additional terms which cannot be 'guessed' from the simpler forms. The expressions for the coupling constants suggest that the critical factor in determining the strength of coupling between a pair of quasi modes is their degree of spatial overlap. In an accompanying paper a fully quantum theoretic derivation of the laws of reflection and refraction at a boundary is given as an illustration of the generalized theory. The quasi mode picture of this process involves the annihilation of a photon travelling in the incident region quasi mode, and the subsequent creation of a photon in either the incident region or transmitted region quasi modes.
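Schematically, the statement that coupling strength tracks spatial overlap corresponds to quasi mode coupling constants of the familiar overlap-integral form (a generic sketch, not the paper's exact expression):

    \[ \lambda_{ij} \;\propto\; \int d^{3}\mathbf{r}\; \Delta\epsilon(\mathbf{r})\, \mathbf{U}_i^{*}(\mathbf{r}) \cdot \mathbf{U}_j(\mathbf{r}), \]

where \(\mathbf{U}_i\) and \(\mathbf{U}_j\) are quasi mode functions and \(\Delta\epsilon\) is the difference between the true permittivity and the quasi permittivities; the integral, and hence the coupling, is appreciable only where the two modes spatially overlap.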
Abstract:
This paper deals with non-Markovian behavior in atomic systems coupled to a structured reservoir of quantum electromagnetic field modes, with particular relevance to atoms interacting with the field in high-Q cavities or photonic band-gap materials. In cases such as the former, we show that the pseudomode theory for single-quantum reservoir excitations can be obtained by applying the Fano diagonalization method to a system in which the atomic transitions are coupled to a discrete set of (cavity) quasimodes, which in turn are coupled to a continuum set of (external) quasimodes with slowly varying coupling constants and continuum mode density. Each pseudomode can be identified with a discrete quasimode, which gives structure to the actual reservoir of true modes via the expressions for the equivalent atom-true mode coupling constants. The quasimode theory enables cases of multiple excitation of the reservoir to now be treated via Markovian master equations for the atom-discrete quasimode system. Applications of the theory to one, two, and many discrete quasimodes are made. For a simple photonic band-gap model, where the reservoir structure is associated with the true mode density rather than the coupling constants, the single quantum excitation case appears to be equivalent to a case with two discrete quasimodes.
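The Markovian master equations mentioned for the atom plus discrete quasimode system have the standard Lindblad structure (the generic form, not the paper's specific equation):

    \[ \dot{\rho} \;=\; -\frac{i}{\hbar}\,[H_{\mathrm{atom}} + H_{\mathrm{qm}} + H_{\mathrm{int}},\, \rho] \;+\; \sum_k \Gamma_k \Bigl( a_k \rho\, a_k^{\dagger} - \tfrac{1}{2}\bigl\{ a_k^{\dagger} a_k,\, \rho \bigr\} \Bigr), \]

where \(a_k\) annihilates a photon in discrete quasimode \(k\) and \(\Gamma_k\) is its decay rate into the continuum quasimodes with slowly varying coupling.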
Abstract:
A package of B-spline finite strip models is developed for the linear analysis of piezolaminated plates and shells. This package is coupled with a global optimization technique in order to enhance the performance of these types of structures, subject to various types of objective functions and/or constraints, with discrete and continuous design variables. The models considered are based on a higher-order displacement field and can be applied to the static, free vibration and buckling analyses of laminated adaptive structures with arbitrary lay-ups, loading and boundary conditions. Genetic algorithms, with either binary or floating-point encoding of the design variables, were used to find optimal locations for the piezoelectric actuators, as well as to determine the best voltages to apply to them in order to obtain a desired structural shape. These models provide an overall economy of computing effort for static and vibration problems.
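As a hedged sketch of the optimization loop described (the structural solver is replaced by a stand-in stub, and all names are hypothetical), a genetic algorithm mixing a discrete placement part with a floating-point voltage part might look like:

    import random

    N_SITES, N_ACTIVE, POP, GENS = 20, 4, 40, 60

    def shape_error(sites, volts):
        # Stand-in for the B-spline finite strip solve: pretend the target
        # shape wants actuators near mid-plate driven at about 50 V.
        return sum((s - N_SITES / 2) ** 2 + (v - 50.0) ** 2
                   for s, v in zip(sites, volts))

    def fitness(genome):
        sites, volts = genome
        return -shape_error(sites, volts)

    def random_genome():
        sites = random.sample(range(N_SITES), N_ACTIVE)          # discrete part
        volts = [random.uniform(-100.0, 100.0) for _ in sites]   # continuous part
        return (sites, volts)

    def mutate(genome):
        sites, volts = genome
        sites = sites[:]
        if random.random() < 0.3:    # relocate one actuator to a free site
            free = [s for s in range(N_SITES) if s not in sites]
            sites[random.randrange(N_ACTIVE)] = random.choice(free)
        volts = [v + random.gauss(0.0, 5.0) for v in volts]      # perturb voltages
        return (sites, volts)

    pop = [random_genome() for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)          # elitist selection
        pop = pop[:POP // 2] + [mutate(random.choice(pop[:POP // 2]))
                                for _ in range(POP - POP // 2)]
    best = max(pop, key=fitness)

A real run would replace shape_error with the finite strip analysis and add crossover, but the mixed binary/floating-point encoding is the feature the abstract highlights.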