964 results for BORSUK-ULAM THEOREM


Relevance:

10.00%

Publisher:

Abstract:

Today's networked systems are becoming increasingly complex and diverse. Current simulation and runtime verification techniques do not support developing such systems efficiently; moreover, the reliability of the simulated/verified systems is not thoroughly ensured. To address these challenges, the use of formal techniques to reason about network system development is growing; at the same time, the mathematical background necessary for using formal techniques is a barrier that keeps network designers from employing them efficiently. Thus, these techniques are not widely used for developing networked systems. The objective of this thesis is to propose formal approaches for the development of reliable networked systems while taking efficiency into account. With respect to reliability, we propose the architectural development of correct-by-construction networked system models. With respect to efficiency, we propose reusable network architectures as well as a reusable development process. At the core of our development methodology, we employ abstraction and refinement techniques for the development and analysis of networked systems. We evaluate our proposal by applying the proposed architectures to a pervasive class of dynamic networks, namely wireless sensor network architectures, as well as to a pervasive class of static networks, namely network-on-chip architectures. The ultimate goal of our research is to put forward the idea of building libraries of pre-proved rules for the efficient modelling, development, and analysis of networked systems. We take into account both qualitative and quantitative analysis of networks via varied formal tool support, using a theorem prover (the Rodin platform) and a statistical model checker (SMC-Uppaal).

Relevance:

10.00%

Publisher:

Abstract:

After introducing the no-cloning theorem and the most common forms of approximate quantum cloning, universal quantum cloning is considered in detail. Its connections with the universal NOT gate, quantum cryptography and state estimation are presented and briefly discussed. The state estimation connection is used to show that the amount of extractable classical information and the total Bloch vector length are conserved in universal quantum cloning. The 1 → 2 qubit cloner is also shown to obey a complementarity relation between local and nonlocal information. These are interpreted as a consequence of the conservation of total information in cloning. Finally, the performance of the 1 → M cloning network discovered by Bužek, Hillery and Knight is studied in the presence of decoherence using the approach of Barenco et al., where random phase fluctuations are attached to 2-qubit gates. The expression for average fidelity is calculated for three cases, and it is found to depend on the optimal fidelity and the average of the phase fluctuations in a specific way; this is conjectured to be the form of the average fidelity in the general case. While the cloning network is found to be rather robust, it is nevertheless argued that the scalability of the quantum network implementation is poor: studying the effect of decoherence during the preparation of the initial state of the cloning machine in the 1 → 2 case shows that the loss in average fidelity can be large. This affirms the result of Maruyama and Knight, who reached the same conclusion in a slightly different manner.

Relevance:

10.00%

Publisher:

Abstract:

The superconducting gap is a basic characteristic of a superconductor. While the cuprates and conventional phonon-mediated superconductors are characterized by distinct d- and s-wave pairing symmetries with nodal and nodeless gap distributions respectively, the superconducting gap distributions in iron-based superconductors are rather diversified. Nodeless gap distributions have been directly observed in Ba1–xKxFe2As2, BaFe2–xCoxAs2, LiFeAs, KxFe2–ySe2, and FeTe1–xSex, while signatures of a nodal superconducting gap have been reported in LaOFeP, LiFeP, FeSe, KFe2As2, BaFe2–xRuxAs2, and BaFe2(As1–xPx)2. Due to the multiplicity of the Fermi surface in these compounds, both the s± and d pairing states can be either nodeless or nodal. A nontrivial orbital structure of the order parameter, in particular the presence of gap nodes, leads to disorder effects that are much richer in dx2–y2-wave superconductors than in conventional materials. In contrast to the s-wave case, the Anderson theorem does not apply, and nonmagnetic impurities exhibit a strong pair-breaking influence. In addition, a finite concentration of disorder produces a nonzero density of quasiparticle states at zero energy, which results in a considerable modification of the thermodynamic and transport properties at low temperatures. The influence of order-parameter symmetry on the vortex core structure in iron-based pnictide and chalcogenide superconductors has been investigated in the framework of the quasiclassical Eilenberger equations. The main results of the thesis are as follows. The vortex core characteristics, such as the cutoff parameter, ξh, and the core size, ξ2, defined as the distance at which the density of the vortex supercurrent reaches its maximum, are calculated over wide ranges of temperature, impurity scattering rate, and magnetic field. The cutoff parameter, ξh(B, T, Γ), determines the form factor of the flux-line lattice, which can be obtained in μSR, NMR, and SANS experiments.
A comparison among the applied pairing symmetries is made. In contrast to s-wave systems, in dx2–y2-wave superconductors ξh/ξc2 always increases with the scattering rate Γ. The field dependence of the cutoff parameter strongly affects the second moment of the magnetic field distribution, resulting in a significant difference from nonlocal London theory. It is found that the normalized ξ2/ξc2(B/Bc2) dependence increases with pair-breaking impurity scattering (interband scattering for s±-wave and intraband impurity scattering for d-wave superconductors). Here, ξc2 is the Ginzburg-Landau coherence length determined from the upper critical field Bc2 = Φ0/(2πξc2²), where Φ0 is the flux quantum. Two types of ξ2/ξc2 magnetic field dependence are obtained for s± superconductors. The first has a minimum at low temperatures and small impurity scattering, transforming into a monotonically decreasing function at strong scattering and high temperatures; the second kind of dependence has also been found for d-wave superconductors at intermediate and high temperatures. In contrast, impurity scattering decreases the ξ2/ξc2(B/Bc2) dependence in s++ superconductors. Reasonable agreement was found between the calculated ξh/ξc2 values and those obtained experimentally in nonstoichiometric BaFe2–xCoxAs2 (μSR) and stoichiometric LiFeAs (SANS): ξh/ξc2 is much smaller than one for the first compound and much larger than one for the other. This is explained by the differing influence of two factors: the impurity scattering rate and the pairing symmetry.

Relevance:

10.00%

Publisher:

Abstract:

In this thesis we examine four well-known and traditional concepts of combinatorics on words; however, the contexts in which these topics are treated are not the traditional ones. More precisely, the question of avoidability is asked, for example, in terms of k-abelian squares. Two words are said to be k-abelian equivalent if they have the same number of occurrences of each factor up to length k. Consequently, k-abelian equivalence can be seen as a sharpening of abelian equivalence. This fairly new concept is discussed more broadly than the other topics of this thesis. The second main subject concerns the defect property. The defect theorem is a well-known result for words. We analyze the property, for example, among sets of 2-dimensional words, i.e., polyominoes composed of labelled unit squares. From the defect effect we move to equations. We use a special way of defining a product operation for words and then solve a few basic equations over the constructed partial semigroup. We also consider the satisfiability question and the compactness property with respect to these kinds of equations. The final topic of the thesis deals with palindromes. Some finite words, including all binary words, are uniquely determined up to word isomorphism by the positions and lengths of some of their palindromic factors. The famous Thue-Morse word has the property that for each positive integer n, there exists a factor which cannot be generated by fewer than n palindromes. We prove that, in general, every word that is not ultimately periodic contains a factor which cannot be generated by fewer than 3 palindromes, and we obtain a classification of those binary words each of whose factors is generated by at most 3 palindromes. Surprisingly, these words are related to another much-studied set of words, the Sturmian words.
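As an illustrative sketch (not part of the thesis; the helper names are our own), the k-abelian equivalence defined above can be checked directly by comparing factor counts:

```python
from collections import Counter

def factors_up_to(word, k):
    """Multiset of all factors (contiguous substrings) of length 1..k."""
    return Counter(word[i:i + m]
                   for m in range(1, k + 1)
                   for i in range(len(word) - m + 1))

def k_abelian_equivalent(u, v, k):
    """Words are k-abelian equivalent when they have the same length and
    every factor of length at most k occurs equally often in both."""
    return len(u) == len(v) and factors_up_to(u, k) == factors_up_to(v, k)

# "abab" and "baba" are abelian (1-abelian) equivalent but not 2-abelian
# equivalent, showing how k-abelian equivalence sharpens abelian equivalence.
print(k_abelian_equivalent("abab", "baba", 1))  # True
print(k_abelian_equivalent("abab", "baba", 2))  # False
```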

Relevance:

10.00%

Publisher:

Abstract:

Hepatitis E virus (HEV) is classified within the family Hepeviridae, genus Hepevirus. HEV genotype 3 (Gt3) infections are endemic in pigs in Western Europe and in North and South America and cause zoonotic infections in humans. Several serological assays to detect HEV antibodies in pigs have been developed, at first mainly based on HEV genotype 1 (Gt1) antigens. To develop a sensitive HEV Gt3 ELISA, a recombinant baculovirus expression product of HEV Gt3 open reading frame-2 was produced and coated onto polystyrene ELISA plates. After incubation of porcine sera, bound HEV antibodies were detected with anti-porcine IgG and anti-porcine IgM conjugates. For a primary estimation of the sensitivity and specificity of the assay, sets of sera from pigs experimentally infected with HEV Gt3 were used. For further validation of the assay and to set the cutoff value, a batch of 1100 pig sera was used. All pig sera were tested using the developed HEV Gt3 assay and two other serologic assays based on HEV Gt1 antigens. Since there is no gold standard available for HEV antibody testing, further validation and the definitive setting of the cutoff of the developed HEV Gt3 assay were performed using a statistical approach based on Bayes' theorem. The developed and validated HEV antibody assay showed effective detection of HEV-specific antibodies. This assay can contribute to improved detection of HEV antibodies and enable more reliable estimates of the prevalence of HEV Gt3 in swine in different regions.
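The abstract does not detail the Bayesian cutoff procedure; as a hedged illustration of how Bayes' theorem relates a test result to infection status (all figures below are hypothetical, not from the study):

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """Bayes' theorem: P(infected | test positive), with
    P(+) = Se * p + (1 - Sp) * (1 - p)."""
    p_pos = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    return sensitivity * prevalence / p_pos

# Hypothetical figures: 30% prevalence, 95% sensitivity, 90% specificity.
print(round(positive_predictive_value(0.30, 0.95, 0.90), 3))  # 0.803
```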

Relevance:

10.00%

Publisher:

Abstract:

Resilience is the property of a system to remain trustworthy despite changes. Changes of a different nature, whether due to failures of system components or varying operational conditions, significantly increase the complexity of system development. Therefore, advanced development technologies are required to build robust and flexible system architectures capable of adapting to such changes. Moreover, powerful quantitative techniques are needed to assess the impact of these changes on various system characteristics. Architectural flexibility is achieved by embedding into the system design the mechanisms for identifying changes and reacting to them. Hence a resilient system should have both advanced monitoring and error detection capabilities to recognise changes, as well as sophisticated reconfiguration mechanisms to adapt to them. The aim of such reconfiguration is to ensure that the system stays operational, i.e., remains capable of achieving its goals. Design, verification and assessment of the system reconfiguration mechanisms are challenging and error-prone engineering tasks. In this thesis, we propose and validate a formal framework for the development and assessment of resilient systems. Such a framework provides us with the means to specify and verify complex component interactions, model their cooperative behaviour in achieving system goals, and analyse the chosen reconfiguration strategies. Due to the variety of properties to be analysed, such a framework should have an integrated nature. To ensure the system's functional correctness, it should rely on formal modelling and verification, while, to assess the impact of changes on such properties as performance and reliability, it should be combined with quantitative analysis. To ensure scalability of the proposed framework, we choose Event-B as the basis for reasoning about functional correctness.
Event-B is a state-based formal approach that promotes the correct-by-construction development paradigm and formal verification by theorem proving. Event-B has mature, industrial-strength tool support – the Rodin platform. Proof-based verification, together with the reliance on abstraction and decomposition adopted in Event-B, provides designers with powerful support for the development of complex systems. Moreover, top-down system development by refinement allows developers to explicitly express and verify critical system-level properties. Besides ensuring functional correctness, achieving resilience also requires analysing a number of non-functional characteristics, such as reliability and performance. Therefore, in this thesis we also demonstrate how formal development in Event-B can be combined with quantitative analysis. Namely, we experiment with integrating techniques such as probabilistic model checking in PRISM and discrete-event simulation in SimPy with formal development in Event-B. Such an integration allows us to assess how changes and different reconfiguration strategies affect the overall system resilience. The approach proposed in this thesis is validated by a number of case studies from areas such as robotics, space, healthcare and the cloud.

Relevance:

10.00%

Publisher:

Abstract:

The objective and originality of this paper lie in identifying Stiglitz's main theoretical contributions to Financial Economics, in briefly portraying the contemporary economic thought out of which these contributions emerged, and in suggesting their connections with subsequent economic thought. Grounded in a detailed analysis of Stiglitz's works on finance, his most important theoretical findings are singled out and gathered into four issues: (1) the conditions under which the Modigliani-Miller theorem is valid; (2) the inconsistency inherent in the efficient market hypothesis; (3) the microeconomic effects of asymmetrical information in financial markets; and (4) the real macroeconomic effects of such asymmetries. In all of these topics, the focal point of Stiglitz's theoretical research is the unrealistic underpinnings on which the Arrow-Debreu competitive equilibrium model relies. It is also emphasised that this same perspective, which he coherently followed to construct a fully-fledged theoretical framework, would be preserved in his empirical investigations, notably on developing countries, on which he has concentrated his efforts since the beginning of the nineties.

Relevance:

10.00%

Publisher:

Abstract:

In this paper we extend Kaldor's Neo-Pasinetti theorem to the scope of budgetary interventions based on political orientations. First, we take into account a system of taxes and expenditures. Second, we introduce different reaction functions for public spending, showing the political role of the State in the Cambridge theory of distribution. It turns out that the validity of the Kaldorian results depends on the political orientation adopted by the government, which diminishes the range of application of the Neo-Pasinetti theorem.

Relevance:

10.00%

Publisher:

Abstract:

Confocal and two-photon microscopy have become essential tools in biological research, and today many investigations are not possible without them. The valuable advantage these two techniques offer is the ability of optical sectioning. Optical sectioning makes it possible to obtain 3D visualization of structures, and hence valuable information about the structural relationships and the geometrical and morphological aspects of the specimen. The lateral and axial resolutions achievable by confocal and two-photon microscopy, as in other optical imaging systems, are both defined by the diffraction limit. Any aberration or imperfection present during imaging results in broadening of the calculated theoretical resolution, in blurring and geometrical distortions in the acquired images that interfere with the analysis of the structures, and in lower collected fluorescence from the specimen. The aberrations may have different causes, and they can be classified by their sources: specimen-induced aberrations, optics-induced aberrations, illumination aberrations, and misalignment aberrations. This thesis presents an investigation and study of image enhancement, approached in two different directions. Initially, we investigated the sources of the imperfections. We propose methods to eliminate or minimize aberrations introduced during image acquisition by optimizing the acquisition conditions. The impact on resolution of using a coverslip whose thickness is mismatched with the one the objective lens is designed for was shown, and a novel technique was introduced to determine the proper value on the correction collar of the lens. The amount of spherical aberration with regard to the numerical aperture of the objective lens was investigated, and it was shown that, depending on the purpose of the imaging task, different numerical apertures must be used.
The deformed beam cross section of the single-photon excitation source was corrected, and the resulting enhancement of resolution and image quality was shown. Furthermore, the dependency of the scattered light on the excitation wavelength was shown empirically. In the second part, we continued the study of the image enhancement process with deconvolution techniques. Although deconvolution algorithms are widely used to improve the quality of images, how well a deconvolution algorithm performs depends strongly on the point spread function (PSF) of the imaging system applied to the algorithm and on its level of accuracy. We investigated approaches for obtaining a more precise PSF. Novel methods to improve the pattern of the PSF and reduce the noise are proposed. Furthermore, multiple sources for extracting the PSFs of the imaging system are introduced, and the empirical deconvolution results obtained using each of these PSFs are compared. The results confirm that a greater improvement is attained by applying the in situ PSF during the deconvolution process.
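The abstract does not name the deconvolution algorithm used; as one common choice, a minimal 1-D Richardson-Lucy sketch (pure Python; a symmetric, normalised PSF is assumed) illustrates why the supplied PSF drives the quality of the result:

```python
def convolve(signal, psf):
    """'Same'-size linear convolution; the psf is assumed symmetric and
    centred, so convolution and correlation coincide."""
    n, m = len(signal), len(psf)
    half = m // 2
    out = []
    for i in range(n):
        acc = 0.0
        for j in range(m):
            k = i + j - half
            if 0 <= k < n:
                acc += signal[k] * psf[j]
        out.append(acc)
    return out

def richardson_lucy(observed, psf, iterations=50):
    """Iterative maximum-likelihood deblurring; a more accurate PSF
    yields a sharper, more faithful estimate."""
    estimate = [1.0] * len(observed)  # flat non-negative initial guess
    for _ in range(iterations):
        blurred = convolve(estimate, psf)
        ratio = [o / max(b, 1e-12) for o, b in zip(observed, blurred)]
        correction = convolve(ratio, psf)  # mirrored psf equals psf here
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate

# A point source blurred by a 3-tap PSF is progressively re-sharpened.
psf = [0.25, 0.5, 0.25]
observed = [0.0, 0.25, 0.5, 0.25, 0.0]
restored = richardson_lucy(observed, psf)
print(restored.index(max(restored)))  # 2: the peak returns to the source
```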

Relevance:

10.00%

Publisher:

Abstract:

The frequency dependence of the electron-spin fluctuation spectrum, P(Q), is calculated in the finite bandwidth model. We find that for Pd, which has a nearly full d-band, the magnitude, the range, and the peak frequency of P(Q) are greatly reduced from those in the standard spin fluctuation theory. The electron self-energy due to spin fluctuations is calculated within the finite bandwidth model. Vertex corrections are examined, and we find that Migdal's theorem is valid for spin fluctuations in the nearly full band. The conductance of a normal metal-insulator-normal metal tunnel junction is examined when spin fluctuations are present in one electrode. We find that for the nearly full band, the momentum independent self-energy due to spin fluctuations enters the expression for the tunneling conductance with approximately the same weight as the self-energy due to phonons. The effect of spin fluctuations on the tunneling conductance is slight within the finite bandwidth model for Pd. The effect of spin fluctuations on the tunneling conductance of a metal with a less full d-band than Pd may be more pronounced. However, in this case the tunneling conductance is not simply proportional to the self-energy.

Relevance:

10.00%

Publisher:

Abstract:

Root and root finding are concepts familiar to most branches of mathematics. In graph theory, H is a square root of G, and G is the square of H, if two vertices x, y have an edge in G if and only if x, y are at distance at most two in H. The graph square is a basic operation, with a number of results about its properties in the literature. We study the characterization and recognition problems for graph powers. There are algorithmic and computational approaches to the decision problem of whether a given graph is a certain power of some graph. There are polynomial-time algorithms to solve this problem for squares of graphs with girth at least six, while NP-completeness has been proven for squares of graphs with girth at most four. The girth-parameterized problem of root finding had been open in the case of squares of graphs with girth five. We settle the conjecture that recognition of squares of graphs with girth 5 is NP-complete. This result provides the complete dichotomy theorem for the square root finding problem.
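The square operation defined above is straightforward to compute (recognising a square *root* is the hard direction). A small sketch, with our own helper names and the adjacency given as a dict of neighbour sets:

```python
from itertools import combinations

def graph_square(adj):
    """Square of H: join distinct vertices x, y whenever their distance
    in H is at most two (an edge or a common neighbour)."""
    square = {v: set() for v in adj}
    for x, y in combinations(adj, 2):
        if y in adj[x] or adj[x] & adj[y]:
            square[x].add(y)
            square[y].add(x)
    return square

# Path a-b-c-d: its square adds a-c and b-d, but not a-d (distance 3).
path = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c'}}
print(sorted(graph_square(path)['a']))  # ['b', 'c']
```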

Relevance:

10.00%

Publisher:

Abstract:

According to the List Colouring Conjecture, if G is a multigraph then χ'(G) = χ'l(G). In this thesis, we discuss a relaxed version of this conjecture: that every simple graph G is edge-(∆+1)-choosable, as by Vizing's Theorem ∆(G) ≤ χ'(G) ≤ ∆(G) + 1. We prove that if G is a planar graph without 7-cycles with ∆(G) ≠ 5, 6, or without adjacent 4-cycles with ∆(G) ≠ 5, or with no 3-cycles adjacent to 5-cycles, then G is edge-(∆+1)-choosable.
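Vizing's bound quoted above can be verified on small graphs by brute force; this sketch (not from the thesis, and exponential in the number of edges) computes χ'(G) directly:

```python
from itertools import product

def chromatic_index(edges):
    """Smallest k admitting a proper edge colouring: edges sharing an
    endpoint must receive different colours. Brute force over colourings."""
    edges = list(edges)
    for k in range(1, len(edges) + 1):
        for colouring in product(range(k), repeat=len(edges)):
            if all(colouring[i] != colouring[j]
                   for i in range(len(edges))
                   for j in range(i + 1, len(edges))
                   if set(edges[i]) & set(edges[j])):
                return k
    return 0

# Vizing: Δ ≤ χ' ≤ Δ+1. The triangle (Δ = 2) needs Δ+1 = 3 colours,
# while the 4-cycle (also Δ = 2) needs only Δ = 2.
print(chromatic_index([(0, 1), (1, 2), (0, 2)]))          # 3
print(chromatic_index([(0, 1), (1, 2), (2, 3), (3, 0)]))  # 2
```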

Relevance:

10.00%

Publisher:

Abstract:

Heyting categories, a variant of Dedekind categories, and Arrow categories provide a convenient framework for expressing and reasoning about fuzzy relations and about programs based on those methods. In this thesis we present an implementation of Heyting and Arrow categories suitable for reasoning and program execution using Coq, an interactive theorem prover based on Higher-Order Logic (HOL) with dependent types. This implementation can be used to specify and develop correct software based on L-fuzzy relations, such as fuzzy controllers. We give an overview of lattices, L-fuzzy relations, category theory and dependent type theory before describing our implementation. In addition, we provide examples of program executions based on our framework.

Relevance:

10.00%

Publisher:

Abstract:

Let f(x) be a complex rational function. In this work, we study conditions under which f(x) cannot be written as the composition of two rational functions which are not units under the operation of function composition; in this case, we say that f(x) is prime. We give sufficient conditions for complex rational functions to be prime in terms of their degrees and their critical values, and we derive some conditions for the case of complex polynomials. We also consider the divisibility of integral polynomials, and we present a generalization of a theorem of Nieto. We show that if f(x) and g(x) are integral polynomials such that the content of g divides the content of f and g(n) divides f(n) for an integer n whose absolute value is larger than a certain bound, then g(x) divides f(x) in Z[x]. In addition, given an integral polynomial f(x), we provide a method to determine whether f is irreducible over Z and, if not, to find one of its divisors in Z[x].
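As a hedged illustration of the divisibility notions above (the helpers are our own, not the paper's method): testing g | f in Z[x] amounts to exact long division with an integer-coefficient quotient, and the content is just the gcd of the coefficients:

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def content(poly):
    """Content of an integral polynomial: gcd of its coefficients
    (coefficients listed from the constant term upward)."""
    return reduce(gcd, (abs(c) for c in poly))

def divides_in_Zx(g, f):
    """True when g(x) divides f(x) in Z[x]: long division must leave a
    zero remainder and a quotient with integer coefficients."""
    f = [Fraction(c) for c in f]
    g = [Fraction(c) for c in g]
    quotient = []
    while len(f) >= len(g):
        q = f[-1] / g[-1]          # leading-term ratio
        quotient.append(q)
        shift = len(f) - len(g)
        for i, c in enumerate(g):  # subtract q * x^shift * g(x)
            f[shift + i] -= q * c
        f.pop()                    # leading term is now zero
    return all(c == 0 for c in f) and all(q.denominator == 1 for q in quotient)

# (x - 1) divides x^2 - 1, but (x + 1) does not divide x^2 + 1; in the
# first pair the content condition also holds: content(x - 1) = 1.
print(divides_in_Zx([-1, 1], [-1, 0, 1]))  # True
print(divides_in_Zx([1, 1], [1, 0, 1]))    # False
print(content([0, 2, 2]))                  # 2
```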