927 results for Equivalence Proof
Abstract:
This is a continuation of the earlier work (Publ. Res. Inst. Math. Sci. 45 (2009) 745-785) on characterizing unitary stationary independent increment Gaussian processes. The earlier assumption of uniform continuity is replaced by weak continuity and, with technical assumptions on the domain of the generator, unitary equivalence of the process to the solution of an appropriate Hudson-Parthasarathy equation is proved.
Abstract:
This dissertation is a broad study of factors affecting perceptions of CSR issues in multiple stakeholder realms, the main purpose being to determine the effects of individuals' values on their perceptions regarding CSR. It examines perceptions of CSR both at the emic level (observing individuals and stakeholders) and the etic level (conducting cross-cultural comparison) through a descriptive-empirical research strategy. The dissertation is based on quantitative interview data among Chinese, Finnish and US stakeholder groups of industry companies (with an emphasis on the forest industries) and consists of four published articles and two submitted manuscripts. Theoretically, this dissertation provides a valuable and unique philosophical and intellectual perspective on the contemporary study of CSR: 'The Harmony Approach to CSR'. Empirically, this dissertation conducts a values assessment and a CSR evaluation of a wide variety of business activities covering CSR reporting, business ethics, and three dimensions of CSR performance. From the multi-stakeholder perspective, this dissertation uses survey methods to examine perceptions and stakeholder salience in the context of CSR by describing and comparing differences between demographic factors as well as the hypothetical drivers behind perceptions. The results of the study suggest that the CSR objective of a corporation's top management should be to manage the divergent and conflicting interests of multiple stakeholders, taking others than key stakeholders into account as well. The importance of values as a driver of ethical behaviour and decision-making has been generally recognized. This dissertation provides more empirical support for this theory by highlighting the effects of values on CSR perceptions. It suggests that since the way to encourage responsible behaviour and develop CSR is to develop individuals' values and cultivate their virtues, it is time to invoke the critical role of moral (ethics) education.
The specific studies of China and comparison between Finland and the US contribute to a common understanding of the emerging CSR issues, problems and opportunities for the future of sustainability. The similarities among these countries can enhance international cooperation, while the differences will open up opportunities and diversified solutions for CSR in local conditions.
Abstract:
Single molecule force clamp experiments are widely used to investigate how enzymes, molecular motors, and other molecular mechanisms work. We developed a dual-trap optical tweezers instrument with real-time (200 kHz update rate) force clamp control that can exert 0–100 pN forces on trapped beads. A model for force clamp experiments in the dumbbell geometry is presented. We observe good agreement between predicted and observed power spectra of bead position and force fluctuations. The model can be used to predict and optimize the dynamics of real-time force clamp optical tweezers instruments. The results from a proof-of-principle experiment, in which lambda exonuclease converts a double-stranded DNA tether, held at constant tension, into its single-stranded form, show that the developed instrument is suitable for experiments in single molecule biology.
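For orientation, the power spectrum of a single optically trapped bead follows the textbook Lorentzian model, which is simpler than the dumbbell model treated in this abstract. A minimal sketch, with hypothetical but typical parameter values for a micron-sized bead in water:

```python
import numpy as np

# Hypothetical but typical parameters for a ~1 um bead in water at room temperature
kB_T = 4.11e-21          # thermal energy, J
gamma = 9.4e-9           # Stokes drag coefficient, kg/s
kappa = 1e-4             # trap stiffness, N/m (0.1 pN/nm)

f_c = kappa / (2 * np.pi * gamma)   # corner frequency, Hz (~1.7 kHz here)

def lorentzian_psd(f):
    """One-sided power spectral density of the bead position, m^2/Hz."""
    return kB_T / (np.pi**2 * gamma * (f_c**2 + f**2))

# Sanity check via equipartition: integrating the one-sided PSD over
# frequency must recover the position variance <x^2> = kB_T / kappa.
f = np.linspace(0.1, 100 * f_c, 2_000_000)
variance = np.sum(lorentzian_psd(f)) * (f[1] - f[0])
```

Fitting this Lorentzian to measured spectra is a standard way to calibrate trap stiffness; the dumbbell geometry adds a second trap and an elastic tether, coupling the spectra of the two beads.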
Abstract:
The p53 family consists of three transcription factors: p53, p73 and p63. The family members have similar but also individual functions connected to cell cycle regulation, development and tumorigenesis. p53 and p73 act mainly as tumor suppressors. During DNA damage caused by anticancer drugs or irradiation, p53 and p73 levels are upregulated in cancer cells, leading to apoptosis and cell cycle arrest. p53 is mutated in almost 50 per cent of cancers, rendering the cancer cells unable to undergo cell death. In contrast, p73 is rarely mutated in cancer cells and could therefore be a more viable target for anticancer therapy. The network surrounding the regulation of p73 is extensive and has several potential targets for cancer therapy. One of the most studied is Itch ligase, the negative regulator of p73 levels. Gene therapy directed towards knockdown of Itch ligase is a potential approach but in need of more in vivo proof. p73 has two classes of isoforms: transactivating TA-forms and dominant-negative ΔN-forms. The specific regulation of these isoforms could also offer a possible route to more effective cancer treatment. The literature part includes information on the structures, isoforms, functions and possible therapeutic targets of p73. The main therapeutic approaches to date are also introduced. The experimental part is based on transfection and cytotoxicity studies done e.g. in pancreatic cancer cells (Mia PaCa-2, PANC1, BxPc-3 and HPAC). The aim of the experimental work was to optimize the conditions for effective transfection with DAB16 dendrimer nanoparticles and to measure the cytotoxicity of plain dendrimers and DAB16-pDNA complexes. The protein levels of p73 and Itch ligase were also measured by Western blotting. The work was done as part of a bigger project aiming to downregulate Itch ligase (the negative regulator of p73) by siRNA/shRNA.
Transfection results were promising, showing good transfection efficacy with DAB16 N/P30 in pancreatic cancer cells (except in BxPc-3). Pancreatic cancer cells showed recovery within 3 days after they were exposed to plain dendrimer solution or to DAB16-pDNA. Measurement of protein levels by Western blotting was not optimal, and proposals for improvement regarding e.g. the gels and the extracted protein amounts have been made.
Abstract:
We propose a method to compute a probably approximately correct (PAC) normalized histogram of observations with a refresh rate of Θ(1) time units per histogram sample on a random geometric graph with noise-free links. The delay in computation is Θ(√n) time units. We further extend our approach to a network with noisy links. While the refresh rate remains Θ(1) time units per sample, the delay increases to Θ(√(n log n)). The number of transmissions in both cases is Θ(n) per histogram sample. The achieved Θ(1) refresh rate for PAC histogram computation is a significant improvement over the refresh rate of Θ(1/log n) for histogram computation in noiseless networks. We achieve this by operating in the supercritical thermodynamic regime, where large pathways for communication build up but the network may have more than one component. The largest component, however, will contain an arbitrarily large fraction of the nodes, in order to enable approximate computation of the histogram to the desired level of accuracy. Operation in the supercritical thermodynamic regime also reduces energy consumption. A key step in the proof of our achievability result is the construction of a connected component having bounded degree and any desired fraction of nodes. This construction may also prove useful in other communication settings on the random geometric graph.
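Setting the distributed machinery aside, the "PAC" part of a PAC normalized histogram can be illustrated with a centralized sketch: Hoeffding's inequality plus a union bound over the k bins gives a sample size after which every empirical bin frequency is within ε of the truth with probability at least 1 − δ. The bound and all names below are standard textbook material, not the paper's construction:

```python
import math
import random

def pac_sample_size(eps, delta, k):
    """Hoeffding bound + union bound over k bins: with this many samples,
    every empirical bin frequency is within eps of the true frequency
    with probability at least 1 - delta."""
    return math.ceil(math.log(2 * k / delta) / (2 * eps ** 2))

def normalized_histogram(samples, k):
    """Empirical frequency of each of the bin values 0 .. k-1."""
    counts = [0] * k
    for s in samples:
        counts[s] += 1
    return [c / len(samples) for c in counts]

k, eps, delta = 4, 0.05, 0.01
n = pac_sample_size(eps, delta, k)        # 1337 samples for these settings
rng = random.Random(0)
true_p = [0.1, 0.2, 0.3, 0.4]
samples = rng.choices(range(k), weights=true_p, k=n)
hist = normalized_histogram(samples, k)   # each bin close to true_p w.h.p.
```

The distributed contribution of the paper lies elsewhere: obtaining such estimates over a random geometric graph with a Θ(1) refresh rate, not in this sample-complexity bound itself.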
Abstract:
The most prominent objective of the thesis is the development of what we call generalized descriptive set theory. There, we study the space of all functions from a fixed uncountable cardinal to itself, or to a set of size two. These correspond to generalized notions of the Baire space (functions from the natural numbers to themselves with the product topology) and the Cantor space (functions from the natural numbers to the set {0,1}), respectively. We generalize the notion of Borel sets in three different ways and study the corresponding Borel structures with the aim of generalizing classical theorems of descriptive set theory or providing counterexamples. In particular, we are interested in equivalence relations on these spaces and their Borel reducibility to each other. The last chapter shows, using game-theoretic techniques, that the order of Borel equivalence relations under Borel reducibility has very high complexity. The techniques on the set-theoretic side of the thesis described above include forcing, general topological notions such as meager sets, and combinatorial games of infinite length. By coding uncountable models into functions, we are able to apply the understanding gained in generalized descriptive set theory to the model theory of uncountable models. The links between the theorems of model theory (including Shelah's classification theory) and the theorems of pure set theory are provided using game-theoretic techniques, from Ehrenfeucht-Fraïssé games in model theory to cub-games in set theory. The bottom line of the research is that the descriptive set-theoretic complexity of the isomorphism relation of a first-order definable model class goes in sync with the stability-theoretic complexity of the corresponding first-order theory. The first chapter of the thesis has a slightly different focus and is purely concerned with a certain modification of the well-known Ehrenfeucht-Fraïssé games. 
There we (my supervisor Tapani Hyttinen and I) answer some natural questions about that game, mainly concerning determinacy and its relation to the standard EF-game.
Abstract:
This monograph describes the emergence of independent research on logic in Finland. The emphasis is placed on three well-known students of Eino Kaila: Georg Henrik von Wright (1916-2003), Erik Stenius (1911-1990), and Oiva Ketonen (1913-2000), and their research between the early 1930s and the early 1950s. The early academic work of these scholars laid the foundations for today's strong tradition in logic in Finland and also became internationally recognized. However, due attention has not been given to these works later, nor have they been comprehensively presented together. Each chapter of the book focuses on the life and work of one of Kaila's aforementioned students, with a fourth chapter discussing works on logic by authors who would later become known within other disciplines. Through an extensive use of correspondence and other archived material, some insight has been gained into the persons behind the academic personae. Unique and unpublished biographical material has been available for this task. The chapter on Oiva Ketonen focuses primarily on his work on what is today known as proof theory, especially on his proof-theoretical system with invertible rules that permits a terminating root-first proof search. The independence of the parallel postulate is proved as an example of the strength of root-first proof search. Ketonen was, to our knowledge, the only student of Gerhard Gentzen (the 'father' of proof theory). Correspondence and a hitherto unavailable autobiographic manuscript, in addition to an unpublished article on the relationship between logic and epistemology, are presented. The chapter on Erik Stenius discusses his work on paradoxes and set theory, more specifically on how a rigid theory of definitions is employed to avoid these paradoxes. A presentation by Paul Bernays on Stenius' attempt at a proof of the consistency of arithmetic is reconstructed based on Bernays' lecture notes. 
Stenius' correspondence with Paul Bernays, Evert Beth, and Georg Kreisel is discussed. The chapter on Georg Henrik von Wright presents his early work on probability and epistemology, along with his later work on modal logic that made him internationally famous. Correspondence from various archives (especially with Kaila and Charlie Dunbar Broad) further discusses his academic achievements and his experiences during the challenging circumstances of the 1940s.
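Ketonen's calculus is notable precisely because all of its rules are invertible: root-first proof search never needs to backtrack over rule choices, and it terminates. A minimal sketch of that idea for classical propositional logic (a toy G3-style calculus, written for illustration; it is not Ketonen's own system):

```python
def provable(gamma, delta):
    """Root-first proof search for a sequent gamma => delta in a toy
    G3-style classical propositional calculus.  Formulas are tuples:
    ('atom', name), ('not', A), ('and', A, B), ('or', A, B), ('imp', A, B).
    Every rule is invertible and each step removes one connective,
    so the naive search is complete and always terminates."""
    atoms_left = {f[1] for f in gamma if f[0] == 'atom'}
    atoms_right = {f[1] for f in delta if f[0] == 'atom'}
    if atoms_left & atoms_right:          # axiom: shared atom on both sides
        return True
    for i, f in enumerate(gamma):         # left rules
        rest = gamma[:i] + gamma[i + 1:]
        if f[0] == 'not':
            return provable(rest, delta + [f[1]])
        if f[0] == 'and':
            return provable(rest + [f[1], f[2]], delta)
        if f[0] == 'or':                  # branching rule: both premises
            return (provable(rest + [f[1]], delta)
                    and provable(rest + [f[2]], delta))
        if f[0] == 'imp':
            return (provable(rest, delta + [f[1]])
                    and provable(rest + [f[2]], delta))
    for i, f in enumerate(delta):         # right rules
        rest = delta[:i] + delta[i + 1:]
        if f[0] == 'not':
            return provable(gamma + [f[1]], rest)
        if f[0] == 'and':
            return (provable(gamma, rest + [f[1]])
                    and provable(gamma, rest + [f[2]]))
        if f[0] == 'or':
            return provable(gamma, rest + [f[1], f[2]])
        if f[0] == 'imp':
            return provable(gamma + [f[1]], rest + [f[2]])
    return False                          # only atoms left, no shared atom

p, q = ('atom', 'p'), ('atom', 'q')
excluded_middle = provable([], [('or', p, ('not', p))])
peirce = provable([], [('imp', ('imp', ('imp', p, q), p), p)])
```

Because every rule is invertible, the order of rule applications does not matter: the root sequent is provable exactly when every leaf of the search tree is an axiom.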
Abstract:
In this paper, a dual of a given linear fractional program is defined and the weak, direct and converse duality theorems are proved. Both the primal and the dual are linear fractional programs. This duality theory leads to necessary and sufficient conditions for the optimality of a given feasible solution. A numerical example is presented to illustrate the theory. The equivalence of the Charnes-Cooper dual and Dinkelbach's parametric dual of a linear fractional program is also established.
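The equivalence result mentioned last builds on the standard Charnes-Cooper transformation, which turns a linear fractional program into an ordinary linear program. As a sketch in standard textbook form (not necessarily this paper's notation):

```latex
\text{(LFP)}\qquad \max_{x}\;\frac{c^{\top}x+\alpha}{d^{\top}x+\beta}
\quad\text{s.t.}\quad Ax\le b,\; x\ge 0,
\qquad\text{assuming } d^{\top}x+\beta>0 \text{ on the feasible set.}
\]
Substituting $t = 1/(d^{\top}x+\beta)$ and $y = t\,x$ yields the equivalent linear program
\[
\text{(LP)}\qquad \max_{y,\,t}\; c^{\top}y+\alpha t
\quad\text{s.t.}\quad Ay - bt \le 0,\quad d^{\top}y+\beta t = 1,\quad y\ge 0,\; t\ge 0,
```

and the ordinary LP dual of this program is the natural object to compare with Dinkelbach's parametric dual.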
Abstract:
This work studies decision problems from the perspective of nondeterministic distributed algorithms. For a yes-instance there must exist a proof that can be verified with a distributed algorithm: all nodes must accept a valid proof, and at least one node must reject an invalid proof. We focus on locally checkable proofs that can be verified with a constant-time distributed algorithm. For example, it is easy to prove that a graph is bipartite: the locally checkable proof gives a 2-colouring of the graph, which only takes 1 bit per node. However, it is more difficult to prove that a graph is not bipartite—it turns out that any locally checkable proof requires Ω(log n) bits per node. In this work we classify graph problems according to their local proof complexity, i.e., how many bits per node are needed in a locally checkable proof. We establish tight or near-tight results for classical graph properties such as the chromatic number. We show that the proof complexities form a natural hierarchy of complexity classes: for many classical graph problems, the proof complexity is either 0, Θ(1), Θ(log n), or poly(n) bits per node. Among the most difficult graph properties are symmetric graphs, which require Ω(n²) bits per node, and non-3-colourable graphs, which require Ω(n²/log n) bits per node—any pure graph property admits a trivial proof of size O(n²).
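The bipartiteness example can be made concrete: the certificate is one colour bit per node, and every node runs the same constant-time check on its own neighbourhood. A minimal sketch (illustrative code, not taken from the paper):

```python
from itertools import product

def node_accepts(v, colour, adj):
    """Constant-time local check at node v: accept iff every neighbour
    carries the opposite colour bit."""
    return all(colour[u] != colour[v] for u in adj[v])

def verify(colour, adj):
    """A proof is valid iff every node accepts it."""
    return all(node_accepts(v, colour, adj) for v in adj)

# 4-cycle (bipartite): a proper 2-colouring is a 1-bit-per-node certificate
adj4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
ok = verify({0: 0, 1: 1, 2: 0, 3: 1}, adj4)

# 3-cycle (not bipartite): no assignment of colour bits makes all nodes
# accept, so at least one node rejects every attempted proof
adj3 = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
none_ok = not any(verify(dict(zip(adj3, bits)), adj3)
                  for bits in product([0, 1], repeat=3))
```

This also illustrates the asymmetry in the abstract: a 1-bit certificate suffices for "bipartite", while certifying "not bipartite" is where the Ω(log n) lower bound applies.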
Abstract:
Clustering is a process of partitioning a given set of patterns into meaningful groups. The clustering process can be viewed as consisting of three phases: (i) a feature selection phase, (ii) a classification phase, and (iii) a description generation phase. Conventional clustering algorithms implicitly use knowledge about the clustering environment to a large extent in the feature selection phase. This reduces the need for environmental knowledge in the remaining two phases, permitting the use of a simple numerical measure of similarity in the classification phase. Conceptual clustering algorithms proposed by Michalski and Stepp [IEEE Trans. PAMI, PAMI-5, 396–410 (1983)] and Stepp and Michalski [Artif. Intell., pp. 43–69 (1986)] make use of knowledge about the clustering environment in the form of a set of predefined concepts to compute the conceptual cohesiveness during the classification phase. Michalski and Stepp [IEEE Trans. PAMI, PAMI-5, 396–410 (1983)] have argued that the results obtained with conceptual clustering algorithms are superior to those of conventional methods of numerical classification. However, this claim was not supported by the experimental results obtained by Dale [IEEE Trans. PAMI, PAMI-7, 241–244 (1985)]. In this paper a theoretical framework, based on an intuitively appealing set of axioms, is developed to characterize the equivalence between conceptual clustering and conventional clustering. In other words, it is shown that any classification obtained using conceptual clustering can also be obtained using conventional clustering, and vice versa.
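As a minimal illustration of the "conventional" side of the equivalence (a toy example, not the axiomatic framework of the paper), here is a classifier driven purely by a numerical similarity measure: single-linkage grouping under a distance threshold, with no concept knowledge involved:

```python
def single_linkage_clusters(points, threshold):
    """Group points whose Euclidean distance falls below `threshold`,
    merging transitively (single linkage) -- a bare-bones 'conventional'
    classifier driven purely by a numerical similarity measure."""
    parent = list(range(len(points)))

    def find(i):                          # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if dist(points[i], points[j]) < threshold:
                parent[find(i)] = find(j)  # merge the two groups

    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Two well-separated pairs of points -> two clusters
clusters = single_linkage_clusters([(0, 0), (0, 1), (5, 5), (5, 6)], 2.0)
```

The paper's claim, in these terms, is that any partition a conceptual-cohesiveness criterion can produce is also reachable by some numerical-similarity criterion of this general kind, and conversely.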
Abstract:
Various Tb theorems play a key role in modern harmonic analysis. They provide characterizations for the boundedness of Calderón-Zygmund type singular integral operators. The general philosophy is that to conclude the boundedness of an operator T on some function space, one needs only to test it on some suitable function b. The main object of this dissertation is to prove very general Tb theorems. The dissertation consists of four research articles and an introductory part. The framework is general with respect to the domain (a metric space), the measure (an upper doubling measure) and the range (a UMD Banach space). Moreover, the testing conditions used are weak. In the first article a (global) Tb theorem on non-homogeneous metric spaces is proved. One of the main technical components is the construction of a randomization procedure for the metric dyadic cubes. The difficulty lies in the fact that metric spaces do not, in general, have a translation group. Also, the measures considered are more general than in the existing literature. This generality is genuinely important for some applications, including the result of Volberg and Wick concerning the characterization of measures for which the analytic Besov-Sobolev space embeds continuously into the space of square integrable functions. In the second article a vector-valued extension of the main result of the first article is considered. This theorem is a new contribution to the vector-valued literature, since previously such general domains and measures were not allowed. The third article deals with local Tb theorems both in the homogeneous and non-homogeneous situations. A modified version of the general non-homogeneous proof technique of Nazarov, Treil and Volberg is extended to cover the case of upper doubling measures. This technique is also used in the homogeneous setting to prove local Tb theorems with weak testing conditions introduced by Auscher, Hofmann, Muscalu, Tao and Thiele.
This gives a completely new and direct proof of such results utilizing the full force of non-homogeneous analysis. The final article concerns sharp weighted theory for maximal truncations of Calderón-Zygmund operators. This includes a reduction to certain Sawyer-type testing conditions, which are in the spirit of Tb theorems and thus of the dissertation. The article extends the sharp bounds previously known only for untruncated operators, and also proves sharp weak type results, which are new even for untruncated operators. New techniques are introduced to overcome the difficulties caused by the non-linearity of maximal truncations.
Abstract:
This thesis is concerned with the area of vector-valued Harmonic Analysis, where the central theme is to determine how results from classical Harmonic Analysis generalize to functions with values in an infinite dimensional Banach space. The work consists of three articles and an introduction. The first article studies the Rademacher maximal function that was originally defined by T. Hytönen, A. McIntosh and P. Portal in 2008 in order to prove a vector-valued version of Carleson's embedding theorem. The boundedness of the corresponding maximal operator on Lebesgue-Bochner spaces defines the RMF-property of the range space. It is shown that the RMF-property is equivalent to a weak type inequality which, for instance, does not depend on the integrability exponent, hence providing more flexibility for the RMF-property. The second article, which is written in collaboration with T. Hytönen, studies a vector-valued Carleson's embedding theorem with respect to filtrations. An earlier proof of the dyadic version assumed that the range space satisfies a certain geometric type condition, which this article shows to be also necessary. The third article deals with vector-valued generalizations of tent spaces, originally defined by R. R. Coifman, Y. Meyer and E. M. Stein in the 1980s, and concerns especially those related to square functions. A natural assumption on the range space is then the UMD-property. The main result is an atomic decomposition for tent spaces with integrability exponent one. In order to suit the stochastic integrals appearing in the vector-valued formulation, the proof is based on a geometric lemma for cones and differs essentially from the classical proof. Vector-valued tent spaces have also found applications in functional calculi for bisectorial operators. In the introduction these three themes come together when studying paraproduct operators for vector-valued functions. 
The Rademacher maximal function and Carleson's embedding theorem were already applied by Hytönen, McIntosh and Portal to prove boundedness of the dyadic paraproduct operator on Lebesgue-Bochner spaces, assuming that the range space satisfies both the UMD- and RMF-properties. Whether UMD implies RMF is thus an interesting question. Tent spaces, on the other hand, provide a method to study continuous-time paraproduct operators, although the RMF-property is not yet understood in the framework of tent spaces.
Abstract:
The stereochemistry of the Diels-Alder cycloaddition of several dienes to the facially perturbed dienophiles 2,3-norbornenobenzoquinone (3) and 2,3-norbornanobenzoquinone (4) has been examined. Unambiguous structural proof for the adducts formed has been obtained from complementary ¹H and ¹³C NMR spectral data and in two cases through X-ray crystal structure determination. While 1,3-cyclopentadiene, 1,3-cyclohexadiene, and cyclooctatetraene exhibit a preference for addition to 3 from the bottom side, the stereochemical outcome is reversed in their response to 4. 1,3-Diphenylisobenzofuran and 1,2,3,4-tetrachloro-5,5-dimethoxycyclopentadiene engaged 3 from the top side with marked selectivity, which is further enhanced in their reaction with 4. The observed stereoselectivities seem to be essentially controlled by steric interactions at the transition state. Model calculations provide support for this interpretation.
Abstract:
The set of attainable laws of the joint state-control process of a controlled diffusion is analyzed from a convex analytic viewpoint. Various equivalence relations depending on one-dimensional marginals thereof are defined on this set and the corresponding equivalence classes are studied.
Abstract:
A generalization of Nash-Williams' lemma is proved for the structure of m-uniform null (m − k)-designs. It is then applied to various graph reconstruction problems. A short combinatorial proof of the edge reconstructibility of digraphs having regular underlying undirected graphs (e.g., tournaments) is given. A type of Nash-Williams' lemma is conjectured for the vertex reconstruction problem.