24 results for Impossibility theorem

in Helda - Digital Repository of the University of Helsinki


Relevance: 10.00%

Abstract:

In this study I offer a diachronic solution for a number of difficult inflectional endings in Old Church Slavic nominal declensions. In this context I address perhaps the most disputed and important question of Slavic nominal inflectional morphology: whether there was in Proto-Slavic an Auslautgesetz (ALG), a law of final syllables, that narrowed the Proto-Indo-European vowel */o/ to */u/ in closed word-final syllables. In addition, the work contains an exhaustive morphological classification of the nouns and adjectives that occur in canonical Old Church Slavic. I argue that Proto-Indo-European */o/ became Proto-Slavic */u/ before word-final */s/ and */N/. This conclusion is based on the impossibility of finding credible analogical (as opposed to phonological) explanations for the forms supporting the ALG hypothesis, and on the survival of the neuter gender in Slavic. It is not likely that the */o/-stem nominative singular ending */-u/ was borrowed from the accusative singular, because the latter would have been the only paradigmatic form with the stem vowel */-u-/. It is equally unlikely that the ending */-u/ was borrowed from the */u/-stems, because the latter constituted a moribund class. The usually stated motivation for such an analogical borrowing, i.e. a need to prevent the merger of */o/-stem masculines with neuters of the same class, is not tenable. Extra-Slavic as well as intra-Slavic evidence suggests that phonologically triggered mergers between two semantically opaque genders tend not to be prevented; rather, such mergers lead to the loss of the gender opposition in question. On the other hand, if */-os/ had not become */-us/, most nouns and, most importantly, all adjectives and pronouns would have lost the formal distinction between masculines and neuters. This would have necessarily resulted in the loss of the neuter gender.
A new explanation is given for the most apparent piece of evidence against the ALG hypothesis, the nominative-accusative singular of the */es/-stem neuters, e.g. nebo 'sky'. I argue that it arose in late Proto-Slavic dialects, replacing regular nebe, under the influence of the */o/- and */yo/-stems where a correlation had emerged between a hard root-final consonant and the termination -o, on the one hand, and a soft root-final consonant and the termination -e, on the other.

Relevance: 10.00%

Abstract:

One of the most fundamental questions in the philosophy of mathematics concerns the relation between truth and formal proof. The position according to which the two concepts are the same is called deflationism, and the opposing viewpoint substantialism. In an important result of mathematical logic, Kurt Gödel proved in his first incompleteness theorem that all consistent formal systems containing arithmetic include sentences that can neither be proved nor disproved within that system. However, such undecidable Gödel sentences can be established to be true once we expand the formal system with Alfred Tarski's semantical theory of truth, as shown by Stewart Shapiro and Jeffrey Ketland in their semantical arguments for the substantiality of truth. According to them, in Gödel sentences we have an explicit case of true but unprovable sentences, and hence deflationism is refuted. Against that, Neil Tennant has shown that instead of Tarskian truth we can expand the formal system with a soundness principle, according to which all provable sentences are assertable, and the assertability of Gödel sentences follows. This way, the relevant question is not whether we can establish the truth of Gödel sentences, but whether Tarskian truth is a more plausible expansion than a soundness principle. In this work I will argue that this problem is best approached once we think of mathematics as the full human phenomenon, and not just as consisting of formal systems. When pre-formal mathematical thinking is included in our account, we see that Tarskian truth is in fact not an expansion at all. I claim that what proof is to formal mathematics, truth is to pre-formal thinking, and the Tarskian account of semantical truth mirrors this relation accurately. However, the introduction of pre-formal mathematics is vulnerable to the deflationist counterargument that while existing in practice, pre-formal thinking could still be philosophically superfluous if it does not refer to anything objective.
Against this, I argue that all truly deflationist philosophical theories lead to the arbitrariness of mathematics. In all other philosophical accounts of mathematics there is room for a referent for pre-formal mathematics, and the expansion with Tarskian truth can be made naturally. Hence, if we reject the arbitrariness of mathematics, I argue in this work, we must accept the substantiality of truth. Related subjects such as neo-Fregeanism will also be covered, and shown not to change the need for Tarskian truth. The only remaining route for the deflationist is to change the underlying logic so that our formal languages can include their own truth predicates, which Tarski showed to be impossible for classical first-order languages. With such logics we would have no need to expand the formal systems, and the above argument would fail. Of the alternative approaches, in this work I focus mostly on the Independence Friendly (IF) logic of Jaakko Hintikka and Gabriel Sandu. Hintikka has claimed that an IF language can include its own adequate truth predicate. I argue that while this is indeed the case, we cannot recognize the truth predicate as such within the same IF language, and the need for Tarskian truth remains. In addition to IF logic, second-order logic and Saul Kripke's approach using Kleene's logic will also be shown to fail in a similar fashion.
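A standard textbook-style sketch of the argument structure at issue (the particular formulations below are illustrative conventions, not quoted from the thesis; S is a consistent formal system containing arithmetic and G its Gödel sentence):

```latex
\begin{align*}
&\text{Diagonalization:} && S \vdash G \leftrightarrow \neg\mathrm{Prov}_S(\ulcorner G \urcorner)\\
&\text{Tarskian expansion:} && S + \mathrm{Tr} \vdash \forall\varphi\,\bigl(\mathrm{Prov}_S(\ulcorner\varphi\urcorner) \to \mathrm{Tr}(\ulcorner\varphi\urcorner)\bigr)
  \;\Longrightarrow\; S + \mathrm{Tr} \vdash \mathrm{Tr}(\ulcorner G \urcorner)\\
&\text{Soundness principle:} && S + \bigl\{\mathrm{Prov}_S(\ulcorner\varphi\urcorner) \to \varphi\bigr\} \vdash G
\end{align*}
```

On the Shapiro-Ketland line, the truth-theoretic expansion proves a reflection principle and hence the truth of G; on Tennant's line, a soundness principle added directly yields the assertability of G without any truth predicate.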

Relevance: 10.00%

Abstract:

In her thesis, Kaisa Kaakinen analyzes how the German emigrant author W. G. Sebald (1944-2001) uses architecture and photography in his last novel "Austerlitz" to represent time, history and remembering. Sebald describes time in spatial terms: it is like a building, the rooms and chambers of which are connected to each other. The poetics of spatial time manifests itself on multiple levels of the text. Kaakinen traces it in architectural representations, photographic images and intertextuality, as well as in the form of the text, using Joseph Frank's concept of spatial form. Architectural and photographic representations serve as meeting points for different aspects and angles of the novel and illustrate the idea of a layered present that has multiple connections to the past. The novel tells the story of Jacques Austerlitz, who as a small child was sent from Prague to Britain in one of the so-called Kindertransports that saved children from Central Europe occupied by the National Socialists. Only gradually does he remember his Jewish parents, who most likely perished in Nazi concentration camps. The novel brings the problematic of writing about another person's past to the fore through the fact that Austerlitz's story is told by an anonymous narrator, Austerlitz's interlocutor, who listens to and writes down Austerlitz's story. Kaakinen devotes the final part of her thesis to studying the demands of representing a historical trauma, drawing on authors such as Dominick LaCapra and Michael Rothberg. Through the analysis of architectural and photographic representations in the novel, she demonstrates how Austerlitz highlights the sense of singularity and inaccessibility of an individual's memories, while also stressing the necessity - and therefore a certain kind of possibility - of passing these memories on to another person. The coexistence of traumatic narrowness and of the infinity of history is reflected in ambivalent buildings.
Some buildings in the novel resemble reversible figures: they can be perceived simultaneously as ruins and as construction sites. Buildings are also shown to be able both to cover and to preserve memories - an idea that is also repeated in the use of photography, which tends both to replace memories and to cause an experience of the presence of an absent thing. Commenting on and criticizing some recent studies on Sebald, the author develops a reading which stresses the ambivalence inherent in Sebald's view of history and historiography. Austerlitz shows the need to recognize the inevitable absence of the past as well as the distance from the experiences of others. Equally important, however, is the refusal to give up narrating the past: Sebald's novel stresses the necessity of preserving the sites of the past, which carry silent traces of vanished life. The poetics of Austerlitz reflects the paradox of the simultaneous impossibility and indispensability of writing history.

Relevance: 10.00%

Abstract:

This dissertation investigates changes in bank work, and the workers' experience of impossibility attached to these changes at the local level, from the viewpoint of work-related well-being and collective learning. A special challenge in my work is to conceptualize the experience of impossibility as related to change, and as a starting point and tool for development work. The subject of the dissertation, solving the impossible as a collective learning process, came up as a central theme in an earlier project, Work Units between the Old and the New (1997-1999). Its aim was to investigate how change is constructed as a long-term process, starting from the planning of the change until its final realization in everyday banking work. I studied changes taking place in the former Postipankki (Postal Bank), later called Leonia. The three-year study involved the Branch Office of Martinlaakso, and was conducted from the perspective of well-being in a change process. The sense of impossibility involved in changes turned out to be one of the most crucial factors impairing the sense of well-being. The work community that was the target of my study did not have the tools available to construct the change locally, or to deal with the change-related impossibility by solving it through a mutual process among themselves. During the last year of the project, I carried out a developmental intervention in the Branch Office, as a collaboration between the researchers and the workers. The purpose of the intervention was to resolve such perceived change-related impossibility as was experienced repeatedly and considered by the work community as relevant to work-related well-being. The documentation of the intervention - audio records from development sessions, written assignments by workers and assessment or evaluation interviews - constitutes the essential data for my dissertation.
The earlier data, collected and analysed during the first two years, provides a historical perspective on the process, all the way from the construction of the impossibility towards resolving and transcending it. The aim of my dissertation is to understand the progress of the developmental intervention as a shared, possibly expansive learning process within a work community and thus to provide tools for perceiving and constructing local change. I chose the change-related impossibility as a starting point for development work in the work community and as a target of conceptualization. This, I feel, is the most important contribution of my dissertation. While the intervention was in progress, the concept of impossibility started emerging as a stimulating tool for development work. An understanding of such a process can be applied to development work outside banking as well. According to my results, it is pivotal that a concept stimulating development is strongly connected with everyday experiences of, and speech about, changes in work activity, as well as with the theoretical framework of work development. During this process, development work on a local level became of utmost interest as a case study for managing change. Theoretically, this was conceptualized as so-called second-order work, and this concept accompanies us all the way through the research process. Learning second-order work and constructing tools based on this work have proved crucial for promoting well-being in changing circumstances in a local work unit. The lack of second-order work has led to impaired well-being and an inability to transcend the change-related sense of impossibility in the work community. Solving the impossible individually or situationally did not orient the workers towards solving problems of impossibility together as a work community.
Because the experience of the impossibility and coming to terms with transcending it are the starting point and the target of conceptualization in this dissertation, the research provides a fresh viewpoint on the theoretical framework of change and developmental work. My dissertation can facilitate construction of local changes necessitated by the recent financial crisis, and thus promote fluency and well-being in work units. It can also support change-related well-being in other areas of working life.

Relevance: 10.00%

Abstract:

Efforts to combine quantum theory with general relativity have been extensive and marked by several successes. One field where progress has lately been made is the study of noncommutative quantum field theories, which arise as a low-energy limit in certain string theories. The idea of noncommutativity comes naturally when combining these two extremes and has profound implications for results widely accepted in traditional, commutative, theories. In this work I review the status of one of the most important connections in physics, the spin-statistics relation. The relation is deeply ingrained in our reality in that it gives us the structure of the periodic table and is of crucial importance for the stability of all matter. The dramatic effects of the noncommutativity of space-time coordinates, mainly the loss of Lorentz invariance, call the spin-statistics relation into question. The spin-statistics theorem is first presented in its traditional setting, with a clarifying proof starting from minimal requirements. Next the notion of noncommutativity is introduced and its implications studied. The discussion is essentially based on twisted Poincaré symmetry, the space-time symmetry of noncommutative quantum field theory. The controversial issue of microcausality in noncommutative quantum field theory is settled by showing for the first time that the light wedge microcausality condition is compatible with the twisted Poincaré symmetry. The spin-statistics relation is considered both from the point of view of braided statistics and in the traditional Lagrangian formulation of Pauli, with the conclusion that Pauli's age-old theorem withstands even this test, dramatic as it is for the whole structure of space-time.
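For orientation, the canonical form of space-time noncommutativity standard in this literature (the constant antisymmetric matrix θ^{μν} and the Moyal product are the usual conventions, not details taken from the thesis) is:

```latex
% Canonical space-time noncommutativity with a constant antisymmetric matrix:
[\hat{x}^{\mu}, \hat{x}^{\nu}] = i\,\theta^{\mu\nu},
\qquad \theta^{\mu\nu} = -\theta^{\nu\mu}.
% Field theories on this space replace pointwise products by the Moyal star product:
(f \star g)(x) = f(x)\,
  \exp\!\Bigl(\tfrac{i}{2}\,\overleftarrow{\partial}_{\mu}\,\theta^{\mu\nu}\,\overrightarrow{\partial}_{\nu}\Bigr)\, g(x).
```

The twisted Poincaré symmetry mentioned above is the statement that this star product is covariant under a Drinfeld-twisted action of the Poincaré algebra, rather than under the ordinary one.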

Relevance: 10.00%

Abstract:

Whether a statistician wants to complement a probability model for observed data with a prior distribution and carry out fully probabilistic inference, or to base the inference only on the likelihood function, may be a fundamental question in theory, but in practice it may well be of less importance if the likelihood contains much more information than the prior. Maximum likelihood inference can be justified as a Gaussian approximation at the posterior mode, using flat priors. However, in situations where the parametric assumptions in standard statistical models would be too rigid, more flexible model formulation, combined with fully probabilistic inference, can be achieved using hierarchical Bayesian parametrization. This work includes five articles, all of which apply probability modeling to various problems involving incomplete observation. Three of the papers apply maximum likelihood estimation and two of them hierarchical Bayesian modeling. Because maximum likelihood may be presented as a special case of Bayesian inference, but not the other way round, in the introductory part of this work we present a framework for probability-based inference using only Bayesian concepts. We also re-derive some results presented in the original articles using the toolbox equipped herein, to show that they are also justifiable under this more general framework. Here the assumption of exchangeability and de Finetti's representation theorem are applied repeatedly to justify the use of standard parametric probability models with conditionally independent likelihood contributions. It is argued that this same reasoning can also be applied under sampling from a finite population. The main emphasis here is on probability-based inference under incomplete observation due to study design. This is illustrated using a generic two-phase cohort sampling design as an example.
The alternative approaches presented for analysis of such a design are full likelihood, which utilizes all observed information, and conditional likelihood, which is restricted to a completely observed set, conditioning on the rule that generated that set. Conditional likelihood inference is also applied for a joint analysis of prevalence and incidence data, a situation subject to both left censoring and left truncation. Other topics covered are model uncertainty and causal inference using posterior predictive distributions. We formulate a non-parametric monotonic regression model for one or more covariates and a Bayesian estimation procedure, and apply the model in the context of optimal sequential treatment regimes, demonstrating that inference based on posterior predictive distributions is feasible also in this case.
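As a minimal illustration of the point that maximum likelihood can be read as posterior-mode inference under a flat prior (an illustrative sketch, not an analysis from the thesis): for a binomial observation of k successes in n trials, a flat Beta(1, 1) prior gives a Beta(k + 1, n - k + 1) posterior whose mode coincides with the maximum likelihood estimate k / n.

```python
def mle_binomial(k: int, n: int) -> float:
    """Maximum likelihood estimate of a binomial success probability."""
    return k / n

def posterior_mode_flat_prior(k: int, n: int) -> float:
    """Mode of the Beta(k + 1, n - k + 1) posterior under a flat Beta(1, 1) prior."""
    a, b = k + 1, n - k + 1
    # Mode of a Beta(a, b) distribution for a, b > 1: (a - 1) / (a + b - 2)
    return (a - 1) / (a + b - 2)

k, n = 7, 20
print(mle_binomial(k, n))               # 0.35
print(posterior_mode_flat_prior(k, n))  # 0.35
```

With more informative priors the two estimates separate, which is the sense in which the choice matters little only when the likelihood dominates the prior.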

Relevance: 10.00%

Abstract:

It is well known that an integrable (in the sense of Arnold-Jost) Hamiltonian system gives rise to quasi-periodic motion with trajectories running on invariant tori. These tori foliate the whole phase space. If we perturb an integrable system, the Kolmogorov-Arnold-Moser (KAM) theorem states that, provided a non-degeneracy condition holds and the perturbation is sufficiently small, most of the invariant tori carrying quasi-periodic motion persist, getting only slightly deformed. The measure of the persisting invariant tori grows as the size of the perturbation shrinks. In the first part of the thesis we use a Renormalization Group (RG) scheme in order to prove the classical KAM result in the case of a non-analytic perturbation (the latter is only assumed to have continuous derivatives up to a sufficiently large order). We proceed by solving a sequence of problems in which the perturbations are analytic approximations of the original one. We finally show that the approximate solutions converge to a differentiable solution of our original problem. In the second part we use an RG scheme with continuous scales, so that instead of solving an iterative equation as in the classical RG KAM, we end up solving a partial differential equation. This allows us to reduce the complications of treating a sequence of iterative equations to the use of the Banach fixed point theorem in a suitable Banach space.
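The standard setting behind this statement (the textbook form, not the thesis' specific normalization) is a near-integrable Hamiltonian in action-angle variables:

```latex
H(I,\theta) = H_{0}(I) + \varepsilon H_{1}(I,\theta),
\qquad (I,\theta) \in \mathbb{R}^{n} \times \mathbb{T}^{n},
\qquad \det\Bigl(\frac{\partial^{2} H_{0}}{\partial I\,\partial I}\Bigr) \neq 0 .
% The persisting tori are those with Diophantine frequency vector:
\omega(I) = \frac{\partial H_{0}}{\partial I}, \qquad
|\omega \cdot k| \;\geq\; \frac{\gamma}{|k|^{\tau}}
\quad \text{for all } k \in \mathbb{Z}^{n} \setminus \{0\}.
```

The excluded non-Diophantine frequencies form a set of measure O(γ), which is the sense in which "most" tori survive for small ε.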

Relevance: 10.00%

Abstract:

The monograph dissertation deals with kernel integral operators and their mapping properties on Euclidean domains. The associated kernels are weakly singular, and examples of such kernels are given by Green functions of certain elliptic partial differential equations. It is well known that the mapping properties of the corresponding Green operators can be used to deduce a priori estimates for the solutions of these equations. In the dissertation, natural size and cancellation conditions are quantified for kernels defined in domains. These kernels induce integral operators which are then composed with any partial differential operator of prescribed order, depending on the size of the kernel. The main object of study in this dissertation is the boundedness properties of such compositions, and the main result is the characterization of their L^p-boundedness on suitably regular domains. In case the aforementioned kernels are defined in the whole Euclidean space, their partial derivatives of prescribed order turn out to be so-called standard kernels, which arise in connection with singular integral operators. The L^p-boundedness of singular integrals is characterized by the T1 theorem, which is originally due to David and Journé and was published in 1984 (Ann. of Math. 120). The main result of the dissertation can be interpreted as a T1 theorem for weakly singular integral operators. The dissertation also deals with special convolution-type weakly singular integral operators defined on Euclidean spaces.
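For reference, the size and smoothness conditions defining a standard (Calderón-Zygmund) kernel take the familiar form below, while a weakly singular kernel satisfies a less singular size bound; the exponents shown are the usual textbook ones, not quoted from the dissertation:

```latex
% Standard kernel on \mathbb{R}^{n}, with some 0 < \delta \le 1:
|K(x,y)| \;\le\; \frac{C}{|x-y|^{n}},
\qquad
|K(x,y) - K(x',y)| \;\le\; C\,\frac{|x-x'|^{\delta}}{|x-y|^{n+\delta}}
\quad \text{for } |x-x'| \le \tfrac{1}{2}\,|x-y| .
% A weakly singular kernel of order \alpha \in (0,n) is instead of size
|K(x,y)| \;\le\; \frac{C}{|x-y|^{\,n-\alpha}} .
```

Differentiating a weakly singular kernel of order α a total of α times (when α is an integer) is what produces standard kernels, which is the mechanism behind reading the main result as a T1 theorem.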

Relevance: 10.00%

Abstract:

The concept of an atomic decomposition was introduced by Coifman and Rochberg (1980) for weighted Bergman spaces on the unit disk. By the Riemann mapping theorem, functions in every simply connected domain in the complex plane have an atomic decomposition. However, a decomposition resulting from a conformal mapping of the unit disk tends to be very implicit and often lacks a clear connection to the geometry of the domain into which it has been mapped. The lattice of points at which the atoms of the decomposition are evaluated usually follows the geometry of the original domain, but after mapping the domain into another this connection is easily lost and the layout of points becomes seemingly random. In the first article we construct an atomic decomposition directly on a weighted Bergman space on a class of regulated, simply connected domains. The construction uses the geometric properties of the regulated domain, but does not explicitly involve any conformal Riemann map from the unit disk. It is known that the Bergman projection is not bounded on the space L-infinity of bounded measurable functions. Taskinen (2004) introduced the locally convex spaces LV-infinity, consisting of measurable functions, and HV-infinity, consisting of analytic functions on the unit disk, the latter being a closed subspace of the former. They have the property that the Bergman projection is continuous from LV-infinity onto HV-infinity and, in some sense, the space HV-infinity is the smallest possible substitute for the space H-infinity of analytic functions. In the second article we extend the above result to a smoothly bounded strictly pseudoconvex domain. Here the related reproducing kernels are usually not known explicitly, and thus the proof of the continuity of the Bergman projection is based on generalised Forelli-Rudin estimates instead of integral representations. The minimality of the space LV-infinity is shown by using peaking functions first constructed by Bell (1981).
Taskinen (2003) showed that on the unit disk the space HV-infinity admits an atomic decomposition. This result is generalised in the third article by constructing an atomic decomposition for the space HV-infinity on a smoothly bounded strictly pseudoconvex domain. In this case every function can be presented as a linear combination of atoms such that the coefficient sequence belongs to a suitable Köthe co-echelon space.
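Schematically, an atomic decomposition in the Coifman-Rochberg spirit represents every function in the space as an absolutely convergent sum of kernel-type atoms evaluated on a lattice (z_k) in the disk; the exponents β and b below depend on the particular space and are left unspecified here rather than taken from the articles:

```latex
f(z) \;=\; \sum_{k} c_{k}\,
  \frac{\bigl(1-|z_{k}|^{2}\bigr)^{\beta}}{\bigl(1-\overline{z_{k}}\,z\bigr)^{b}} ,
```

so that membership of f in the space is encoded by the size of the coefficient sequence (c_k) alone, which is exactly the role played by the Köthe co-echelon space in the third article.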

Relevance: 10.00%

Abstract:

The research in model theory has extended from the study of elementary classes to non-elementary classes, i.e. to classes which are not completely axiomatizable in elementary logic. The main theme has been the attempt to generalize tools from elementary stability theory to cover more applications arising in other branches of mathematics. In this doctoral thesis we introduce finitary abstract elementary classes, a non-elementary framework of model theory. These classes are a special case of abstract elementary classes (AEC), introduced by Saharon Shelah in the 1980s. We have collected a set of properties for classes of structures which enable us to develop a 'geometric' approach to stability theory, including an independence calculus, in a very general framework. The thesis studies AECs with amalgamation, joint embedding, arbitrarily large models, countable Löwenheim-Skolem number and finite character. The novel idea is the property of finite character, which enables the use of a notion of a weak type instead of the usual Galois type. Notions of simplicity, superstability, Lascar strong type, primary model and U-rank are introduced for finitary classes. A categoricity transfer result is proved for simple, tame finitary classes: categoricity in any uncountable cardinal transfers upwards and to all cardinals above the Hanf number. Unlike previous categoricity transfer results of equal generality, the theorem does not assume that the categoricity cardinal is a successor. The thesis consists of three independent papers. All three papers are joint work with Tapani Hyttinen.

Relevance: 10.00%

Abstract:

Stochastic filtering is, in general, the estimation of indirectly observed states given observed data. This means working with conditional expectations, which are among the most accurate estimators given the observations, in the setting of a probability space. In my thesis, I present the theory of filtering using two different kinds of observation process: a diffusion process, discussed in the first chapter, and a counting process, introduced in the third chapter. Most of the fundamental results of stochastic filtering are stated in the form of equations, such as the unnormalized Zakai equation, which leads to the Kushner-Stratonovich equation. The latter, also known as the normalized Zakai equation or the Fujisaki-Kallianpur-Kunita (FKK) equation, shows how the estimates based on a diffusion process and on a counting process diverge. I also present an example for the linear Gaussian case, which underlies the construction of the so-called Kalman-Bucy filter. Since the unnormalized and normalized Zakai equations are stated in terms of the conditional distribution, a density for these distributions is developed through these equations and stated in Kushner's theorem. However, Kushner's theorem takes the form of a stochastic partial differential equation whose solution must be verified to exist and be unique, which is covered in the second chapter.
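To make the linear Gaussian case concrete, here is a hypothetical one-dimensional, discrete-time Kalman filter sketch (the discrete-time analogue of the Kalman-Bucy filter; the model x' = a*x + noise, y = c*x + noise and all parameter values are illustrative, not taken from the thesis):

```python
def kalman_step(x_est, p_est, y, a=1.0, c=1.0, q=0.01, r=0.1):
    """One predict/update cycle of a scalar Kalman filter.

    x_est, p_est: current state estimate and its variance
    y: new observation; a, c: state and observation coefficients
    q, r: process and observation noise variances
    """
    # Predict: propagate the estimate and its variance through the dynamics.
    x_pred = a * x_est
    p_pred = a * p_est * a + q
    # Update: blend prediction and observation via the Kalman gain.
    gain = p_pred * c / (c * p_pred * c + r)
    x_new = x_pred + gain * (y - c * x_pred)
    p_new = (1.0 - gain * c) * p_pred
    return x_new, p_new

# Feeding a constant observation drives the estimate towards it
# while the estimation variance settles to a steady-state value.
x, p = 0.0, 1.0
for _ in range(50):
    x, p = kalman_step(x, p, y=1.0)
print(round(x, 3))  # 1.0
```

In continuous time the same predict/update structure becomes the pair of differential equations for the conditional mean and variance that define the Kalman-Bucy filter.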

Relevance: 10.00%

Abstract:

Tools known as maximal functions are frequently used in harmonic analysis when studying the local behaviour of functions. Typically they measure the suprema of local averages of non-negative functions. It is essential that the size (more precisely, the L^p-norm) of the maximal function is comparable to the size of the original function. When dealing with families of operators between Banach spaces we are often forced to replace the uniform bound with the larger R-bound. Hence such a replacement is also needed in the maximal function for functions taking values in spaces of operators. More specifically, the suprema of norms of local averages (i.e. their uniform bound in the operator norm) have to be replaced by their R-bound. This procedure gives us the Rademacher maximal function, which was introduced by Hytönen, McIntosh and Portal in order to prove a certain vector-valued Carleson's embedding theorem. They noticed that the sizes of an operator-valued function and its Rademacher maximal function are comparable for many common range spaces, but not for all. Certain requirements on the type and cotype of the spaces involved are necessary for this comparability, henceforth referred to as the “RMF-property”. It was shown that other objects and parameters appearing in the definition, such as the domain of functions and the exponent p of the norm, make no difference to this. After a short introduction to randomized norms and geometry in Banach spaces we study the Rademacher maximal function on Euclidean spaces. The requirements on type and cotype are considered, providing examples of spaces without RMF. L^p-spaces are shown to have RMF not only for p greater than or equal to 2 (when it is trivial) but also for 1 < p < 2. A dyadic version of Carleson's embedding theorem is proven for scalar- and operator-valued functions.
As the analysis with dyadic cubes can be generalized to filtrations on sigma-finite measure spaces, we consider the Rademacher maximal function in this case as well. It turns out that the RMF-property is independent of the filtration and the underlying measure space and that it is enough to consider very simple ones known as Haar filtrations. Scalar- and operator-valued analogues of Carleson's embedding theorem are also provided. With the RMF-property proven independent of the underlying measure space, we can use probabilistic notions and formulate it for martingales. Following a similar result for UMD-spaces, a weak type inequality is shown to be (necessary and) sufficient for the RMF-property. The RMF-property is also studied using concave functions giving yet another proof of its independence from various parameters.
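The scalar model for these objects is the Hardy-Littlewood maximal function, whose standard form and two-sided L^p-comparability are recalled below for orientation (this is the classical statement, not quoted from the thesis):

```latex
Mf(x) \;=\; \sup_{r>0} \frac{1}{|B(x,r)|} \int_{B(x,r)} |f(y)|\,dy,
\qquad
\|f\|_{L^{p}} \;\le\; \|Mf\|_{L^{p}} \;\le\; C_{p}\,\|f\|_{L^{p}}
\quad (1 < p \le \infty).
```

The Rademacher maximal function replaces the supremum of averages by an R-bound of the family of averaging operators, and the RMF-property asks for the same kind of two-sided comparability in the operator-valued setting.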

Relevance: 10.00%

Abstract:

At the Tevatron, the total p_bar-p cross-section has been measured by CDF at 546 GeV and 1.8 TeV, and by E710/E811 at 1.8 TeV. The two results at 1.8 TeV disagree by 2.6 standard deviations, introducing large uncertainties into extrapolations to higher energies. At the LHC, the TOTEM collaboration is preparing to resolve the ambiguity by measuring the total p-p cross-section with a precision of about 1%. As at the Tevatron experiments, the luminosity-independent method based on the Optical Theorem will be used. The Tevatron experiments have also performed a vast range of studies of soft and hard diffractive events, partly with antiproton tagging by Roman Pots, partly with rapidity gap tagging. At the LHC, the combined CMS/TOTEM experiments will carry out their diffractive programme with an unprecedented rapidity coverage and Roman Pot spectrometers on both sides of the interaction point. The physics menu comprises detailed studies of soft diffractive differential cross-sections, diffractive structure functions, rapidity gap survival and exclusive central production by Double Pomeron Exchange.
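The luminosity-independent method combines the Optical Theorem with a simultaneous measurement of the elastic and inelastic rates; in its conventional form (the standard formula, not quoted from this abstract):

```latex
% Optical theorem: \sigma_{\mathrm{tot}} \propto \operatorname{Im} f_{\mathrm{el}}(t=0).
% Normalizing by the total observed rate eliminates the luminosity:
\sigma_{\mathrm{tot}}
  \;=\; \frac{16\pi}{1+\rho^{2}} \cdot
        \frac{\left.\mathrm{d}N_{\mathrm{el}}/\mathrm{d}t\right|_{t=0}}
             {N_{\mathrm{el}} + N_{\mathrm{inel}}} ,
```

where ρ is the ratio of the real to the imaginary part of the forward elastic scattering amplitude; since only event rates enter, no independent luminosity measurement is needed.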

Relevance: 10.00%

Abstract:

Ingarden (1962, 1964) postulates that artworks exist in an “Objective purely intentional” way. According to this view, objectivity and subjectivity are opposed forms of existence, parallel to the opposition between realism and idealism. Using arguments from cognitive science, experimental psychology and semiotics, this lecture proposes that, particularly in aesthetic phenomena, realism and idealism are not pure oppositions; rather, they are aspects of a single process of cognition in different strata. Furthermore, the concept of realism can be conceived as an empirical extreme of idealism, and the concept of idealism as a pre-operative extreme of realism. Both kinds of systems of knowledge are mutually associated by a synecdoche, performing major tasks of mental ordering and categorisation. This contribution suggests that the supposed opposition between objectivity and subjectivity raises, first of all, a problem of translatability, more than a problem of existential categories. Synecdoche seems to be a very basic transaction of the mind, establishing ontologies (in the more Ingardean sense of the term). Wegrzecki (1994, 220) defines ontology as “the central domain of philosophy to which its other parts directly or indirectly refer”. Thus, ontology operates within philosophy as the synecdoche does within language, pointing the sense of the general toward the particular and/or vice versa. The many affinities and similarities between different sign systems, like those found across the interrelationships of the arts, are embedded in a transversal, synecdochic intersemiosis. An important question, from this view, is whether Ingarden's pure objectivities rest basically on the impossibility of translation, therefore being absolute self-referential constructions. In such a case, it would be impossible to translate pure intentionality into something else, such as acts or products.

Relevance: 10.00%

Abstract:

Nucleation is the first step in a phase transition, where small nuclei of the new phase start appearing in the metastable old phase, such as the appearance of small liquid clusters in a supersaturated vapor. Nucleation is important in various industrial and natural processes, including atmospheric new particle formation: between 20% and 80% of the atmospheric particle concentration is due to nucleation. These atmospheric aerosol particles have a significant effect on both climate and human health. Different simulation methods are often applied when studying things that are difficult or even impossible to measure, or when trying to distinguish between the merits of various theoretical approaches. Such simulation methods include, among others, molecular dynamics and Monte Carlo simulations. In this work molecular dynamics simulations of the homogeneous nucleation of Lennard-Jones argon have been performed. Homogeneous means that the nucleation does not occur on a pre-existing surface. The simulations include runs where the starting configuration is a supersaturated vapor and the nucleation event is observed during the simulation (direct simulations), as well as simulations of a cluster in equilibrium with a surrounding vapor (indirect simulations). The latter type is a necessity when the conditions prevent the occurrence of a nucleation event within a reasonable timeframe in the direct simulations. The effect of various temperature control schemes on the nucleation rate (the rate of appearance of clusters that are equally able to grow to macroscopic sizes and to evaporate) was studied and found to be relatively small. The method used to extract the nucleation rate was also found to be of minor importance. The cluster sizes from the direct and indirect simulations were used in conjunction with the nucleation theorem to calculate formation free energies for the clusters in the indirect simulations.
The results agreed with density functional theory, but were higher than values from Monte Carlo simulations. The formation energies were also used to calculate the surface tension of the clusters. The sizes of the clusters in the direct and indirect simulations were compared, showing that the direct-simulation clusters have more atoms between the liquid-like core of the cluster and the surrounding vapor. Finally, the performance of various nucleation theories in predicting the simulated nucleation rates was investigated, and the results, among other things, once again highlighted the inadequacy of the classical nucleation theory that is commonly employed in nucleation studies.
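The pair interaction underlying such simulations of argon is the Lennard-Jones potential; a minimal sketch in reduced units (ε = σ = 1 are the conventional reduced units, not values quoted from the thesis):

```python
def lennard_jones(r: float, epsilon: float = 1.0, sigma: float = 1.0) -> float:
    """Lennard-Jones pair potential V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# The potential vanishes at r = sigma and reaches its minimum of -epsilon
# at the equilibrium pair separation r = 2**(1/6) * sigma.
print(lennard_jones(1.0))                     # 0.0
print(round(lennard_jones(2 ** (1 / 6)), 9))  # -1.0
```

The steep r^-12 repulsion and the r^-6 attraction are what make this potential a standard stand-in for argon in nucleation studies.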