14 results for Veja and biased
in Helda - Digital Repository of the University of Helsinki
Abstract:
The cosmological observations of light from type Ia supernovae, the cosmic microwave background and the galaxy distribution seem to indicate that the expansion of the universe has accelerated during the latter half of its age. Within standard cosmology, this is ascribed to dark energy, a uniform fluid with large negative pressure that gives rise to repulsive gravity but also entails serious theoretical problems. Understanding the physical origin of the perceived accelerated expansion has been described as one of the greatest challenges in theoretical physics today. In this thesis, we discuss the possibility that, instead of dark energy, the acceleration is caused by an effect of nonlinear structure formation on light, ignored in standard cosmology. A physical interpretation of the effect goes as follows: as the initially smooth matter clusters over time into filaments of opaque galaxies, the regions through which the detectable light travels become emptier and emptier relative to the average. Because the developing voids expand the faster the lower their matter density becomes, the expansion can accelerate along our line of sight without local acceleration, potentially obviating the need for the mysterious dark energy. In addition to offering a natural physical interpretation of the acceleration, we have further shown that an inhomogeneous model is able to match the main cosmological observations without dark energy, resulting in a concordant picture of the universe with 90% dark matter, 10% baryonic matter and 15 billion years as the age of the universe. The model also provides an elegant solution to the coincidence problem: if induced by the voids, the onset of the perceived acceleration naturally coincides with the formation of the voids. Additional future tests include quantitative predictions for angular deviations and a theoretical derivation of the model to reduce the required phenomenology.
A spin-off of the research is a physical classification of the cosmic inhomogeneities according to how they could induce accelerated expansion along our line of sight. We have identified three physically distinct mechanisms: global acceleration due to spatial variations in the expansion rate, faster local expansion rate due to a large local void and biased light propagation through voids that expand faster than the average. A general conclusion is that the physical properties crucial to account for the perceived acceleration are the growth of the inhomogeneities and the inhomogeneities in the expansion rate. The existence of these properties in the real universe is supported by both observational data and theoretical calculations. However, better data and more sophisticated theoretical models are required to vindicate or disprove the conjecture that the inhomogeneities are responsible for the acceleration.
Abstract:
The present research focused on motivational and personality traits measuring individual differences in the experience of negative affect, in reactivity to negative events, and in the tendency to avoid threats. In this thesis, such traits (i.e., neuroticism and dispositional avoidance motivation) are jointly referred to as trait avoidance motivation. The seven studies presented here examined the moderators of such traits in predicting risk judgments, negatively biased processing, and adjustment. Given that trait avoidance motivation encompasses reactivity to negative events and a tendency to avoid threats, it is surprising that this trait does not seem to be related to risk judgments and that it seems to be inconsistently related to negatively biased information processing. Previous work thus suggests that some variable(s) moderate these relations. Furthermore, recent research has suggested that despite the close connection between trait avoidance motivation and (mal)adjustment, measures of cognitive performance may moderate this connection. However, it is unclear whether this moderation is due to different response processes between individuals with different cognitive tendencies or abilities, or to a genuinely buffering effect of high cognitive ability against the negative consequences of high trait avoidance motivation. Studies 1-3 showed that there is a modest direct relation between trait avoidance motivation and risk judgments, but Studies 2-3 demonstrated that state motivation moderates this relation. In particular, individuals in an avoidance state made high risk judgments regardless of their level of trait avoidance motivation. This result explains the disparity between the theoretical conceptualization of avoidance motivation and the results of previous studies suggesting that the relation between trait avoidance motivation and risk judgments is weak or nonexistent.
Studies 5-6 examined threat identification tendency as a moderator of the relationship between trait avoidance motivation and negatively biased processing. However, no evidence for such moderation was found. Furthermore, in line with previous work, the results of Studies 5-6 suggested that trait avoidance motivation is inconsistently related to negatively biased processing, implying that theories concerning traits and information processing may need refining. Study 7 examined cognitive ability as a moderator of the relation between trait avoidance motivation and adjustment, and demonstrated that cognitive ability moderates the relation between trait avoidance motivation and indicators of both self-reported and objectively measured adjustment. Thus, the results of Study 7 supported the buffer explanation for the moderating influence of cognitive performance. To summarize, the results showed that it is possible to find factors that consistently moderate the relations between traits and important outcomes (e.g., adjustment). Identifying such factors and studying their interplay with traits is one of the most important goals of current personality research. The present thesis contributed to this line of work in relation to trait avoidance motivation.
Abstract:
Eutrophication of the Baltic Sea is a serious problem. This thesis estimates the benefit to Finns from reduced eutrophication in the Gulf of Finland, the most eutrophied part of the Baltic Sea, by applying the choice experiment method, which belongs to the family of stated preference methods. Because stated preference methods have been subject to criticism, e.g., due to their hypothetical survey context, this thesis contributes to the discussion by studying two anomalies that may lead to biased welfare estimates: respondent uncertainty and preference discontinuity. The former refers to the difficulty of stating one's preferences for an environmental good in a hypothetical context. The latter implies a departure from the continuity assumption of conventional consumer theory, which forms the basis for the method and the analysis. In the three essays of the thesis, discrete choice data are analyzed with the multinomial logit and mixed logit models. On average, Finns are willing to contribute to the water quality improvement. The probability of willingness increases with residential or recreational contact with the gulf, higher than average income, younger than average age, and the absence of dependent children in the household. On average, the most important characteristic of water quality for Finns is water clarity, followed by fewer occurrences of blue-green algae. For future nutrient reduction scenarios, the annual mean household willingness to pay estimates range from 271 to 448 euros, and the aggregate welfare estimates for Finns range from 28 billion to 54 billion euros, depending on the model and the intensity of the reduction. Of the respondents (N=726), 72.1% state in a follow-up question that they are either Certain or Quite certain about their answer when choosing the preferred alternative in the experiment.
Based on the analysis of other follow-up questions and another sample (N=307), 10.4% of the respondents are identified as potentially having discontinuous preferences. In relation to both anomalies, respondent- and questionnaire-specific variables are found among the underlying causes, and a departure from standard analysis may improve the model fit and the efficiency of estimates, depending on the chosen modeling approach. The introduction of uncertainty about the future state of the Gulf increases the acceptance of the valuation scenario, which may indicate increased credibility of the proposed scenario. In conclusion, modeling preference heterogeneity is an essential part of the analysis of discrete choice data. The results regarding uncertainty in stating one's preferences and non-standard choice behavior are promising: accounting for these anomalies in the analysis may improve the precision of the estimates of the benefit from reduced eutrophication in the Gulf of Finland.
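The multinomial logit model mentioned above has a closed-form choice probability, and willingness to pay follows as a ratio of estimated coefficients. A minimal sketch of both ideas; the attribute names and coefficient values here are illustrative assumptions, not estimates from the thesis:

```python
import math

def mnl_probabilities(utilities):
    """Multinomial logit: P(i) = exp(V_i) / sum_j exp(V_j)."""
    m = max(utilities)                        # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical linear utility V = b_clarity*clarity + b_algae*algae + b_cost*cost
beta = {"clarity": 1.2, "algae": -0.8, "cost": -0.05}

def utility(alternative):
    return sum(beta[k] * alternative[k] for k in beta)

alternatives = [
    {"clarity": 2.0, "algae": 1.0, "cost": 40.0},   # improvement scenario with a cost
    {"clarity": 0.0, "algae": 3.0, "cost": 0.0},    # status quo
]
probs = mnl_probabilities([utility(a) for a in alternatives])

# Willingness to pay for a one-unit attribute improvement is the ratio of
# the attribute coefficient to the negative of the cost coefficient:
wtp_clarity = -beta["clarity"] / beta["cost"]
```

In a mixed logit, the coefficients would instead be drawn from a distribution across respondents, which is one way of modeling the preference heterogeneity discussed above.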
Abstract:
In this paper, both documentary and natural proxy data have been used to improve the accuracy of palaeoclimatic knowledge in Finland since the 18th century. Early meteorological observations from Turku (1748-1800) were first analyzed as a potential source of information on climate variability. The reliability of the calculated mean temperatures was evaluated by comparing them with contemporary temperature records from Stockholm, St. Petersburg and Uppsala. The resulting monthly, seasonal and yearly mean temperatures from 1748 to 1800 were compared with the present-day mean values (1961-1990): the comparison suggests that the winters of the period 1749-1800 were 0.8 °C colder than today, while the summers were 0.4 °C warmer. Over the same period, springs were 0.9 °C and autumns 0.1 °C colder than today. Despite their uncertainties when compared with modern meteorological data, early temperature measurements offer direct and daily information about the weather for all months of the year, in contrast with other proxies. Secondly, early meteorological observations from Tornio (1737-1749) and Ylitornio (1792-1838) were used to study the temporal behaviour of the climate-tree growth relationship during the past three centuries in northern Finland. Analyses showed that the correlations between ring widths and mid-summer (July) temperatures did not vary significantly as a function of time. Early (June) and late summer (August) mean temperatures were secondary to mid-summer temperatures in controlling the radial growth. According to the dataset used, there was no clear signature of temporally reduced sensitivity of Scots pine ring widths to mid-summer temperatures over the periods of early and modern meteorological observations. Thirdly, plant phenological data together with tree-rings from south-west Finland since 1750 were examined as a palaeoclimate indicator.
The information from the fragmentary, partly overlapping, partly nonsystematically biased plant phenological records of 14 different phenomena was combined into one continuous time series of phenological indices. The indices were found to be reliable indicators of February to June temperature variations. In contrast, there was no correlation between the phenological indices and the precipitation data. Moreover, the correlations between the studied tree-rings and spring temperatures varied as a function of time, and hence their use in palaeoclimate reconstruction is questionable. The use of present tree-ring datasets for palaeoclimate purposes may become possible after the application of more sophisticated calibration methods. Climate variability since the 18th century is perhaps best seen in the fourth paper's multiproxy spring temperature reconstruction of south-west Finland. With the help of transfer functions, an attempt has been made to utilize both documentary and natural proxies. The reconstruction was verified with statistics showing good agreement between the reconstructed and observed temperatures. According to the proxies and modern meteorological observations from Turku, springs have become warmer, featuring a warming trend since around the 1850s. Over the period from 1750 to around 1850, springs featured larger multidecadal low-frequency variability, as well as a smaller range of annual temperature variations. The coldest springtimes occurred around the 1840s and 1850s and in the first decade of the 19th century. Particularly warm periods occurred in the 1760s, 1790s, 1820s, 1930s, 1970s and from 1987 onwards, although cold springs also occurred in this period, such as the springs of 1994 and 1996. On the basis of the available material, long-term temperature changes have been related to changes in the atmospheric circulation, such as the North Atlantic Oscillation (February-June).
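Combining fragmentary, partly overlapping records into one continuous index series can be illustrated by standardizing each record over the years it actually covers and then averaging whatever standardized values exist for each year. This is only a schematic sketch with toy data; the thesis's actual index construction and bias corrections are more involved:

```python
from statistics import mean, stdev

def standardize(series):
    """z-score one record over the years it actually covers (None = missing)."""
    observed = [v for v in series.values() if v is not None]
    mu, sd = mean(observed), stdev(observed)
    return {yr: (v - mu) / sd for yr, v in series.items() if v is not None}

def composite_index(records):
    """Average the standardized values available for each year."""
    z = [standardize(r) for r in records]
    years = sorted({yr for r in z for yr in r})
    return {yr: mean(r[yr] for r in z if yr in r) for yr in years}

# Three toy records (e.g. day-of-year of a phenological event),
# fragmentary and only partly overlapping:
records = [
    {1750: 120, 1751: 125, 1752: 118, 1753: None},
    {1751: 130, 1752: 122, 1753: 128, 1754: 126},
    {1753: 140, 1754: 135, 1755: 138, 1756: 132},
]
index = composite_index(records)   # one continuous 1750-1756 index series
```

Standardizing first puts records with different baselines and units on a common scale, so a year covered by only one record still contributes a comparable value to the composite.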
Abstract:
This dissertation consists of an introductory section and three theoretical essays analyzing the interaction of corporate governance and restructuring. The essays adopt an incomplete contracts approach and analyze the role of different institutional designs to facilitate the alignment of the objectives of shareholders and management (or employees) over the magnitude of corporate restructuring. The first essay analyzes how a firm's choice of production technology affects the employees' human capital investment. In the essay, the owners of the firm can choose between a specific and a general technology that both require a costly human capital investment by the employees. The specific technology is initially superior in using the human capital of employees but, in contrast to the general technology, it is not compatible with future innovations. As a result, anticipated changes in the specific technology diminish the ex ante incentives of the employees to invest in human capital unless the shareholders grant the employees specific governance mechanisms (a right of veto, severance pay) so as to protect their investments. The results of the first essay indicate that the level of protection that the shareholders are willing to offer falls short of the socially desirable one. Furthermore, when restructuring opportunities become more abundant, it becomes more attractive both socially and from the viewpoint of the shareholders to initially adopt the general technology. The second essay analyzes how the allocation of authority within the firm interacts with the owners' choice of business strategy when the ability of the owners to monitor the project proposals of the management is biased in favor of the status quo strategy. The essay shows that a bias in the monitoring ability will affect not only the allocation of authority within the firm but also the choice of business strategy. 
In particular, when delegation has positive managerial incentive effects, delegation turns out to be more attractive under the new business strategy because the improved managerial incentives are a way for the owners to compensate for their own reduced information-gathering ability. This effect, however, simultaneously makes the owners hesitant to switch the strategy, since doing so would involve a more frequent loss of control over the project choice. Consequently, the owners' lack of knowledge of the new business strategy may lead to a suboptimal choice of strategy. The third essay analyzes the implications of the CEO succession process for the ideal board structure. In this essay, the presence of the departing CEO on the board facilitates the board's ability to find a matching successor and to counsel him. However, the ex-CEO's presence may simultaneously weaken the board's ability to restructure, since the predecessor may use the opportunity to distort the successor's project choice. The results of the essay suggest that the extent of restructuring gains, the firm's ability to hire good outside directors and the importance of the board's advisory role affect when and for how long the shareholders may want to nominate the predecessor to the board.
Abstract:
The dissertation consists of an introductory chapter and three essays that apply search-matching theory to study the interaction of labor market frictions, technological change and macroeconomic fluctuations. The first essay studies the impact of capital-embodied growth on equilibrium unemployment by extending a vintage capital/search model to incorporate vintage human capital. In addition to the capital obsolescence (or creative destruction) effect that tends to raise unemployment, vintage human capital introduces a skill obsolescence effect of faster growth that has the opposite sign. Faster skill obsolescence reduces the value of unemployment, and hence wages, and leads to more job creation and less job destruction, unambiguously reducing unemployment. The second essay studies the effect of skill-biased technological change on skill mismatch and the allocation of workers and firms in the labor market. By allowing workers to invest in education, we extend a matching model with two-sided heterogeneity to incorporate an endogenous distribution of high- and low-skill workers. We consider various possibilities for the cost of acquiring skills and show that while unemployment increases in most scenarios, the effect on the distribution of vacancy and worker types varies according to the structure of skill costs. When the model is extended to incorporate endogenous labor market participation, we show that the unemployment rate becomes less informative about the state of the labor market as the participation margin absorbs employment effects. The third essay studies the effects of labor taxes on equilibrium labor market outcomes and macroeconomic dynamics in a New Keynesian model with matching frictions. Three policy instruments are considered: a marginal tax and a tax subsidy to produce tax progression schemes, and a replacement ratio to account for variability in outside options.
In equilibrium, the marginal tax rate and replacement ratio dampen economic activity whereas tax subsidies boost the economy. The marginal tax rate and replacement ratio amplify shock responses whereas employment subsidies weaken them. The tax instruments affect the degree to which the wage absorbs shocks. We show that increasing tax progression when taxation is initially progressive is harmful for steady state employment and output, and amplifies the sensitivity of macroeconomic variables to shocks. When taxation is initially proportional, increasing progression is beneficial for output and employment and dampens shock responses.
Abstract:
One major reason for the global decline of biodiversity is habitat loss and fragmentation. Conservation areas can be designed to reduce biodiversity loss, but as resources are limited, conservation efforts need to be prioritized in order to achieve the best possible outcomes. The field of systematic conservation planning developed as a response to opportunistic approaches to conservation that often resulted in biased representation of biological diversity. The last two decades have seen the development of increasingly sophisticated methods that account for information about biodiversity conservation goals (benefits), economic considerations (costs) and socio-political constraints. In this thesis I focus on two general topics related to systematic conservation planning. First, I address two aspects of the question of how biodiversity features should be valued. (i) I investigate the extremely important but often neglected issue of differential prioritization of species for conservation. Species prioritization can be based on various criteria and is always goal-dependent, but it can also be implemented in a scientifically more rigorous way than is the usual practice. (ii) I introduce a novel framework for conservation prioritization, based on continuous benefit functions that convert increasing levels of biodiversity feature representation into increasing conservation value, using the principle that more is better. Traditional target-based systematic conservation planning is a special case of this approach in which a step function is used as the benefit function. We have further expanded the benefit function framework for area prioritization to address issues such as protected area size and habitat vulnerability. In the second part of the thesis I address the application of community-level modelling strategies to conservation prioritization.
One of the most serious issues in systematic conservation planning currently is not a deficiency of methodology for selection and design, but simply the lack of data. Community-level modelling offers a surrogate strategy that makes conservation planning more feasible in data-poor regions. We have reviewed the available community-level approaches to conservation planning. These range from simplistic classification techniques to sophisticated modelling and selection strategies. We have also developed a general and novel community-level approach to conservation prioritization that significantly improves on previously available methods. This thesis introduces further degrees of realism into conservation planning methodology. The benefit-function-based conservation prioritization framework largely circumvents the problematic phase of target setting and, by allowing trade-offs between species representations, provides a more flexible and hopefully more attractive approach for conservation practitioners. The community-level approach seems highly promising and should prove valuable for conservation planning, especially in data-poor regions. Future work should focus on integrating prioritization methods to deal with the multiple aspects that jointly influence the prioritization process, and on further testing and refining the community-level strategies using real, large datasets.
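The contrast between target-based planning and a continuous benefit function can be sketched with two toy functions; the concave functional form and its parameter are assumptions chosen for illustration, not the thesis's actual formulation:

```python
def target_benefit(representation, target):
    """Traditional target-based planning: a step function that grants
    full value once the target is met and nothing below it."""
    return 1.0 if representation >= target else 0.0

def continuous_benefit(representation, half_saturation=0.25):
    """A concave 'more is better' benefit function: value always rises
    with representation, with diminishing returns."""
    return representation / (representation + half_saturation)

# Protecting 24% vs. 26% of a species' range, with a 25% target:
step_low, step_high = target_benefit(0.24, 0.25), target_benefit(0.26, 0.25)
cont_low, cont_high = continuous_benefit(0.24), continuous_benefit(0.26)
# The step function jumps from 0 to 1 across the target, whereas the
# continuous function changes smoothly through it.
```

Because the continuous function rewards every increment of representation, a prioritization algorithm can trade a small loss for one species against a larger gain for another, which a step function at the target makes impossible.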
Abstract:
One of the main aims of evolutionary biology is to explain why organisms vary phenotypically as they do. Proximately, this variation arises from genetic differences and from environmental influences, the latter of which is referred to as phenotypic plasticity. Phenotypic plasticity is thus a central concept in evolutionary biology, and understanding its relative importance in causing phenotypic variation and differentiation is important, for instance in anticipating the consequences of human-induced environmental changes. The aim of this thesis was to study geographic variation and local adaptation, as well as sex ratios and environmental sex reversal, in the common frog (Rana temporaria). These themes cover three different aspects of phenotypic plasticity, which emerges as the central concept of the thesis. The first two chapters address geographic variation and local adaptation in two potentially thermally adaptive traits, namely the degree of melanism and the relative leg length. The results show that although there is an increasing latitudinal trend in the degree of melanism in wild populations across the Scandinavian Peninsula, this cline has no direct genetic basis and is thus environmentally induced. The second chapter demonstrates that although there is no linear, latitudinally ordered phenotypic trend in relative leg length, as would be expected under Allen's rule (an ecogeographical rule linking extremity length to climatic conditions), there seems to be such a trend at the genetic level, hidden under environmental effects. The first two chapters thus view phenotypic plasticity through its ecological role and evolution, and demonstrate that it can both give rise to phenotypic variation and hide evolutionary patterns in studies that focus solely on phenotypes.
The last three chapters relate to phenotypic plasticity through its ecological and evolutionary role in sex determination, and its consequent effects on population sex ratio, genetic recombination and the evolution of sex chromosomes. The results show that while sex ratios are strongly female-biased and there is evidence of environmental sex reversals, these reversals are unlikely to have caused the sex ratio skew, at least directly. The results demonstrate that environmental sex reversal can have an effect on the evolution of sex chromosomes, as the recombination patterns between them seem to be controlled by phenotypic, rather than genetic, sex. This potentially allows Y chromosomes to recombine, lending support to the recent hypothesis that sex reversal may play an important role in the rejuvenation of Y chromosomes.
Abstract:
The geomagnetic field is one of the most fundamental geophysical properties of the Earth and has significantly contributed to our understanding of the internal structure of the Earth and its evolution. Paleomagnetic and paleointensity data have been crucial in shaping concepts like continental drift and magnetic reversals, as well as in estimating the time when the Earth's core and associated geodynamo processes began. The work of this dissertation is based on reliable Proterozoic and Holocene geomagnetic field intensity data obtained from rocks and archeological artifacts. New archeomagnetic field intensity results are presented for Finland, Estonia, Bulgaria, Italy and Switzerland. The data were obtained using sophisticated laboratory setups as well as various reliability checks and corrections. Inter-laboratory comparisons between three laboratories (Helsinki, Sofia and Liverpool) were performed in order to check the reliability of different paleointensity methods. The new intensity results fill considerable gaps in the master curves for each region investigated. In order to interpret the paleointensity data of the Holocene period, a novel and user-friendly database (GEOMAGIA50) was constructed. This provided a new tool to independently test the reliability of the various techniques and materials used in paleointensity determinations. The results show that archeological artifacts, if well fired, are the most suitable materials. Lavas also yield reliable paleointensity results, although they appear more scattered. This study also shows that reliable estimates are obtained using the Thellier methodology (and its modifications) with reliability checks. Global paleointensity curves during the Paleozoic and Proterozoic have several time gaps with few or no intensity data. To define the global intensity behavior of the Earth's magnetic field during these times, new rock types (meteorite impact rocks) were investigated. Two case histories are presented.
The Ilyinets (Ukraine) impact melt rocks yielded a reliable paleointensity value at 440 Ma (Silurian), whereas the results from the Jänisjärvi impact melts (Russian Karelia, ca. 700 Ma) might be biased towards high intensity values because of non-ideal magnetic mineralogy. The features of the geomagnetic field at 1.1 Ga are not well defined due to problems related to reversal asymmetries observed in Keweenawan data of the Lake Superior region. In this work, new paleomagnetic, paleosecular variation and paleointensity results are reported from coeval diabases from Central Arizona that help in understanding the asymmetry. The results confirm earlier preliminary observations that the asymmetry is larger in Arizona than in the Lake Superior area. Two of the mechanisms proposed to explain the asymmetry remain plausible: plate motion and non-dipole influence.
Abstract:
In this thesis we deal with the concept of risk. The objective is to bring together and draw conclusions from normative information regarding quantitative portfolio management and risk assessment. The first essay concentrates on return dependency. We propose an algorithm for classifying markets into rising and falling. Given the algorithm, we derive a statistic, the Trend Switch Probability, for detecting long-term return dependency in the first moment. The empirical results suggest that the Trend Switch Probability is robust over various volatility specifications. Serial dependency behaves differently in bear and bull markets, however: it is strongly positive in rising markets, whereas in bear markets it is closer to a random walk. Realized volatility, a technique for estimating volatility from high-frequency data, is investigated in essays two and three. In the second essay we find, when measuring realized variance on a set of German stocks, that the second-moment dependency structure is highly unstable and changes randomly. The results also suggest that volatility is non-stationary from time to time. In the third essay we examine the impact of market microstructure on the error between the estimated realized volatility and the volatility of the underlying process. With simulation-based techniques we show that autocorrelation in returns leads to biased variance estimates and that lower sampling frequency and non-constant volatility increase the error variation between the estimated variance and the variance of the underlying process. From these essays we can conclude that volatility is not easily estimated, even from high-frequency data. It is neither very well behaved in terms of stability nor in terms of dependency over time. Based on these observations, we would recommend the use of simple, transparent methods that are likely to be more robust over differing volatility regimes than models with a complex parameter universe.
In analyzing long-term return dependency in the first moment we find that the Trend Switch Probability is a robust estimator. This is an interesting area for further research, with important implications for active asset allocation.
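Realized variance, as investigated in essays two and three, is simply the sum of squared returns over an intraday sampling grid. A hedged sketch on simulated tick data (not the thesis's German stock sample), showing how a lower sampling frequency targets the same quantity with far fewer observations:

```python
import random

def realized_variance(prices):
    """Realized variance: the sum of squared simple returns
    computed over the given sampling grid."""
    returns = [(b - a) / a for a, b in zip(prices, prices[1:])]
    return sum(r * r for r in returns)

random.seed(1)
true_sigma = 0.0005                         # per-tick return standard deviation
prices = [100.0]
for _ in range(10_000):                     # simulate a tick-level price path
    prices.append(prices[-1] * (1 + random.gauss(0, true_sigma)))

rv_full = realized_variance(prices)            # sampled at every tick
rv_sparse = realized_variance(prices[::100])   # sampled every 100th tick
# Both target the same integrated variance (about 10_000 * true_sigma**2),
# but the sparse grid uses 100x fewer observations, so its estimate
# varies far more from one simulated path to the next.
```

With autocorrelated returns, caused for example by microstructure effects such as bid-ask bounce, the cross-product terms in the squared sums no longer vanish in expectation, which is the source of the bias discussed in the third essay.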
Abstract:
Modern-day economics is increasingly inclined to believe that institutions matter for growth, an argument that has been further reinforced by the recent economic crisis. There is also wide consensus on what these growth-promoting institutions should look like, and countries are periodically ranked depending on how their institutional structure compares with the best-practice institutions, mostly in place in the developed world. In this paper, it is argued that "non-desirable" or "second-best" institutions can be beneficial for fostering investment and thus provide a starting point for sustained growth, and that what matters is the appropriateness of institutions to the economy's distance to the frontier or current phase of development. Anecdotal evidence from Japan and South Korea is used as motivation for studying the subject, and a model is presented to describe this phenomenon. In the model, the rigidity or non-rigidity of the institutions is described by entrepreneurial selection. It is assumed that entrepreneurs are the ones taking part in the imitation and innovation of technologies, and that decisions on whether or not their projects are refinanced come from capitalists. The capitalists in turn have no entrepreneurial skills and act merely as financers of projects. The model has two periods and two kinds of entrepreneurs: those with high skills and those with low skills. The society's choice between an imitation-based and an innovation-based strategy is modeled as the trade-off between refinancing a low-skill entrepreneur and investing in the selection of entrepreneurs, which results in a larger fraction of high-skill entrepreneurs with the ability to innovate but less total investment. Finally, a real-world example from India is presented as an initial attempt to test the theory. The data from the example are not included in this paper.
It is noted that the model may lack explanatory power due to difficulties in testing its predictions, but that this should not be seen as a reason to disregard the theory: the solution might lie in developing better tools, not just better theories. The conclusion presented is that institutions do matter. There is no one-size-fits-all solution when it comes to institutional arrangements in different countries, and developing countries should be given space to develop their own institutional structures that cater to their specific needs.
Resumo:
This study deals with how ethnic minorities and immigrants are portrayed in the Finnish print media. The study also asks how media users of various ethnocultural backgrounds make sense of these mediated stories. A more general objective is to elucidate negotiations of belonging and positioning practices in an increasingly complex society. The empirical part of the study is based on content analysis and qualitative close reading of 1,782 articles in five newspapers (Hufvudstadsbladet, Vasabladet, Helsingin Sanomat, Iltalehti and Ilta-Sanomat) during various research periods between 1999 and 2007. Four case studies on print media content are followed up by a focus group study involving 33 newspaper readers of Bosnian, Somali, Russian, and 'native' Finnish backgrounds. The study draws on several academic and intellectual traditions, mainly media and communication studies, sociology and social psychology. The main theoretical framework employed is positioning theory, as developed by Rom Harré and others. Building on this perspective, situational self-positioning, positioning by others, and media positioning are seen as central practices in the negotiation of belonging. In line with contemporary developments in the social sciences, some of these negotiations are seen as occurring in a network-type communicative space. In this space, the media form one of the most powerful institutions in constructing, distributing and legitimising values and ideas of who belongs to 'us', and who does not. The notion of positioning always involves an exclusionary potential. This thesis joins scholars who assert that in order to understand inclusionary and exclusionary mechanisms, the theoretical starting point must be the recognition of a decent and non-humiliating society.
When key insights are distilled from the five empirical cases and related to the main theories, one of the major arguments put forward is that the media were first and foremost concerned with a minority actor's rightful or unlawful belonging to the Finnish welfare system. Moreover, in some cases persistent stereotypes concerning some immigrant groups' motivation to work, pay taxes and thereby contribute are so strong that the general idea of individualism is forgotten in favour of racialised and stagnated views. Discussants of immigrant background also claim that the positions provided for minority actors in the media are not easy to identify with: categories are too narrow, journalists are biased, and the reporting is simplistic and carries labelling potential. Hence, although the will for the communicative space to be more diverse and inclusive exists, and has in many cases been articulated in charters, acts and codes, the positioning of ethnic minorities and immigrants differs significantly from the ideal.
Resumo:
The distinction between a priori and a posteriori knowledge has been the subject of an enormous amount of discussion, but the literature is biased against recognizing the intimate relationship between these forms of knowledge. For instance, it seems to be almost impossible to find an example of pure a priori or a posteriori knowledge. In this paper it will be suggested that distinguishing between the a priori and the a posteriori is more problematic than is often acknowledged, and that a priori and a posteriori resources are in fact used in parallel. We will define this relationship between a priori and a posteriori knowledge as the bootstrapping relationship. As we will see, this relationship gives us reason to seek an altogether novel definition of a priori and a posteriori knowledge. Specifically, we will have to analyse the relationship between a priori knowledge and a priori reasoning, and it will be suggested that the latter serves as a more promising starting point for the analysis of aprioricity. We will also analyse a number of examples from the natural sciences and consider the role of a priori reasoning in them. The focus of this paper is the analysis of the concepts of a priori and a posteriori knowledge rather than the epistemic domain of a priori and a posteriori justification.
Resumo:
Epidemiological studies have shown an elevated incidence of asthma, allergic symptoms and respiratory infections among people living or working in buildings with moisture and mould problems. Microbial growth is suspected to play a key role, since the severity of microbial contamination and symptoms show a positive correlation, while the removal of contaminated materials relieves the symptoms. However, the cause-and-effect relationship has not been well established, and knowledge of the causative agents is incomplete. The present understanding of indoor microbes relies on culture-based methods. Microbial cultivation and identification are known to provide qualitatively and quantitatively biased results, which is suspected to be one of the reasons behind the often inconsistent findings between objectively measured microbiological attributes and health. In the present study, the indoor microbial communities were assessed using culture-independent, DNA-based methods. Fungal and bacterial diversity was determined by amplifying and sequencing the nucITS and 16S gene regions, respectively. In addition, the cell-equivalent numbers of 69 mould species or groups were determined by quantitative PCR (qPCR). The results from the molecular analyses were compared with results obtained using traditional plate cultivation for fungi. Using DNA-based tools, the indoor microbial diversity was found to be consistently higher and taxonomically wider than the cultivable diversity. The dominant sequence types of both fungi and bacteria were mainly affiliated with well-known microbial species. However, in each building they were accompanied by various rare, uncultivable and unknown species.
In both moisture-damaged and undamaged buildings the dominant fungal sequence phylotypes were affiliated with the classes Dothideomycetes (mould-like filamentous ascomycetes), Agaricomycetes (mushroom- and polypore-like filamentous basidiomycetes), Urediniomycetes (rust-like basidiomycetes) and Tremellomycetes, and with the order Malasseziales (the latter two comprising yeast-like basidiomycetes). The most probable source for the majority of fungal types was the outdoor environment. In contrast, the dominant bacterial phylotypes in both damaged and undamaged buildings were affiliated with human-associated members of the phyla Actinobacteria and Firmicutes. Indications of elevated fungal diversity within potentially moisture-damage-associated fungal groups were recorded in two of the damaged buildings, while one of the buildings was characterized by an abundance of members of the Penicillium chrysogenum and P. commune species complexes. However, due to the small number of samples and the strong normal variation, firm conclusions concerning the effect of moisture damage on species diversity could not be drawn. The fungal communities in dust samples showed seasonal variation, which reflected the seasonal fluctuation of outdoor fungi. Seasonal variation of the bacterial communities was less clear, but to some extent also attributable to outdoor sources. The comparison of methods showed that clone library sequencing was a feasible method for describing the total microbial diversity; it indicated a moderate quantitative correlation between sequencing and qPCR results, and confirmed that culture-based methods give both a qualitative and a quantitative underestimate of microbial diversity in the indoor environment. However, certain important indoor fungi, such as Penicillium spp., were clearly underrepresented in the sequence material, probably due to their physiological and genetic properties.
Species-specific qPCR was a more efficient and sensitive method for detecting and quantifying individual species than sequencing, but in order to exploit the full advantage of the method in building investigations, more information is needed about the microbial species growing on damaged materials. In the present study, a new method was also developed for the enhanced screening of marker gene clone libraries. The suitability of the screening method for different kinds of microbial environments, including biowaste compost material and settled indoor dust, was evaluated. Its usability was found to be restricted to environments that support the growth and subsequent dominance of a small number of microbial species, such as compost material.
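Quantitative PCR, as used above to estimate cell-equivalent numbers, typically converts a measured threshold cycle (Ct) into a quantity via a log-linear standard curve. A minimal sketch of that conversion follows; the slope and intercept values are hypothetical calibration numbers, not those of the study's assays.

```python
def cell_equivalents(ct, slope=-3.32, intercept=38.0):
    """Convert a qPCR threshold cycle (Ct) to a cell-equivalent count
    using a log-linear standard curve: Ct = slope * log10(N) + intercept.
    A slope of about -3.32 corresponds to ~100% amplification efficiency.
    The calibration values here are illustrative only."""
    return 10 ** ((ct - intercept) / slope)

# A lower Ct (earlier detection) implies more template DNA,
# hence a higher cell-equivalent estimate:
assert cell_equivalents(28.0) > cell_equivalents(32.0)
```

In practice the slope and intercept are fitted per assay from a dilution series of a known standard, which is why, as noted above, the method's usefulness in building investigations depends on knowing which species to target.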