Abstract:
This work belongs to the field of computational high-energy physics (HEP). The key methods used in this thesis to meet the challenges raised by the Large Hadron Collider (LHC) era experiments are object-oriented software engineering, Monte Carlo simulation, cluster computing, and artificial neural networks. The first aspect discussed is the development of hadronic cascade models, used for the accurate simulation of medium-energy hadron-nucleus reactions up to 10 GeV. These models are typically needed in hadronic calorimeter studies and in the estimation of radiation backgrounds. Applications outside HEP include the medical field (such as hadron treatment simulations), space science (satellite shielding), and nuclear physics (spallation studies). Validation results are presented for several significant improvements released in the Geant4 simulation toolkit, and the significance of the new models for computing in the LHC era is estimated. In particular, we estimate the ability of the Bertini cascade to simulate the Compact Muon Solenoid (CMS) hadron calorimeter (HCAL). LHC test beam activity has a tightly coupled simulation-to-data-analysis cycle: typically, a Geant4 computer experiment is used to understand test beam measurements. Thus, another aspect of this thesis is a description of studies related to developing new CMS H2 test beam data analysis tools and performing data analysis on the basis of CMS Monte Carlo events. These events have been simulated in detail using Geant4 physics models, a full CMS detector description, and event reconstruction. Using the ROOT data analysis framework, we have developed an offline ANN-based approach to tag b-jets associated with heavy neutral Higgs particles, and we show that this kind of NN methodology can be successfully used to separate the Higgs signal from the background in the CMS experiment.
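The thesis implements its tagger with ROOT's neural-network tooling; purely as an illustration of the underlying idea of NN-based signal/background separation, the following is a minimal single-neuron (logistic) classifier trained on synthetic two-variable samples. All features, values and names here are hypothetical stand-ins, not CMS data or the authors' code.

```python
import math
import random

random.seed(1)

# Synthetic stand-ins for "signal" and "background" events with two
# discriminating variables (hypothetical features, for illustration only).
signal = [(random.gauss(2.0, 0.7), random.gauss(2.0, 0.7)) for _ in range(200)]
background = [(random.gauss(0.0, 0.7), random.gauss(0.0, 0.7)) for _ in range(200)]
data = [(x, 1) for x in signal] + [(x, 0) for x in background]
random.shuffle(data)

def sigmoid(z):
    # numerically stable logistic function
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

# Single-neuron classifier trained by stochastic gradient descent
# on the cross-entropy loss.
w = [0.0, 0.0]
b = 0.0
lr = 0.1
for _ in range(300):
    for (x1, x2), y in data:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = p - y  # gradient of cross-entropy w.r.t. the pre-activation
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

def predict(x1, x2):
    return 1 if sigmoid(w[0] * x1 + w[1] * x2 + b) > 0.5 else 0

accuracy = sum(predict(x1, x2) == y for (x1, x2), y in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

A real tagger would of course use many more input variables and hidden layers, but the separation principle, a learned decision boundary between the two event populations, is the same.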
Abstract:
Atmospheric aerosol particle formation events can be a significant source of tropospheric aerosols and thus influence the radiative properties and cloud cover of the atmosphere. This thesis investigates the analysis of aerosol size distribution data containing particle formation events, describes the methodology of the analysis and presents time series data measured inside the boreal forest. This thesis presents a methodology to identify regional-scale particle formation and to derive basic characteristics such as growth and formation rates. The methodology can also be used to estimate the concentration and source rates of the vapour causing particle growth. Particle formation was found to occur frequently in the boreal forest area over regions extending up to hundreds of kilometers. Particle formation rates of boreal events were found to be of the order of 0.01-5 cm^-3 s^-1, while the nucleation rates of 1 nm particles can be a few orders of magnitude higher. The growth rates of particles larger than 3 nm were of the order of a few nanometers per hour. The vapour concentration needed to sustain such growth is of the order of 10^7--10^8 cm^-3, approximately one order of magnitude higher than sulphuric acid concentrations found in the atmosphere. Therefore, one has to assume that other vapours, such as organics, have a key role in growing newborn particles to sizes where they can become climatically active. Formation event occurrence shows a clear annual variation with peaks in summer and autumn. This variation is similar to the variation exhibited by the obtained particle formation rates. The growth rate, on the other hand, reaches its highest values during summer. This difference in annual behavior, and the fact that no coupling between the growth and formation processes could be identified, suggest that these processes might be distinct, and that both are needed for a particle formation burst to be observed.
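The order-of-magnitude link between observed growth rate and required vapour concentration quoted above can be illustrated with a back-of-the-envelope calculation. The proportionality constant used below (about 1.4e7 molecules cm^-3 per nm/h of growth, a sulphuric-acid-like value under condensation-limited growth) is an assumption for illustration, not a number from the thesis.

```python
# Back-of-the-envelope: vapour concentration needed to sustain an observed
# particle growth rate, assuming condensation-limited growth and a
# sulphuric-acid-like proportionality constant (assumed, illustrative).
C_PER_GR = 1.4e7  # molecules cm^-3 per (nm/h) of growth

def vapour_concentration(growth_rate_nm_per_h):
    """Vapour concentration (cm^-3) implied by a growth rate (nm/h)."""
    return C_PER_GR * growth_rate_nm_per_h

# "a few nanometers per hour", as in the text
for gr in (1.0, 3.0, 5.0):
    print(f"GR = {gr} nm/h  ->  C ~ {vapour_concentration(gr):.1e} cm^-3")
```

With growth rates of a few nm/h this yields concentrations in the 10^7--10^8 cm^-3 range, consistent with the figures stated in the abstract.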
Abstract:
The first quarter of the 20th century witnessed a rebirth of cosmology, the study of our Universe, as a field of scientific research with testable theoretical predictions. The amount of available cosmological data grew slowly from a few galaxy redshift measurements, rotation curves and local light element abundances to the first detection of the cosmic microwave background (CMB) in 1965. By the turn of the century the amount of data had exploded, incorporating new, exciting cosmological observables such as lensing, Lyman alpha forests, type Ia supernovae, baryon acoustic oscillations and Sunyaev-Zeldovich regions, to name a few. -- The CMB, the ubiquitous afterglow of the Big Bang, carries with it a wealth of cosmological information. Unfortunately, that information, delicate intensity variations, turned out to be hard to extract from the overall temperature. After the first detection, it took nearly 30 years before the first evidence of fluctuations in the microwave background was presented. At present, high precision cosmology is solidly based on precise measurements of the CMB anisotropy, making it possible to pinpoint cosmological parameters to one-in-a-hundred level precision. The progress has made it possible to build and test models of the Universe that differ in the way the cosmos evolved during some fraction of the first second after the Big Bang. -- This thesis is concerned with high precision CMB observations. It presents three selected topics along a CMB experiment analysis pipeline. Map-making and residual noise estimation are studied using an approach called destriping. The studied approximate methods are invaluable for the large datasets of any modern CMB experiment and will undoubtedly become even more so when the next generation of experiments reach the operational stage. -- We begin with a brief overview of cosmological observations and describe the general relativistic perturbation theory.
Next we discuss the map-making problem of a CMB experiment and the characterization of residual noise present in the maps. Finally, the use of modern cosmological data is presented in the study of an extended cosmological model with correlated isocurvature fluctuations. Currently available data are shown to indicate that future experiments are certainly needed to provide more information on these extra degrees of freedom. Any solid evidence of the isocurvature modes would have a considerable impact due to their power in model selection.
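The destriping idea mentioned above can be sketched very simply: model the time-ordered data as a sky map sampled along the pointing plus a constant offset per noise baseline, and solve for the offsets. The toy below (an illustration of the principle, not the thesis pipeline) alternates between binning a naive map and re-fitting the per-baseline offsets against it; all sizes and noise levels are invented for the demo.

```python
import math
import random

random.seed(0)

# Toy model: time-ordered data d_t = sky[p_t] + offset[baseline(t)] + noise.
npix, nbase, blen = 8, 6, 50
sky = [math.sin(p) for p in range(npix)]
pointing = [random.randrange(npix) for _ in range(nbase * blen)]
true_off = [random.uniform(-2.0, 2.0) for _ in range(nbase)]
tod = [sky[pointing[t]] + true_off[t // blen] + random.gauss(0.0, 0.05)
       for t in range(nbase * blen)]

# Destriping by alternating map-binning and offset estimation.
offsets = [0.0] * nbase
for _ in range(50):
    # bin a naive map from offset-corrected data
    m = [0.0] * npix
    hits = [0] * npix
    for t, d in enumerate(tod):
        m[pointing[t]] += d - offsets[t // blen]
        hits[pointing[t]] += 1
    m = [s / h if h else 0.0 for s, h in zip(m, hits)]
    # re-estimate each baseline offset against the current map
    for b in range(nbase):
        seg = range(b * blen, (b + 1) * blen)
        offsets[b] = sum(tod[t] - m[pointing[t]] for t in seg) / blen

# Offsets are only determined up to a common constant (degenerate with the
# map monopole), so compare offset differences to the truth.
rel_err = max(abs((offsets[b] - offsets[0]) - (true_off[b] - true_off[0]))
              for b in range(nbase))
print(f"max error in offset differences: {rel_err:.3f}")
```

Real destripers solve the equivalent least-squares system directly and handle correlated noise within baselines, but the degeneracy structure and the map/offset separation are as shown.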
Abstract:
Accelerator mass spectrometry (AMS) is an ultrasensitive technique for measuring the concentration of a single isotope. The electric and magnetic fields of an electrostatic accelerator system are used to filter out other isotopes from the ion beam. The high velocity means that molecules can be destroyed and removed from the measurement background. As a result, concentrations down to one atom in 10^16 atoms are measurable. This thesis describes the construction of the new AMS system in the Accelerator Laboratory of the University of Helsinki. The system is described in detail along with the relevant ion optics. System performance and some of the 14C measurements done with the system are described. In the second part of the thesis, a novel statistical model for the analysis of AMS data is presented. Bayesian methods are used in order to make the best use of the available information. In the new model, instrumental drift is modelled with a continuous first-order autoregressive process. This enables rigorous normalization to standards measured at different times. The Poisson statistical nature of a 14C measurement is also taken into account properly, so that uncertainty estimates are much more stable. It is shown that, overall, the new model improves both the accuracy and the precision of AMS measurements. In particular, the results can be improved for samples with very low 14C concentrations or for samples measured only a few times.
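The measurement model described above, instrumental drift following a first-order autoregressive (AR(1)) process and counts that are Poisson-distributed, can be sketched as a small forward simulation. This is an illustration of the generative model only, not the thesis's Bayesian inference code; the persistence, innovation scale and base count rate are assumed values.

```python
import math
import random

random.seed(42)

# Assumed illustrative parameters: AR(1) persistence and innovation scale
# of the instrumental drift, and a mean count rate per measurement.
phi, sigma = 0.9, 0.05
base_rate = 100.0

def poisson(lam):
    """Draw a Poisson variate (Knuth's method, fine for modest rates)."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= l:
            return k
        k += 1

# Forward-simulate: log-rate drifts as AR(1), counts are Poisson.
drift, counts = 0.0, []
for _ in range(200):
    drift = phi * drift + random.gauss(0.0, sigma)  # AR(1) step
    counts.append(poisson(base_rate * math.exp(drift)))

print(f"mean counts: {sum(counts) / len(counts):.1f}")
```

Inference then runs in the opposite direction: given the observed counts and the standards measured at various times, the posterior over the drift trajectory lets measurements be normalized rigorously even when standards and samples were not measured simultaneously.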
Abstract:
A torsional MEMS varactor with wide dynamic range, low actuation voltage and isolation between actuation voltage and signal voltage was proposed in C. Venkatesh et al. (2005). In this paper we address the effects of pull-in, residual stress and continuous cycling on the performance of the torsional MEMS varactor.
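Pull-in, the electrostatic instability mentioned above, is easiest to quantify in the classic parallel-plate approximation, where the instability occurs once the movable electrode has traveled one third of the gap (torsional devices have an analogous instability at a comparable fraction of the tilt range). The device values below are hypothetical, chosen only to show the formula at work.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def pull_in_voltage(k, gap, area):
    """Parallel-plate electrostatic pull-in voltage.

    Pull-in occurs when the plate has traveled one third of the gap:
    V_pi = sqrt(8 * k * d^3 / (27 * eps0 * A)).
    """
    return math.sqrt(8.0 * k * gap**3 / (27.0 * EPS0 * area))

# Hypothetical device values, for illustration only
k = 10.0            # spring constant, N/m
gap = 2e-6          # 2 um electrode gap
area = (100e-6)**2  # 100 um x 100 um electrode

print(f"pull-in voltage: {pull_in_voltage(k, gap, area):.1f} V")
```

Keeping the actuation voltage safely below this limit is what bounds the usable (stable) part of the tuning range, which is why isolating the actuation electrodes from the signal path, as in the referenced design, helps widen the dynamic range.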
Abstract:
This study offers a reconstruction and critical evaluation of globalization theory, a perspective that has been central to sociology and cultural studies in recent decades, from the viewpoint of media and communications. As the study shows, sociological and cultural globalization theorists rely heavily on arguments concerning media and communications, especially the so-called new information and communication technologies, in the construction of their frameworks. Together with deepening the understanding of globalization theory, the study provides new critical knowledge of the problematic consequences that follow from such strong investment in media and communications in contemporary theory. The book is divided into four parts. The first part presents the research problem, the approach and the theoretical contexts of the study. Following the introduction in Chapter 1, I identify the core elements of globalization theory in Chapter 2. At the heart of globalization theory is the claim that recent decades have witnessed massive changes in the spatio-temporal constitution of society, caused by new media and communications in particular, and that these changes necessitate the rethinking of the foundations of social theory as a whole. Chapter 3 introduces three paradigms of media research (the political economy of media, cultural studies and medium theory), the discussion of which will make it easier to understand the key issues and controversies that emerge in academic globalization theorists' treatment of media and communications. The next two parts offer a close reading of four theorists whose works I use as entry points into academic debates on globalization. I argue that we can make sense of mainstream positions on globalization by dividing them into two paradigms: on the one hand, media-technological explanations of globalization and, on the other, cultural globalization theory.
As examples of the former, I discuss the works of Manuel Castells (Chapter 4) and Scott Lash (Chapter 5). I maintain that their analyses of globalization processes are overly media-centric and result in an unhistorical and uncritical understanding of social power in an era of capitalist globalization. A related evaluation of the second paradigm (cultural globalization theory), as exemplified by Arjun Appadurai and John Tomlinson, is presented in Chapter 6. I argue that, due to their rejection of the importance of nation states and the notion of cultural imperialism for cultural analysis, and their replacement with a framework of media-generated deterritorializations and flows, these theorists underplay the importance of the neoliberalization of cultures throughout the world. The fourth part (Chapter 7) presents a central research finding of this study, namely that the media-centrism of globalization theory can be understood in the context of the emergence of neoliberalism. I find it problematic that, at the same time as capitalist dynamics have been strengthened in social and cultural life, advocates of globalization theory have directed attention to media-technological changes and their sweeping socio-cultural consequences, instead of analyzing the powerful material forces that shape society and culture. I further argue that this shift serves not only analytical but also utopian functions, that is, the longing for a better world in times when such longing is otherwise considered impracticable.
Abstract:
This study analyses personal relationships, linking research to sociological theory on the questions of the social bond and of the self as social. From the viewpoint of disruptive life events and experiences, such as loss, divorce and illness, it aims at understanding how selves are bound to their significant others as those specific people 'close or otherwise important' to them. Who forms the configurations of significant others? How do different bonds respond to disruptions and how do relational processes unfold? How is the embeddedness of selves manifested in the processes of bonding, on the one hand, and in the relational formation of the self, on the other? The bonds are analyzed from an anti-categorical viewpoint based on personal citations of significance, as opposed to given relationship categories such as 'family' or 'friendship' – the two kinds of relationships that in fact are most frequently significant. The study draws on analysis of the personal narratives of 37 Finnish women and men (in all 80 interviews) and their entire configurations of those specific people whom they cite as 'close or otherwise important'. The analysis stresses subjective experiences, while also investigating the actualized relational processes and configurations of all personal relationships with certain relationship histories embedded in micro-level structures. The research is based on four empirical sub-studies of personal relationships and a summary discussing the questions of the self and the social bond. The discussion draws on G. H. Mead, C. Cooley, N. Elias, T. Scheff, G. Simmel and the contributors to 'relational sociology'. The sub-studies analyse bonds to others from the viewpoint of biographical disruption and re-configuration of significant others, estranged family bonds, peer support and the formation of the most intimate relationships into exclusive and inclusive configurations.
All analyses examine the dialectics of the social and the personal, asking how different structuring mechanisms and personal experiences and negotiations together contribute to the unfolding of the bonds. The summary elaborates on personal relationships as social bonds embedded in wider webs of interdependent people and social settings that are laden with cultural expectations. Regarding the question of the relational self, the study proposes both bonding and individuality as significant. They are seen as interdependent phases of the relationality of the self. Bonding anchors the self to its significant relationships, in which individuality is manifested, for example, in contrasting and differentiating dynamics, but also in active attempts to connect with others. Individuality is not a fixed quality of the self, but a fluid and interdependent phase of the relational self. More specifically, it appears in three forms in the flux of relational processes: as a sense of unique self (via cultivation of subjective experiences), as agency and as (a search for) relative autonomy. The study includes an epilogue addressing the ambivalence between the social expectation of individuality in society and the bonded reality of selves.
Abstract:
The purpose of this study is to examine how transformation is defining feminist bioethics and to determine the nature of this transformation. Behind the quest for transformation is core feminism and its political implications, namely, that women and other marginalized groups have been given unequal consideration in society and the sciences and that this situation is unacceptable and should be remedied. The goal of the dissertation is to determine how feminist bioethicists integrate the transformation into their respective fields and how they apply the potential of feminism to bioethical theories and practice. On a theoretical level, feminist bioethicists wish to reveal how current ways of knowing are based on inequality. Feminists pay particular attention to communal and political contexts and to the power relations endorsed by each community. In addition, feminist bioethicists endorse relational ethics, a relational account of the self in which the interconnectedness of persons is important. On the conceptual level, feminist bioethicists work with the beliefs, concepts, and practices that give us our world. As an example, I examine how feminist bioethicists have criticized and redefined the concept of autonomy. Feminist bioethicists emphasize relational autonomy, which is based on the conviction that social relationships shape moral identities and values. On the practical level, I discuss stem cell research as a test case for feminist bioethics and its ability to employ its methodologies. Analyzing these perspectives allowed me, first, to compare non-feminist and feminist accounts of stem cell ethics and, second, to analyze feminist perspectives on the novel biotechnology. Along with offering a critical evaluation of the stem cell debate, the study shows that sustainable stem cell policies should be grounded on empirical knowledge about how donors perceive stem cell research and the donation process.
The study indicates that feminist bioethics should develop the use of empirical bioethics, which takes the nature of ethics seriously: ethical decisions are provisional and open for further consideration. In addition, the study shows that there is another area of development in feminist bioethics: the understanding of (moral) agency. I argue that agency should be understood to mean that actions create desires.
Abstract:
Despite the acknowledged importance of strategic planning in business and other organizations, there are few studies focusing on strategy texts and the related processes of their production and consumption. In this paper, we attempt to partially fill this research gap by examining the institutionalized aspects of strategy discourse: what strategy is as a genre. Combining textual analysis and conversation analysis, the article focuses on the official strategy of the City of Lahti in Finland. Our analysis shows how specific communicative purposes and lexico-grammatical features characterize the genre of strategy, and how the actual negotiations over the strategy text involve particular kinds of intersubjectivity and intertextuality.
Angel Investing in Finland: An Analysis Based on Agency Theory and the Incomplete Contracting Theory
Abstract:
Wealthy individuals - business angels who invest a share of their net worth in entrepreneurial ventures - form an essential part of an informal venture capital market that can secure funding for entrepreneurial ventures. In Finland, business angels represent an untapped pool of capital that can contribute to fostering entrepreneurial development. In addition, business angels can bridge knowledge gaps in new business ventures by making their human capital available. This study has two objectives. The first is to gain an understanding of the characteristics and investment behaviour of Finnish business angels. The strongest focus here is on due diligence procedures and involvement post investment. The second objective is to assess whether agency theory and the incomplete contracting theory are useful theoretical lenses in the arena of business angels. To achieve the second objective, this study investigates i) how risk is mitigated in the investment process, ii) how uncertainty influences the comprehensiveness of due diligence and iii) how control is allocated post investment. Research hypotheses are derived from assumptions underlying agency theory and the incomplete contracting theory. The data for this study comprise interviews with 53 business angels; in terms of sample size, this is the largest study of Finnish business angels to date. The research hypotheses are tested using regression analysis. This study suggests that the Finnish informal venture capital market appears to be comprised of a limited number of business angels whose style of investing much resembles their formal counterparts'. Much focus is placed on managing risks prior to making the investment, through strong selectiveness and relatively comprehensive due diligence. The involvement is rarely on a day-to-day basis, and many business angels seem to see board membership as a more suitable alternative than involvement in the operations of an entrepreneurial venture.
The uncertainty involved does not seem to drive an increase in due diligence. On the contrary, it would appear that due diligence is more rigorous in safer later-stage investments and when the business angels have considerable previous experience as investors. Finnish business angels' involvement post investment is best explained by their degree of ownership in the entrepreneurial venture. It seems that when investors feel they are sufficiently rewarded, in terms of an adequate equity stake, they are willing to involve themselves actively in their investments. The lack of support for a relationship between increased uncertainty and the comprehensiveness of due diligence may partly be explained by an increasing trend towards portfolio diversification. This is triggered by a taxation system that favours investments through investment companies rather than direct investments. Many business angels appear to have replaced a specialization strategy that builds on reducing uncertainty with a diversification strategy that builds on reducing firm-specific (idiosyncratic) risk by holding shares in ventures whose returns are not expected to exhibit a strong positive correlation.
Abstract:
The objective of this paper is to improve option risk monitoring by examining the information content of implied volatility and by introducing the calculation of a single-sum expected risk exposure similar to Value-at-Risk. The figure is calculated in two steps. First, the value of a portfolio of options is estimated for a number of different market scenarios; second, the information content of the estimated scenarios is summarized into a single-sum risk measure. This involves the use of probability theory and return distributions, which confronts the user with the problem of non-normality in the return distribution of the underlying asset. Here the hyperbolic distribution is used as one alternative for dealing with heavy tails. Results indicate that the information content of implied volatility is useful when predicting future large returns in the underlying asset. Further, the hyperbolic distribution provides a good fit to historical returns, enabling a more accurate definition of statistical intervals and extreme events.
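The two-step procedure described above can be sketched as a small Monte Carlo exercise: revalue an option under simulated market scenarios, then compress the scenario values into one VaR-style number. This illustration uses Gaussian returns and a single Black-Scholes call for brevity (the paper's point is precisely that a hyperbolic distribution captures heavy tails better); all parameter values are hypothetical.

```python
import math
import random

random.seed(7)

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(s, k, t, r, vol):
    """Black-Scholes value of a European call."""
    d1 = (math.log(s / k) + (r + 0.5 * vol**2) * t) / (vol * math.sqrt(t))
    d2 = d1 - vol * math.sqrt(t)
    return s * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

# Hypothetical position and market parameters
s0, k, t, r, vol = 100.0, 100.0, 0.5, 0.02, 0.25
h = 10 / 252  # 10-trading-day risk horizon
v0 = bs_call(s0, k, t, r, vol)

# Step 1: value the option under simulated scenarios for the horizon.
pnl = []
for _ in range(20000):
    z = random.gauss(0.0, 1.0)
    s_h = s0 * math.exp((r - 0.5 * vol**2) * h + vol * math.sqrt(h) * z)
    pnl.append(bs_call(s_h, k, t - h, r, vol) - v0)

# Step 2: summarize the scenario P&L into a single-sum risk figure,
# here the 5% quantile of simulated losses (a VaR-style number).
pnl.sort()
var_95 = -pnl[int(0.05 * len(pnl))]
print(f"95% 10-day risk figure: {var_95:.2f}")
```

Swapping the Gaussian scenario generator for a heavy-tailed one (such as the hyperbolic distribution) changes only step 1; the single-sum summary in step 2 is unaffected, which is what makes the two-step decomposition convenient.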
Abstract:
The objective of this paper is to suggest a method that accounts for the impact of volatility smile dynamics when performing scenario analysis for a portfolio consisting of vanilla options. As the volatility smile is documented to change at least with the level of implied at-the-money volatility, a suitable model is here included in the calculation process of the simulated market scenarios. By constructing simple portfolios of index options and comparing the ex ante risk exposure measured using different pricing methods to realized market values ex post, the improvement from incorporating the model is monitored. The examples analyzed in the study generate results that statistically support that the most accurate scenarios are those calculated using the model accounting for the dynamics of the smile. Thus, we show that the differences emanating from the volatility smile are apparent and should be accounted for, and that the methodology presented herein is one suitable alternative for doing so.
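The idea of letting the smile depend on the at-the-money level can be sketched with a toy parameterization. The functional form and coefficients below are assumptions for illustration only; the paper's point is merely that scenario analysis should re-evaluate the whole smile when the ATM level moves, rather than keeping per-strike volatilities fixed (the "sticky-strike" shortcut).

```python
import math

def smile_vol(atm_vol, strike, spot):
    """Toy smile: quadratic in log-moneyness, with slope and curvature
    tied to the ATM level (assumed illustrative coefficients)."""
    m = math.log(strike / spot)   # log-moneyness
    skew = -0.3 * atm_vol         # smile slope scales with ATM level
    curvature = 2.0 * atm_vol     # smile curvature scales with ATM level
    return atm_vol + skew * m + curvature * m * m

# A scenario that raises the ATM level moves the entire smile, so the
# implied volatility used to reprice an out-of-the-money option changes too.
base = smile_vol(0.20, 90.0, 100.0)      # 90-strike vol in the base scenario
stressed = smile_vol(0.30, 90.0, 100.0)  # same strike after an ATM vol shock
print(f"90-strike vol: base {base:.4f}, stressed {stressed:.4f}")
```

In the scenario engine, each simulated market state supplies a new ATM level, the smile model produces the per-strike volatilities for that state, and the portfolio is repriced with those; the comparison in the paper is between this approach and pricing with frozen smiles.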
Abstract:
Extraction of text areas from document images with complex content and layout is a challenging task. A few texture-based techniques have already been proposed for the extraction of such text blocks. Most of these techniques are computationally expensive and hence far from realizable in real-time implementations. In this work, we propose a modification to two of the existing texture-based techniques to reduce the computation. This is accomplished with Harris corner detectors. The efficiency of these two texture-based algorithms, one based on Gabor filters and the other on log-polar wavelet signatures, is compared. A combination of Gabor-feature-based texture classification performed on a smaller set of Harris-corner-detected points is observed to deliver both accuracy and efficiency.
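The Harris step that prunes the candidate points can be illustrated on a tiny synthetic image. The sketch below computes the standard Harris response (structure tensor of image gradients over a local window, R = det - k * trace^2) in pure Python; it illustrates only the corner-detection stage, which the paper then pairs with Gabor or log-polar wavelet texture features at the surviving points.

```python
# Toy Harris corner response on a 12x12 image containing one bright square.
N = 12
img = [[1.0 if x >= 6 and y >= 6 else 0.0 for x in range(N)] for y in range(N)]

def harris_response(img, x, y, k=0.04):
    """Harris response from central-difference gradients, summed over a
    3x3 window: R = det(M) - k * trace(M)^2 for the structure tensor M."""
    sxx = syy = sxy = 0.0
    for j in range(y - 1, y + 2):
        for i in range(x - 1, x + 2):
            ix = (img[j][i + 1] - img[j][i - 1]) / 2.0
            iy = (img[j + 1][i] - img[j - 1][i]) / 2.0
            sxx += ix * ix
            syy += iy * iy
            sxy += ix * iy
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace

corner = harris_response(img, 6, 6)  # corner of the bright square: R > 0
edge = harris_response(img, 6, 9)    # point on a straight edge: R < 0
flat = harris_response(img, 2, 2)    # uniform region: R = 0
print(corner > 0, edge < 0, flat == 0.0)
```

Only points with a large positive response survive, which is exactly the property exploited in the paper: text regions are dense in corner-like structure, so evaluating the expensive texture features only at Harris points cuts the computation without losing the text blocks.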
Abstract:
This paper investigates the effect of income inequality on health status. A model of health status was specified in which the main variables were income level, income inequality, the level of savings and the level of education. The model was estimated using a panel data set for 44 countries covering six time periods. The results indicate that income inequality (measured by the Gini coefficient) has a significant effect on health status when we control for the levels of income, savings and education. The relationship is consistent regardless of the specification of health status and income. Thus, the study results provide some empirical support for the income inequality hypothesis.
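The Gini coefficient used above as the inequality measure has a compact closed form from a sorted income sample; the snippet below (illustrative incomes, not the study's data) computes it, where 0 indicates perfect equality and values approaching 1 indicate maximal inequality.

```python
def gini(incomes):
    """Gini coefficient from individual incomes.

    Using the sorted sample x_1 <= ... <= x_n:
    G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n,  i = 1..n
    """
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

print(gini([1, 1, 1, 1]))             # perfectly equal incomes -> 0.0
print(round(gini([0, 0, 0, 10]), 3))  # highly concentrated incomes -> 0.75
```

In the panel regressions described above, one such coefficient per country and time period enters as the inequality regressor alongside income, savings and education levels.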