43 results for Zero-One Matrices
Abstract:
We present the results of a search for a massive color-octet vector particle (e.g., a massive gluon) decaying to a pair of top quarks in proton-antiproton collisions at a center-of-mass energy of 1.96 TeV. This search is based on 1.9 fb$^{-1}$ of data collected with the CDF detector during Run II of the Tevatron at Fermilab. We study $t\bar{t}$ events in the lepton+jets channel with at least one $b$-tagged jet. A massive gluon is characterized by its mass, decay width, and the strength of its coupling to quarks. These parameters are determined from the observed invariant mass distribution of top quark pairs. We set limits on the massive gluon coupling strength for masses between 400 and 800 GeV$/c^2$ and width-to-mass ratios between 0.05 and 0.50. The coupling strength of the hypothetical massive gluon to quarks is consistent with zero within the explored parameter space.
Abstract:
The starting point of the European individualistic copyright ideology is that an individual author creates a work and controls its use. This paper argues, however, that it is (and has always been) impossible to control the use of works after their publication. This has also been acknowledged by the legislator, who has introduced collective licensing agreements precisely because of this impossibility. Since rigorous control over the use of works is impossible, this paper, "Rough Justice or Zero Tolerance - Reassessing the Nature of Copyright in Light of Collective Licensing", examines what the reality of copyright is actually about. Finding alternative (and hopefully more "true") ways to understand copyright helps us to create alternative solutions to the problems we face, for example, in the use of content in the online environment. The paper claims that copyright is actually about defining negotiation points for different stakeholders, and that nothing in the reality of copyright prevents us from defining, for example, a new negotiation point where representatives of consumers would meet representatives of right holders to agree on the terms of use for certain content types in the online environment.
Abstract:
Despite thirty years of research in interorganizational networks and project business within the industrial networks approach and relationship marketing, collective capability of networks of business and other interorganizational actors has not been explicitly conceptualized and studied within the above-named approaches. This is despite the fact that the two approaches maintain that networking is one of the core strategies for the long-term survival of market actors. Recently, many scholars within the above-named approaches have emphasized that the survival of market actors is based on the strength of their networks and that inter-firm competition is being replaced by inter-network competition. Furthermore, project business is characterized by the building of goal-oriented, temporary networks whose aims, structures, and procedures are clarified and that are governed by processes of interaction as well as recurrent contracts. This study develops frameworks for studying and analysing collective network capability, i.e. collective capability created for the network of firms. The concept is first justified and positioned within the industrial networks, project business, and relationship marketing schools. An eclectic source of conceptual input is based on four major approaches to interorganizational business relationships. The study uses qualitative research and analysis, and the case report analyses the empirical phenomenon using a large number of qualitative techniques: tables, diagrams, network models, matrices etc. The study shows the high level of uniqueness and complexity of international project business. While perceived psychic distance between the parties may be small due to previous project experiences and the benefit of existing relationships, a varied number of critical events develop due to the economic and local context of the recipient country as well as the coordination demands of the large number of involved actors. 
The study shows that the successful creation of collective network capability led to the success of the network for the studied project. The processes and structures for creating collective network capability are encapsulated in a model of governance factors for interorganizational networks. The theoretical and management implications are summarized in seven propositions. The core implication is that project business success in unique and complex environments is achieved by accessing the capabilities of a network of actors, and project management in such environments should be built on both contractual and cooperative procedures with local recipient country parties.
Abstract:
The purpose of this study was to deepen the understanding of market segmentation theory by studying the evolution of the concept and by identifying the antecedents and consequences of the theory. The research method was influenced by content analysis and meta-analysis. The evolution of market segmentation theory was studied as a reflection of the evolution of marketing theory. According to this study, the theory of market segmentation has its roots in microeconomics and has been influenced by different disciplines, such as motivation research and buyer behaviour theory. Furthermore, this study suggests that the evolution of market segmentation theory can be divided into four major eras: the era of foundations, development and blossoming, stillness and stagnation, and the era of re-emergence. Market segmentation theory emerged in the mid-1950s and flourished between the mid-1950s and the late 1970s. During the 1980s the theory lost the interest of the scientific community and no significant contributions were made. Now, towards the dawn of the new millennium, new approaches have emerged and market segmentation has gained new attention.
Abstract:
We study the energy current in a model of heat conduction first considered in detail by Casher and Lebowitz. The model consists of a one-dimensional disordered harmonic chain of $n$ i.i.d. random masses, connected to their nearest neighbors via identical springs, and coupled at the boundaries to Langevin heat baths with respective temperatures $T_1$ and $T_n$. Let $EJ_n$ be the steady-state energy current across the chain, averaged over the masses. We prove that $EJ_n \sim (T_1 - T_n)\,n^{-3/2}$ in the limit $n \to \infty$, as has been conjectured by various authors over time. The proof relies on a new explicit representation for the elements of the product of the associated transfer matrices.
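The transfer-matrix product mentioned above can be sketched numerically. A minimal illustration, assuming the standard unit-spring convention T_j = [[2 - m_j ω², -1], [1, 0]]; the paper's exact matrix representation may differ:

```python
import numpy as np

def transfer_product(masses, omega):
    """Product of the 2x2 transfer matrices for a harmonic chain with
    unit spring constants, evaluated at frequency omega.
    Convention assumed here: T_j = [[2 - m_j*omega**2, -1], [1, 0]]."""
    P = np.eye(2)
    for m in masses:
        T = np.array([[2.0 - m * omega**2, -1.0],
                      [1.0, 0.0]])
        P = T @ P
    return P

rng = np.random.default_rng(0)
masses = rng.uniform(0.5, 1.5, size=100)  # i.i.d. random masses
P = transfer_product(masses, omega=0.3)
# Each T_j has determinant 1, so the full product is unimodular.
print(np.linalg.det(P))
```

Unimodularity of the product is a quick sanity check on any chosen convention; the scaling result itself concerns averages of such products over the mass disorder.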
Abstract:
Modern-day economics is increasingly inclined to believe that institutions matter for growth, an argument that has been further reinforced by the recent economic crisis. There is also a wide consensus on what these growth-promoting institutions should look like, and countries are periodically ranked according to how their institutional structure compares with the best-practice institutions, mostly in place in the developed world. In this paper, it is argued that "non-desirable" or "second-best" institutions can be beneficial for fostering investment and thus provide a starting point for sustained growth, and that what matters is the appropriateness of institutions to the economy's distance to the frontier, or current phase of development. Anecdotal evidence from Japan and South Korea is used as a motivation for studying the subject, and a model is presented to describe this phenomenon. In the model, the rigidity or non-rigidity of institutions is described by entrepreneurial selection. It is assumed that entrepreneurs are the ones taking part in the imitation and innovation of technologies, and that decisions on whether or not their projects are refinanced come from capitalists. The capitalists in turn have no entrepreneurial skills and act merely as financers of projects. The model has two periods and two kinds of entrepreneurs: those with high skills and those with low skills. The society's choice between an imitation-based and an innovation-based strategy is modeled as the trade-off between refinancing a low-skill entrepreneur and investing in the selection of entrepreneurs, which results in a larger fraction of high-skill entrepreneurs with the ability to innovate but less total investment. Finally, a real-world example from India is presented as an initial attempt to test the theory. The data from the example is not included in this paper.
It is noted that the model may lack explanatory power due to difficulties in testing its predictions, but that this should not be seen as a reason to disregard the theory; the solution might lie in developing better tools, not just better theories. The conclusion presented is that institutions do matter. There is no one-size-fits-all solution when it comes to institutional arrangements in different countries, and developing countries should be given space to develop their own institutional structures that cater to their specific needs.
Abstract:
This study deals with how ethnic minorities and immigrants are portrayed in the Finnish print media. The study also asks how media users of various ethnocultural backgrounds make sense of these mediated stories. A more general objective is to elucidate negotiations of belonging and positioning practices in an increasingly complex society. The empirical part of the study is based on content analysis and qualitative close reading of 1,782 articles in five newspapers (Hufvudstadsbladet, Vasabladet, Helsingin Sanomat, Iltalehti and Ilta-Sanomat) during various research periods between 1999 and 2007. Four case studies on print media content are followed up by a focus group study involving 33 newspaper readers of Bosnian, Somali, Russian, and 'native' Finnish backgrounds. The study draws on several academic and intellectual traditions, mainly media and communication studies, sociology, and social psychology. The main theoretical framework employed is positioning theory, as developed by Rom Harré and others. Building on this perspective, situational self-positioning, positioning by others, and media positioning are seen as central practices in the negotiation of belonging. In support of contemporary developments in the social sciences, some of these negotiations are seen as occurring in a network type of communicative space. In this space, the media form one of the most powerful institutions in constructing, distributing and legitimising values and ideas of who belongs to 'us', and who does not. The notion of positioning always involves an exclusionary potential. This thesis joins scholars who assert that in order to understand inclusionary and exclusionary mechanisms, the theoretical starting point must be a recognition of a decent and non-humiliating society.
When key insights are distilled from the five empirical cases and related to the main theories, one of the major arguments put forward is that the media were first and foremost concerned with a minority actor's rightful or unlawful belonging to the Finnish welfare system. However, in some cases persistent stereotypes concerning some immigrant groups' motivation to work, pay taxes and therefore contribute are so strong that a general idea of individualism is forgotten in favour of racialised and stagnated views. Discussants of immigrant background also claim that the positions provided for minority actors in the media are not easy to identify with; categories are too narrow, journalists are biased, and the reporting is simplistic and carries a labelling potential. Hence, although the will for the communicative space to be more diverse and inclusive exists — and has also in many cases been articulated in charters, acts and codes — the positioning of ethnic minorities and immigrants differs significantly from the ideal.
Abstract:
The Earth's climate is a highly dynamic and complex system in which atmospheric aerosols have been increasingly recognized to play a key role. Aerosol particles affect the climate through a multitude of processes, directly by absorbing and reflecting radiation and indirectly by changing the properties of clouds. Because of the complexity, quantification of the effects of aerosols continues to be a highly uncertain science. Better understanding of the effects of aerosols requires more information on aerosol chemistry. Before the determination of aerosol chemical composition by the various available analytical techniques, aerosol particles must be reliably sampled and prepared. Indeed, sampling is one of the most challenging steps in aerosol studies, since all available sampling techniques harbor drawbacks. In this study, novel methodologies were developed for sampling and determination of the chemical composition of atmospheric aerosols. In the particle-into-liquid sampler (PILS), aerosol particles grow in saturated water vapor with further impaction and dissolution in liquid water. Once in water, the aerosol sample can then be transported and analyzed by various off-line or on-line techniques. In this study, PILS was modified and the sampling procedure was optimized to obtain less altered aerosol samples with good time resolution. A combination of denuders with different coatings was tested to adsorb gas phase compounds before PILS. Mixtures of water with alcohols were introduced to increase the solubility of aerosols. Minimum sampling time required was determined by collecting samples off-line every hour and proceeding with liquid-liquid extraction (LLE) and analysis by gas chromatography-mass spectrometry (GC-MS). The laboriousness of LLE followed by GC-MS analysis next prompted an evaluation of solid-phase extraction (SPE) for the extraction of aldehydes and acids in aerosol samples. These two compound groups are thought to be key for aerosol growth.
Octadecylsilica, hydrophilic-lipophilic balance (HLB), and mixed phase anion exchange (MAX) were tested as extraction materials. MAX proved to be efficient for acids, but no tested material offered sufficient adsorption for aldehydes. Thus, PILS samples were extracted only with MAX to guarantee good results for organic acids determined by high-performance liquid chromatography-mass spectrometry (HPLC-MS). On-line coupling of SPE with HPLC-MS is relatively easy, and here on-line coupling of PILS with HPLC-MS through the SPE trap produced some interesting data on relevant acids in atmospheric aerosol samples. A completely different approach to aerosol sampling, namely, differential mobility analyzer (DMA)-assisted filter sampling, was employed in this study to provide information about the size-dependent chemical composition of aerosols and understanding of the processes driving aerosol growth from nano-size clusters to climatically relevant particles (>40 nm). The DMA was set to sample particles with diameters of 50, 40, and 30 nm, and aerosols were collected on Teflon or quartz fiber filters. To clarify the gas-phase contribution, zero gas-phase samples were collected by switching off the DMA in alternating 15-minute intervals. Gas-phase compounds were adsorbed equally well on both types of filter, and were found to contribute significantly to the total compound mass. Gas-phase adsorption is especially significant during the collection of nanometer-size aerosols and always needs to be taken into account. Another aim of this study was to determine the oxidation products of β-caryophyllene (the major sesquiterpene in boreal forests) in aerosol particles. Since reference compounds are needed for verification of the accuracy of analytical measurements, three oxidation products of β-caryophyllene were synthesized: β-caryophyllene aldehyde, β-nocaryophyllene aldehyde, and β-caryophyllinic acid.
All three were identified for the first time in ambient aerosol samples, at relatively high concentrations, and their contribution to the aerosol mass (and probably growth) was concluded to be significant. Methodological and instrumental developments presented in this work enable fuller understanding of the processes behind biogenic aerosol formation and provide new tools for more precise determination of biosphere-atmosphere interactions.
Abstract:
In this article we introduce and evaluate testing procedures for specifying the number k of nearest neighbours in the weights matrix of spatial econometric models. The spatial J-test is used for the specification search. Two testing procedures are suggested: an increasing-neighbours testing procedure and a decreasing-neighbours testing procedure. Simulations show that the increasing-neighbours testing procedure can be used in large samples to determine k. The decreasing-neighbours testing procedure is found to have low power and is not recommended for use in practice. An empirical example involving house price data is provided to show how to use the testing procedures with real data.
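As an illustration of the object whose order k is being specified, here is a minimal sketch of a row-standardized k-nearest-neighbours spatial weights matrix built from point coordinates. The construction is the textbook one, not taken from the article itself:

```python
import numpy as np

def knn_weights(coords, k):
    """Row-standardized k-nearest-neighbour spatial weights matrix W:
    W[i, j] = 1/k if j is one of the k nearest neighbours of i
    (excluding i itself), else 0. Textbook construction, for illustration."""
    coords = np.asarray(coords, dtype=float)
    n = len(coords)
    # Pairwise Euclidean distances between all locations
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # a location is not its own neighbour
    W = np.zeros((n, n))
    for i in range(n):
        nearest = np.argsort(d[i])[:k]  # indices of the k closest points
        W[i, nearest] = 1.0 / k
    return W

coords = [(0, 0), (0, 1), (1, 0), (2, 2), (2, 3)]
W = knn_weights(coords, k=2)
print(W.sum(axis=1))  # row-standardization: every row sums to 1
```

Changing k changes which spatial lags enter the model, which is exactly the choice the proposed J-test-based procedures are designed to settle.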
Abstract:
Reorganizing a dataset so that its hidden structure can be observed is useful in any data analysis task. For example, detecting a regularity in a dataset helps us to interpret the data, compress the data, and explain the processes behind the data. We study datasets that come in the form of binary matrices (tables with 0s and 1s). Our goal is to develop automatic methods that bring out certain patterns by permuting the rows and columns. We concentrate on the following patterns in binary matrices: consecutive-ones (C1P), simultaneous consecutive-ones (SC1P), nestedness, k-nestedness, and bandedness. These patterns reflect specific types of interplay and variation between the rows and columns, such as continuity and hierarchies. Furthermore, their combinatorial properties are interlinked, which helps us to develop the theory of binary matrices and efficient algorithms. Indeed, we can detect all these patterns in a binary matrix efficiently, that is, in polynomial time in the size of the matrix. Since real-world datasets often contain noise and errors, we rarely witness perfect patterns. Therefore we also need to assess how far an input matrix is from a pattern: we count the number of flips (from 0s to 1s or vice versa) needed to bring out the perfect pattern in the matrix. Unfortunately, for most patterns it is an NP-complete problem to find the minimum distance to a matrix that has the perfect pattern, which means that the existence of a polynomial-time algorithm is unlikely. To find patterns in datasets with noise, we need methods that are noise-tolerant and work in practical time with large datasets. The theory of binary matrices gives rise to robust heuristics that have good performance with synthetic data and discover easily interpretable structures in real-world datasets: dialectical variation in the spoken Finnish language, division of European locations by the hierarchies found in mammal occurrences, and co-occurring groups in network data.
In addition to determining the distance from a dataset to a pattern, we need to determine whether the pattern is significant or merely a product of random chance. To this end, we use significance testing: we deem a dataset significant if it appears exceptional when compared to datasets generated from a certain null hypothesis. After detecting a significant pattern in a dataset, it is up to domain experts to interpret the results in terms of the application.
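As a small illustration of one of the patterns above: a binary matrix is nested exactly when its row supports, ordered by decreasing size, form a chain under inclusion, and this can be checked in polynomial time. A minimal sketch of that check (not the thesis's own implementation):

```python
def is_nested(matrix):
    """True iff the binary matrix is nested, i.e. its rows and columns can
    be permuted so that the 1s form a 'staircase'. Equivalent condition:
    the row supports form a chain under set inclusion."""
    # Support of each row = set of column indices holding a 1
    supports = sorted(
        (frozenset(j for j, v in enumerate(row) if v) for row in matrix),
        key=len, reverse=True)
    # Sorted by decreasing size, a chain means each support contains the next
    return all(b <= a for a, b in zip(supports, supports[1:]))

print(is_nested([[1, 1, 1],
                 [1, 1, 0],
                 [1, 0, 0]]))  # True: supports {0,1,2} ⊇ {0,1} ⊇ {0}
print(is_nested([[1, 1, 0],
                 [0, 1, 1]]))  # False: {0,1} and {1,2} are incomparable
```

For noisy data the corresponding distance question, the minimum number of 0/1 flips to reach a nested matrix, is the NP-complete problem mentioned in the abstract.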
Abstract:
This is a study on the changing practices of kinship in Northern India. The change in kinship arrangements, and particularly in intermarriage processes, is traced by analysing the reception of Hindi popular cinema. Films and their role and meaning in people's lives in India were the object of my research. Films also provided me with a methodology for approaching my other subject-matters: family, marriage and love. Through my discussion of cultural change, the persistence of family as a core value and locus of identity, and the movie discourses depicting this dialogue, I have looked for a possibility of compromise and reconciliation in an Indian context. As the primary form of Indian public culture, cinema has the ability to take part in discourses about Indian identity and cultural change, and alleviate the conflicts that emerge within these discourses. Hindi popular films do this, I argue, by incorporating different familiar cultural narratives in a resourceful way, thus creating something new out of the old elements. The final word, however, belongs to the spectator. The "new" must come from within the culture. Indian modernity must be imaginable and distinctively Indian. The social imagination is not a "Wild West" where new ideas enter the void and start living a life of their own. The way the young women in Dehra Dun interpreted family dramas and romantic movies highlights the importance of family and continuity in kinship arrangements. The institution of arranged marriage has changed its appearance and gained new alternative modes such as love-cum-arranged marriage. It nevertheless remains arranged by the parents. In my thesis I have offered a social description of a cultural reality in which movies act as a built-in part. Movies do not work as a distinct realm, but instead intertwine with the social realities of people as part of a continuum.
The social imagination is rooted in the everyday realities of people, as are the movies, in an ontological and categorical sense. According to my research, the links between imagination and social life were not so much what Arjun Appadurai would call global and deterritorialised, but instead local and conventional.