938 results for "alta risoluzione Trentino Alto Adige data-set climatologia temperatura giornaliera orografia complessa" (high-resolution daily-temperature climatology data set for the complex orography of Trentino-Alto Adige)


Relevance:

100.00%

Publisher:

Abstract:

Clusters of binary patterns can be considered as Boolean functions of the (binary) features. Such a relationship between the linearly separable (LS) Boolean functions and LS clusters of binary patterns is examined. An algorithm is presented to answer questions of the type: "Is the cluster formed by the subsets of the (binary) data set having certain features AND/NOT having certain other features LS from the remaining set?" The algorithm uses sequences in Numbered Binary Form (NBF) notation and some elementary (NPN) transformations of the binary data.
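As an illustration of the underlying question only (not of the NBF/NPN algorithm itself, which the abstract does not detail), the sketch below tests whether a labelled set of binary patterns is linearly separable by solving a small feasibility linear program; the function name and the example functions are assumptions.

import itertools
import numpy as np
from scipy.optimize import linprog

def is_linearly_separable(truth_table):
    """Check whether a Boolean function, given as {input tuple: 0/1}, is linearly separable.

    Solves a feasibility LP: find weights w and bias b with
    w.x + b >= +1 for points labelled 1 and w.x + b <= -1 for points labelled 0.
    """
    n = len(next(iter(truth_table)))
    A_ub, b_ub = [], []
    for x, label in truth_table.items():
        row = np.append(x, 1.0)                    # variables are [w1..wn, b]
        if label == 1:
            A_ub.append(-row); b_ub.append(-1.0)   # -(w.x + b) <= -1
        else:
            A_ub.append(row);  b_ub.append(-1.0)   #  (w.x + b) <= -1
    res = linprog(c=np.zeros(n + 1), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] * (n + 1), method="highs")
    return res.success

# Example: AND is linearly separable, XOR is not.
inputs = list(itertools.product([0, 1], repeat=2))
print(is_linearly_separable({x: int(x[0] and x[1]) for x in inputs}))  # True
print(is_linearly_separable({x: x[0] ^ x[1] for x in inputs}))         # False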

Relevance:

100.00%

Publisher:

Abstract:

Markov random fields (MRF) are popular in image processing applications to describe spatial dependencies between image units. Here, we take a look at the theory and the models of MRFs with an application to improving forest inventory estimates. Typically, autocorrelation between study units is a nuisance in statistical inference, but we take advantage of the dependencies to smooth noisy measurements by borrowing information from the neighbouring units. We build a stochastic spatial model, which we estimate with a Markov chain Monte Carlo simulation method. The smoothed values are validated against another data set, increasing our confidence that the estimates are more accurate than the originals.
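A minimal sketch of the "borrowing from neighbours" idea: a Gaussian MRF smoother over a grid of plots, fitted here by simple iterated conditional updates rather than the MCMC estimation used in the study; the grid layout, precision parameters and synthetic data are assumptions.

import numpy as np

def mrf_smooth(y, beta=1.0, sigma2=1.0, iterations=50):
    """Smooth noisy plot measurements y (2-D grid) under a Gaussian MRF prior.

    Each cell is repeatedly replaced by the mean of its full conditional, which combines
    the observation y[i, j] with the values of its available 4-neighbours.
    """
    x = y.astype(float).copy()
    rows, cols = y.shape
    for _ in range(iterations):
        for i in range(rows):
            for j in range(cols):
                nbrs = [x[a, b] for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= a < rows and 0 <= b < cols]
                x[i, j] = (y[i, j] / sigma2 + beta * sum(nbrs)) / (1.0 / sigma2 + beta * len(nbrs))
    return x

# Synthetic example: a smooth stand-volume surface observed with plot-level measurement error.
rng = np.random.default_rng(1)
truth = np.add.outer(np.linspace(100, 200, 20), np.linspace(0, 50, 20))
noisy = truth + rng.normal(scale=25, size=truth.shape)
smoothed = mrf_smooth(noisy, beta=0.5, sigma2=25.0 ** 2)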

Relevance:

100.00%

Publisher:

Abstract:

The new paradigm of connectedness and empowerment brought by the interactivity of Web 2.0 has been challenging the traditionally centralized performance of mainstream media. The corporation has been able to survive these strong winds by transforming itself into a global multimedia business network embedded in the network society. By establishing networks, e.g. networks of production and distribution, the global multimedia business network has been able to identify potential solutions by opening the doors to innovation in a decentralized and flexible manner. Under this emerging context of re-organization, traditional practices like sourcing need to be re-explained, and that is precisely what this thesis attempts to tackle. Based on ICT and on the network society, the study seeks to explain, within the Finnish context, the particular case of Helsingin Sanomat (HS) and its relations with the youth news agency, Youth Voice Editorial Board (NÄT). In that sense, the study can be regarded as an explanatory embedded single case study, where HS is the principal unit of analysis and NÄT its embedded unit of analysis. The thesis reached its explanations through interrelated steps. First, it determined the role of ICT in HS's sourcing practices. It then mapped an overview of HS's sourcing relations and provided a context in which NÄT was located. Finally, it established conceptualized institutional relational data between HS and NÄT for their subsequent measurement through social network analysis. The data set was collected via qualitative interviews with online and offline editors of HS as well as with NÄT's personnel. The study concluded that ICT's interactivity and User Generated Content (UGC) are not sourcing tools as such but mechanisms used by HS for getting ideas that could turn into potential news stories. However, when it comes to visual communication, some exceptions were found. The lack of official sources amid the immediacy leads HS to rely on ICT's interaction and UGC. More than meets the eye, ICT's input into the sourcing practice may be more noticeable if the interaction and UGC are well organized and coordinated into proper and innovative networks of alternative content collaboration. Currently, HS performs this sourcing practice via two projects that differ precisely in the way they are coordinated. The first project, Omakaupunki, is coordinated internally by the Sanoma Group-owned media houses HS, Vartti and Metro. The second project is coordinated externally. The external alternative sourcing network, as it was labeled, consists of three actors, namely HS, NÄT (the professionals in charge) and the youth. This network is a balanced and complete triad in which the actors connect themselves in relations of feedback, recognition, creativity and filtering. However, as innovation is approached very reluctantly, this content collaboration is a laboratory of experiments; a 'COLLABORATORY'.

Relevance:

100.00%

Publisher:

Abstract:

Metabolomics is a rapidly growing research field that studies the response of biological systems to environmental factors, disease states and genetic modifications. It aims at measuring the complete set of endogenous metabolites, i.e. the metabolome, in a biological sample such as plasma or cells. Because metabolites are the intermediates and end products of biochemical reactions, metabolite compositions and metabolite levels in biological samples can provide a wealth of information on ongoing processes in a living system. Due to the complexity of the metabolome, metabolomic analysis poses a challenge to analytical chemistry. Adequate sample preparation is critical to accurate and reproducible analysis, and the analytical techniques must have high resolution and sensitivity to allow detection of as many metabolites as possible. Furthermore, as the information contained in the metabolome is immense, the data set collected from metabolomic studies is very large. In order to extract the relevant information from such large data sets, efficient data processing and multivariate data analysis methods are needed. In the research presented in this thesis, metabolomics was used to study mechanisms of polymeric gene delivery to retinal pigment epithelial (RPE) cells. The aim of the study was to detect differences in metabolomic fingerprints between transfected cells and non-transfected controls, and thereafter to identify the metabolites responsible for the discrimination. The plasmid pCMV-β was introduced into RPE cells using the vector polyethyleneimine (PEI). The samples were analyzed using high-performance liquid chromatography (HPLC) and ultra-performance liquid chromatography (UPLC) coupled to a triple quadrupole (QqQ) mass spectrometer (MS). The software MZmine was used for raw data processing, and principal component analysis (PCA) was used in statistical data analysis. The results revealed differences in metabolomic fingerprints between transfected cells and non-transfected controls. However, reliable fingerprinting data could not be obtained because of low analysis repeatability. Therefore, no attempts were made to identify the metabolites responsible for discrimination between sample groups. Repeatability and accuracy of analyses can be improved by protocol optimization; however, in this study, optimization of analytical methods was hindered by the very small number of samples available for analysis. In conclusion, this study demonstrates that obtaining reliable fingerprinting data is technically demanding, and the protocols need to be thoroughly optimized in order to reach the goal of gaining information on mechanisms of gene delivery.
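For context, a minimal PCA fingerprinting sketch of the kind described above; the synthetic peak table, group labels and group sizes are illustrative assumptions standing in for an aligned peak list exported from MZmine, not details from the study.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for an aligned peak-intensity table (rows = samples, columns = peaks).
rng = np.random.default_rng(0)
n_peaks = 200
controls = rng.lognormal(mean=8.0, sigma=0.3, size=(3, n_peaks))
transfected = rng.lognormal(mean=8.0, sigma=0.3, size=(3, n_peaks))
transfected[:, :20] *= 2.0                    # pretend a subset of metabolites responds to transfection
peaks = np.vstack([transfected, controls])
groups = ["transfected"] * 3 + ["control"] * 3

X = StandardScaler().fit_transform(np.log10(peaks))   # log-transform and autoscale each peak
scores = PCA(n_components=2).fit_transform(X)          # project fingerprints onto the first two PCs

for (pc1, pc2), g in zip(scores, groups):
    print(f"{g:12s}  PC1={pc1:7.2f}  PC2={pc2:7.2f}")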

Relevance:

100.00%

Publisher:

Abstract:

A QSPR model describes the quantitative relationship between descriptor variables and a biological property. QSPR models are therefore useful tools in drug development. The literature review describes permeability models for the cornea, the intestine and the blood-brain barrier. The most commonly used descriptors are the lipophilicity, polar surface area, hydrogen bonding and charge of the compound. Molecular size also affects permeability, although studies differ on how significant this is. A model is also affected by the magnitude of variables other than those included in it; an example is Lipinski's "rule of 5" classification, in which a compound's properties must not exceed certain threshold values, otherwise its absorption after oral administration is likely to be compromised. The literature review also covers transporter proteins and their function in the cornea of the eye, the intestine and the blood-brain barrier. Various QSAR models have now been developed for transporter proteins to predict the interactions of potential substrates or inhibitors with the transporter. The aim of the experimental part was to build an in silico model for passive corneal permeability. A QSPR model was constructed for 54 compounds using descriptor values calculated with the ACDLabs software. The permeability coefficient values were taken from published rabbit corneal permeability studies. The descriptors of the final model were the octanol-water distribution coefficient (logD) at pH 7.4 and the total number of hydrogen-bonding atoms. The equation had the form log10(permeability coefficient) = -3.96791 - 0.177842 · Htotal + 0.311963 · logD(pH 7.4). The R2 correlation coefficient was 0.77 and the Q2 correlation coefficient 0.75. The performance of the final model was assessed with an external test set of 15 compounds, comparing the predicted permeability with the experimental permeability. The QSPR model was also evaluated by means of a pharmacokinetic simulation, in which steady-state aqueous humour concentrations in vivo were calculated for seven compounds using the permeability coefficients predicted by the QSPR model. In addition, the corneal absorption rate constant (Kc) was calculated for 13 compounds by pharmacokinetic simulation and compared with the permeability predicted by the final model. Based on the results, a statistically good QSPR model was obtained for describing passive corneal permeability, so the model can be used in the early stages of drug development. The QSPR model predicted the permeability coefficients well, as seen by comparing the values predicted by the model with the experimental results. Furthermore, the aqueous humour concentrations of the compounds could be simulated using the permeability coefficient values predicted by the QSPR model.
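A minimal sketch of the reported two-descriptor equation as a prediction function; the function name is illustrative, and the units of the returned coefficient are assumed to follow those of the underlying rabbit corneal permeability data.

def predict_corneal_permeability(h_total: float, logd_ph74: float) -> float:
    """Corneal permeability coefficient from the final QSPR model reported above.

    h_total   : total number of hydrogen-bonding atoms in the compound
    logd_ph74 : octanol-water distribution coefficient (logD) at pH 7.4
    Returns the permeability coefficient (units as in the source rabbit-cornea data).
    """
    log10_perm = -3.96791 - 0.177842 * h_total + 0.311963 * logd_ph74
    return 10.0 ** log10_perm

# Example: a hypothetical compound with 6 hydrogen-bonding atoms and logD(7.4) = 1.5.
print(predict_corneal_permeability(6, 1.5))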

Relevance:

100.00%

Publisher:

Abstract:

The relationship between site characteristics and understorey vegetation composition was analysed with quantitative methods, especially from the viewpoint of site quality estimation. Theoretical models were applied to an empirical data set collected from the upland forests of southern Finland comprising 104 sites dominated by Scots pine (Pinus sylvestris L.), and 165 sites dominated by Norway spruce (Picea abies (L.) Karsten). Site index H100 was used as an independent measure of site quality. A new model for the estimation of site quality at sites with a known understorey vegetation composition was introduced. It is based on the application of Bayes' theorem to the density function of site quality within the study area combined with the species-specific presence-absence response curves. The resulting posterior probability density function may be used for calculating an estimate for the site variable. Using this method, a jackknife estimate of site index H100 was calculated separately for pine- and spruce-dominated sites. The results indicated that the cross-validation root mean squared error (RMSEcv) of the estimates improved from 2.98 m down to 2.34 m relative to the "null" model (standard deviation of the sample distribution) in pine-dominated forests. In spruce-dominated forests RMSEcv decreased from 3.94 m down to 3.16 m. In order to assess these results, four other estimation methods based on understorey vegetation composition were applied to the same data set. The results showed that none of the methods was clearly superior to the others. In pine-dominated forests, RMSEcv varied between 2.34 and 2.47 m, and the corresponding range for spruce-dominated forests was from 3.13 to 3.57 m.
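A minimal numerical sketch of the Bayesian estimation step described above: a prior density for site index H100 is multiplied by species-specific presence-absence likelihoods and normalised. The species names, response-curve shapes and the Gaussian prior are purely illustrative assumptions, not the fitted quantities of the study.

import numpy as np

def presence_prob(h100, midpoint, slope):
    """Assumed logistic presence-probability response curve for one understorey species."""
    return 1.0 / (1.0 + np.exp(-slope * (h100 - midpoint)))

# Illustrative response-curve parameters (midpoint, slope).
species_curves = {
    "Vaccinium myrtillus": (22.0, 0.4),
    "Pleurozium schreberi": (18.0, -0.3),
    "Oxalis acetosella": (27.0, 0.6),
}

def posterior_site_index(observed, h_grid, prior_density):
    """Posterior density of H100 given presence (1) / absence (0) of each observed species."""
    dh = h_grid[1] - h_grid[0]
    post = prior_density.copy()
    for species, present in observed.items():
        mid, slope = species_curves[species]
        p = presence_prob(h_grid, mid, slope)
        post *= p if present else (1.0 - p)   # Bayes: prior x likelihood, species treated as independent
    return post / (post.sum() * dh)           # normalise to a density

h_grid = np.linspace(5.0, 35.0, 301)
prior = np.exp(-0.5 * ((h_grid - 24.0) / 4.0) ** 2)   # assumed prior over the study area
prior /= prior.sum() * (h_grid[1] - h_grid[0])
post = posterior_site_index({"Vaccinium myrtillus": 1, "Oxalis acetosella": 0}, h_grid, prior)
h_estimate = (h_grid * post).sum() * (h_grid[1] - h_grid[0])   # posterior-mean estimate of H100
print(round(h_estimate, 2))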

Relevance:

100.00%

Publisher:

Abstract:

The factors affecting the non-industrial, private forest landowners' (hereafter referred to using the acronym NIPF) strategic decisions in management planning are studied. A genetic algorithm is used to induce a set of rules predicting the potential cut of the landowners' preferred timber management strategies. The rules are based on variables describing the characteristics of the landowners and their forest holdings. The predictive ability of the genetic algorithm is compared with linear regression analysis using identical data sets. The data are cross-validated seven times, applying both genetic algorithm and regression analyses, in order to examine the data sensitivity and robustness of the generated models. The optimal rule set derived from the genetic algorithm analyses included the following variables: mean initial volume, the landowner's positive price expectations for the next eight years, the landowner being classified as a farmer, and preference for the recreational use of the forest property. When tested with previously unseen test data, the optimal rule set resulted in a relative root mean square error of 0.40. In the regression analyses, the optimal regression equation consisted of the following variables: mean initial volume, proportion of forestry income, intention to cut extensively in the future, and positive price expectations for the next two years. The R2 of the optimal regression equation was 0.34 and the relative root mean square error obtained from the test data was 0.38. In both models, mean initial volume and positive stumpage price expectations were entered as significant predictors of the potential cut of the preferred timber management strategy. When tested with the complete data set of 201 observations, both the optimal rule set and the optimal regression model achieved the same level of accuracy.
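A small sketch of the seven-fold cross-validation comparison described above, scored by relative RMSE; any fitted predictor (the induced rule set or the regression equation) could be plugged into the same loop. The definition of relative RMSE used here (RMSE divided by the mean observed value) and the synthetic data are assumptions.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

def relative_rmse(y_true, y_pred):
    # One common convention; the study may use a different normalisation.
    return np.sqrt(np.mean((y_true - y_pred) ** 2)) / np.mean(y_true)

def cv_relative_rmse(model, X, y, folds=7, seed=0):
    """Seven-fold cross-validated relative RMSE, mirroring the sevenfold validation in the abstract."""
    errs = []
    for train, test in KFold(n_splits=folds, shuffle=True, random_state=seed).split(X):
        fitted = model.fit(X[train], y[train])
        errs.append(relative_rmse(y[test], fitted.predict(X[test])))
    return float(np.mean(errs))

# Illustrative synthetic data standing in for landowner/holding variables and potential cut.
rng = np.random.default_rng(0)
X = rng.normal(size=(201, 4))          # e.g. mean initial volume, price expectations, ...
y = 50 + 30 * X[:, 0] + 10 * X[:, 1] + rng.normal(scale=15, size=201)
print(cv_relative_rmse(LinearRegression(), X, y))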

Relevance:

100.00%

Publisher:

Abstract:

Uveal melanoma (UM) is the second most common primary intraocular cancer worldwide. It is a relatively rare cancer, but still the second most common type of primary malignant melanoma in humans. UM is a slowly growing tumor that gives rise to distant metastases, mainly to the liver via the bloodstream. About 40% of patients with UM die of metastatic disease within 10 years of diagnosis, irrespective of the type of treatment. During the last decade, two main lines of research have aimed to achieve enhanced understanding of the metastatic process and accurate prognosis of patients with UM. One emphasizes the characteristics of tumor cells, particularly their nucleoli, and markers of proliferation, and the other the characteristics of tumor blood vessels. Of several morphometric measurements, the mean diameter of the ten largest nucleoli (MLN) has become the most widely applied. A large MLN has consistently been associated with a high likelihood of dying from UM. Blood vessels are of paramount importance in the metastasis of UM. Different extravascular matrix patterns, such as loops and networks, can be seen in UM, and their presence is associated with death from metastatic melanoma. The density of microvessels is also of prognostic importance. This study was undertaken to help understand some histopathological factors that might contribute to the development of metastasis in patients with UM. Factors that could be related to tumor progression to metastatic disease, namely nucleolar size (MLN), microvascular density (MVD), cell proliferation, and the insulin-like growth factor 1 receptor (IGF-1R), were investigated. The primary aim of this thesis was to study the relationship between prognostic factors such as tumor cell nucleolar size, proliferation, extravascular matrix patterns, and dissemination of UM, and to assess to what extent there is a relationship to metastasis. The secondary goal was to develop a multivariate model which includes MLN and cell proliferation in addition to MVD, and which would fit population-based, melanoma-related survival data better than previous models. I studied 167 patients with UM who developed metastases, in some cases even after a very long time following removal of the eye; metastatic disease was the main cause of death, as documented in the Finnish Cancer Registry and on death certificates. Using an independent population-based data set, it was confirmed that MLN and extravascular matrix loops and networks were unrelated, independent predictors of survival in UM. It was also found that multivariate models including MVD in addition to MLN fitted survival data significantly better than models which excluded MVD. This supports the idea that both the characteristics of the blood vessels and those of the cells are important, and a future direction would be to examine whether the gene expression profile is associated more with MVD or with MLN. The former relates to the host response to the tumor and may not be as tightly associated with the gene expression profile, yet it is most likely involved in the process of hematogenous metastasis. Because fresh tumor material is needed for reliable genetic analysis, such analysis could not be performed. Although noninvasive detection of certain extravascular matrix patterns in managing patients with UM is now technically possible, this study and tumor genetics suggest that such noninvasive methods will not fully capture the process of clinical metastasis.
Progress in resection and biopsy techniques is likely in the near future to provide fresh material for the ophthalmic pathologist, allowing angiographic data, histopathological characteristics such as MLN, and genetic data to be correlated. Based on cell proliferation in UM assessed by Ki-67 immunoreactivity, this study supported the theory that tumors containing epithelioid cells grow faster and have a poorer prognosis. The cell proliferation index fitted the survival data best when combined with MVD, MLN, and the presence of epithelioid cells. Analogous to the finding that high MVD in primary UM is associated with a shorter time to metastasis than low MVD, high MVD in hepatic metastases tends to be associated with shorter survival after the diagnosis of metastasis. Because the liver is the main target organ for metastasis from UM, growth factors largely produced in the liver (hepatocyte growth factor, epidermal growth factor and insulin-like growth factor 1, IGF-1), together with their receptors, may have a role in the homing and survival of metastatic cells. Therefore, the association between immunoreactivity for IGF-1R in primary UM and metastatic death was studied. It was found that immunoreactivity for IGF-1R did not independently predict metastasis from primary UM in my series.

Relevance:

100.00%

Publisher:

Abstract:

In Salmonella typhimurium, propionate is oxidized to pyruvate via the 2-methylcitric acid cycle. The last step of this cycle, the cleavage of 2-methylisocitrate to succinate and pyruvate, is catalysed by 2-methylisocitrate lyase (EC 4.1.3.30). Methylisocitrate lyase (molecular weight 32 kDa) with a C-terminal polyhistidine affinity tag has been cloned and overexpressed in Escherichia coli, purified, and crystallized under different conditions using the hanging-drop vapour-diffusion technique. The crystals belong to the orthorhombic space group P2(1)2(1)2(1), with unit-cell parameters a = 63.600, b = 100.670, c = 204.745 Angstrom. A complete data set to 2.5 Angstrom resolution has been collected using an image-plate detector system mounted on a rotating-anode X-ray generator.

Relevance:

100.00%

Publisher:

Abstract:

This study examines the diaconia work of the Finnish Evangelical Lutheran Church from the standpoint of its clients. The role of diaconia work has grown since the recession of the early 1990s, when it established itself as one of the actors alongside other social organizations. Previous studies have described the changing role of diaconal work, especially from the standpoint of diaconia workers and their co-operators. This research goes beyond the everyday practices of diaconia work to examine the relations of ruling that determine those practices. The theoretical and methodological framework arises from the thinking of Dorothy E. Smith, the creator of institutional ethnography. Its origins are in feminism, Marxism, phenomenology, ethnomethodology, and symbolic interactionism, although it does not represent any single school. Unlike objectivity-based traditional sociology, institutional ethnography takes as its starting point everyday life and people's subjective experience of it. Everyday life is only a starting point; it is used to examine the hidden relations of ruling, linking people and organizations, that shape everyday experience. Generalization is made only at the level of the relations of ruling. The research task is to examine those meanings of diaconia work which are embedded in its clients' experiences. The research task is investigated with two questions: how diaconia work among its clients takes shape, and what kinds of relations of ruling exist in diaconia work. The meanings of diaconia work emerge through an examination of the relations of ruling, which reveals new forms of diaconal work compared with previous studies. Two kinds of data were collected for the study: a questionnaire and ethnographic fieldwork. The first data set was collected from diaconal workers using the questionnaire; it gives background information on the diaconia work process from the standpoint of the clients. The ethnographic study had two phases. The first ethnographic material was collected from one local parish by observing, interviewing clients and diaconal workers, and gathering documents. The observations covered 36 client appointments, and 29 interviews were conducted. The second ethnographic material was included as part of the analysis, in which ruling relations in people's experiences were collected from the transcribed data. Close reading and narrative analysis are used as methods of analysis. The analysis has three phases. First, the experiences are identified through close reading; the next step is to select some of the institutional processes that shape those experiences and are relevant for the research. At the third stage, those processes are investigated in order to describe analytically how they determine people's experience. The analysis produces another narrative about diaconia work, which provides tools for examining diaconal work from a new perspective. Through the analysis it is possible to see diaconia as an exchange relation, in which the exchange takes place between a client and a diaconia worker, but also more broadly with other actors, such as social workers, shop clerks, or other parishioners. The exchange relation is examined from the perspective of the power embedded in the clients' experiences. The analysis reveals that the most important relations of ruling are humiliation and randomness in the exchange relation of diaconia work; valuing spirituality above bodily being; and replacing official social work.
The results give a map of the relations of ruling in diaconia work, which provides tools for examining the meanings of diaconia work for its clients. The hidden element of humiliation in the exchange relation breaks the current picture of diaconia work. The ethos of holistic encounter and empathic practice is shown to be of another kind when spirituality is preferred to bodily being. Nevertheless, diaconia appears to be a place for a respectful encounter, especially in situations where the public sector's actors are retreating from their responsibilities or clients are in a life crisis. The collapse of welfare state structures imposes on diaconia work tasks that have not previously belonged to it. At the local level, clients find in diaconia workers partners who advocate for them in the welfare system. Actions to influence wider societal structures are not undertaken because of a lack of resources. Awareness of the oppressive practices of diaconia work and their critical review are the keys to the development of diaconia work, since such practices exist even in holistic and respectful diaconia work. While the research produces new information for the development of diaconia work, it also opens up new aspects for developing other kinds of social work by emphasizing the importance of taking people's experiences seriously. Keywords: diaconia work, institutional ethnography, Dorothy E. Smith, experience, customer, relations of ruling.

Relevance:

100.00%

Publisher:

Abstract:

The importance and usefulness of local doublet parameters in understanding sequence-dependent effects have been described for A- and B-DNA oligonucleotide crystal structures. Each of the two sets of local parameters described by us in the NUPARM algorithm, namely the local doublet parameters, calculated with reference to the mean z-axis, and the local helical parameters, calculated with reference to the local helix axis, is sufficient to describe the oligonucleotide structures, with the local helical parameters giving a slightly magnified picture of the variations in the structures. The values of local doublet parameters calculated by the NUPARM algorithm are similar to those calculated by the NEWHELIX90 program only if the oligonucleotide fragment is not too distorted. The mean values obtained using all the available data for B-DNA crystals are not significantly different from those obtained when a limited data set is used, consisting only of structures with a data resolution better than 2.4 Å and without any bound drug molecule. Thus the variation observed in the oligonucleotide crystals appears to be independent of the quality of their crystallinity. No strong correlation is seen between any pair of local doublet parameters, but the local helical parameters are interrelated by geometric relationships. An interesting feature that emerges from this analysis is that the local rise along the z-axis is highly correlated with the difference in the buckle values of the two basepairs in the doublet, as suggested earlier for the dodecamer structures (Bansal and Bhattacharyya, in Structure & Methods: DNA & RNA, Vol. 3 (Eds., R.H. Sarma and M.H. Sarma), pp. 139-153 (1990)). In fact, the local rise values become almost constant for both the A- and B-forms if a correction is applied for the buckling of the basepairs. In B-DNA, the AA, AT, TA and GA basepair sequences generally have a smaller local rise (3.25 Å) than the other sequences (3.4 Å), and this seems to be an intrinsic feature of basepair stacking interaction, not related to any other local doublet parameter. The roll angles in B-DNA oligonucleotides have small values (less than +/- 8 degrees), while the mean local twist varies from 24 degrees to 45 degrees. The CA/TG doublet sequences show two types of preferred geometries, one with positive roll, small positive slide and reduced twist, and another with negative roll, large positive slide and increased twist. (ABSTRACT TRUNCATED AT 400 WORDS)

Relevance:

100.00%

Publisher:

Abstract:

The K-means algorithm for clustering is very much dependent on the initial seed values. We use a genetic algorithm to find a near-optimal partitioning of the given data set by selecting proper initial seed values for the K-means algorithm. The results obtained are very encouraging, and in most cases, on data sets having well-separated clusters, the proposed scheme reached the global minimum.
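A rough sketch of the idea, assuming a simple GA encoding in which each individual is a set of k data-point indices and fitness is the negative K-means inertia after convergence from those seeds; the abstract does not specify the actual encoding or operators, so these choices are illustrative.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def fitness(X, seed_idx, k):
    # Negative K-means inertia after running K-means from the candidate seed points.
    km = KMeans(n_clusters=k, init=X[seed_idx], n_init=1).fit(X)
    return -km.inertia_

def ga_select_seeds(X, k, pop_size=20, generations=30, mutation_rate=0.2):
    n = len(X)
    pop = [rng.choice(n, size=k, replace=False) for _ in range(pop_size)]   # individuals = index sets
    for _ in range(generations):
        scores = np.array([fitness(X, ind, k) for ind in pop])
        order = np.argsort(scores)[::-1]
        parents = [pop[i] for i in order[: pop_size // 2]]                  # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.choice(len(parents), size=2, replace=False)
            child = rng.permutation(np.union1d(parents[a], parents[b]))[:k]  # crossover: mix parent indices
            if rng.random() < mutation_rate:
                child[rng.integers(k)] = rng.integers(n)                     # mutation: swap in a random point
            children.append(child if len(np.unique(child)) == k
                            else rng.choice(n, size=k, replace=False))
        pop = parents + children
    best = max(pop, key=lambda ind: fitness(X, ind, k))
    return X[best]

# Three well-separated synthetic clusters; run K-means once from the GA-selected seeds.
X = rng.normal(size=(300, 2)) + np.repeat(np.array([[0, 0], [6, 0], [0, 6]]), 100, axis=0)
final = KMeans(n_clusters=3, init=ga_select_seeds(X, k=3), n_init=1).fit(X)
print(final.inertia_)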

Relevance:

100.00%

Publisher:

Abstract:

We have delineated rainfall zones for the Indian region that are coherent with respect to the variations of the summer monsoon rainfall. Within each zone, the time series of the summer monsoon rainfall at every pair of stations are significantly positively correlated, and the mean interseries correlation for each zone is high. The interseries correlation data set is analysed in order to delineate the rainfall zones, using an objective method developed specifically for the purpose. Each of the zonal averages is shown to be representative of the zone as a whole. We suggest that this regionalization is appropriate for studying the variation of the summer monsoon rainfall over the Indian region on interannual and larger scales.
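A small sketch that checks the two stated zone criteria (every station pair significantly positively correlated, and a high mean interseries correlation) for one candidate zone; the objective delineation method itself is not described in the abstract, and the significance level and synthetic data are assumptions.

import numpy as np
from scipy.stats import pearsonr

def zone_coherence(rainfall, alpha=0.05):
    """Check the coherence criteria for one candidate zone.

    rainfall: array of shape (n_stations, n_years) of summer monsoon rainfall totals.
    Returns (every pair significantly positively correlated?, mean interseries correlation).
    """
    n_stations = rainfall.shape[0]
    corrs, all_pairs_ok = [], True
    for i in range(n_stations):
        for j in range(i + 1, n_stations):
            r, p = pearsonr(rainfall[i], rainfall[j])
            corrs.append(r)
            if not (r > 0 and p < alpha):
                all_pairs_ok = False
    return all_pairs_ok, float(np.mean(corrs))

# Synthetic zone: five stations sharing a common interannual signal plus local noise.
rng = np.random.default_rng(0)
common = rng.normal(size=50)                                # 50 monsoon seasons
stations = 800 + 100 * common + rng.normal(scale=60, size=(5, 50))
print(zone_coherence(stations))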

Relevance:

100.00%

Publisher:

Abstract:

Cross-strand disulfides bridge two cysteines in a registered pair of antiparallel beta-strands. A nonredundant data set comprising 5025 polypeptides containing 2311 disulfides was used to study cross-strand disulfides. Seventy-six cross-strand disulfides were found, of which 75 and 1 occurred at non-hydrogen-bonded (NHB) and hydrogen-bonded (HB) registered pairs, respectively. Conformational analysis and modeling studies demonstrated that disulfide formation at HB pairs necessarily requires an extremely rare, positive χ1 value for at least one of the cysteine residues. Disulfides at HB positions also have more unfavorable steric repulsion with the main chain. Thirteen pairs of disulfides were introduced at NHB and HB pairs in four model proteins: leucine binding protein (LBP), leucine, isoleucine, valine binding protein (LIVBP), maltose binding protein (MBP), and Top7. All mutants except LIVBP T247C V331C showed disulfide formation either on purification or on treatment with oxidants. Protein stability in both the oxidized and reduced states of all mutants was measured. Relative to wild type, the LBP and MBP mutants were destabilized with respect to chemical denaturation, although the sole exposed NHB LBP mutant showed an increase of 3.1 °C in Tm. All Top7 mutants were characterized for stability through guanidinium thiocyanate chemical denaturation. Both exposed and two of the three buried NHB mutants were appreciably stabilized. All four HB Top7 mutants were destabilized (ΔΔG° = -3.3 to -6.7 kcal/mol). The data demonstrate that introduction of cross-strand disulfides at exposed NHB pairs is a robust method of improving protein stability. All four exposed Top7 disulfide mutants showed mild redox activity. Proteins 2011; 79: 244-260. (C) 2010 Wiley-Liss, Inc.

Relevance:

100.00%

Publisher:

Abstract:

Our study concerns an important current problem, that of diffusion of information in social networks. This problem has received significant attention from the Internet research community in recent times, driven by many potential applications such as viral marketing and sales promotions. In this paper, we focus on the target set selection problem, which involves discovering a small subset of influential players in a given social network to perform a certain task of information diffusion. The target set selection problem manifests in two forms: 1) the top-k nodes problem and 2) the lambda-coverage problem. In the top-k nodes problem, we are required to find a set of k key nodes that would maximize the number of nodes being influenced in the network. The lambda-coverage problem is concerned with finding a set of key nodes of minimal size that can influence a given percentage lambda of the nodes in the entire network. We propose a new way of solving these problems using the concept of the Shapley value, which is a well-known solution concept in cooperative game theory. Our approach leads to algorithms which we call the ShaPley value-based Influential Nodes (SPIN) algorithms for solving the top-k nodes problem and the lambda-coverage problem. We compare the performance of the proposed SPIN algorithms with well-known algorithms in the literature. Through extensive experimentation on four synthetically generated random graphs and six real-world data sets (Celegans, Jazz, NIPS coauthorship data set, Netscience data set, High-Energy Physics data set, and Political Books data set), we show that the proposed SPIN approach is more powerful and computationally efficient.
Note to Practitioners: In recent times, social networks have received a high level of attention due to their proven ability to improve the performance of web search, recommendations in collaborative filtering systems, the spreading of a technology in the market through viral marketing techniques, etc. It is well known that the interpersonal relationships (or ties or links) between individuals cause change or improvement in the social system, because the decisions made by individuals are influenced heavily by the behavior of their neighbors. An interesting and key problem in social networks is to discover the most influential nodes in the social network, which can influence other nodes in a strong and deep way. This problem is called the target set selection problem and has two variants: 1) the top-k nodes problem, where we are required to identify a set of k influential nodes that maximize the number of nodes being influenced in the network, and 2) the lambda-coverage problem, which involves finding a set of influential nodes of minimum size that can influence a given percentage lambda of the nodes in the entire network. There are many existing algorithms in the literature for solving these problems. In this paper, we propose a new algorithm which is based on a novel interpretation of information diffusion in a social network as a cooperative game. Using this analogy, we develop an algorithm based on the Shapley value of the underlying cooperative game. The proposed algorithm outperforms the existing algorithms in terms of generality or computational complexity or both. Our results are validated through extensive experimentation on both synthetically generated and real-world data sets.
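To make the game-theoretic idea concrete, here is a rough sketch that ranks nodes by a Monte Carlo estimate of their Shapley value in an influence game; the independent-cascade diffusion model, its parameters, and the karate-club example graph are assumptions for illustration and are not the specific formulation used by the SPIN algorithms.

import random
import networkx as nx

def influence(G, seed_set, p=0.1, runs=20):
    # Expected spread of an independent-cascade diffusion from seed_set, estimated by Monte Carlo.
    total = 0
    for _ in range(runs):
        active, frontier = set(seed_set), list(seed_set)
        while frontier:
            new = []
            for u in frontier:
                for v in G.neighbors(u):
                    if v not in active and random.random() < p:
                        active.add(v)
                        new.append(v)
            frontier = new
        total += len(active)
    return total / runs

def shapley_topk(G, k, permutations=50):
    # Approximate each node's Shapley value in the influence game by sampling node permutations:
    # a node's marginal contribution is the gain in spread when it joins the coalition before it.
    nodes = list(G.nodes())
    phi = dict.fromkeys(nodes, 0.0)
    for _ in range(permutations):
        random.shuffle(nodes)
        coalition, prev = [], 0.0
        for v in nodes:
            coalition.append(v)
            cur = influence(G, coalition)
            phi[v] += cur - prev
            prev = cur
    return sorted(phi, key=phi.get, reverse=True)[:k]

G = nx.karate_club_graph()
print(shapley_topk(G, k=3))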