Abstract:
I argue that the initial set of firm-specific assets (FSAs) acts as an envelope for the early stages of internationalization of multinational enterprises (MNEs), of whatever nationality, and that there is a threshold level of FSAs that a firm must possess for such international expansion to be successful. I also argue that the initial FSAs of an MNE tend to be constrained by the location-specific (L) assets of its home country. However, beyond different initial conditions, there are few obvious reasons to insist that infant developing-country MNEs differ in character from advanced-economy MNEs, and I predict that, as they evolve, the observable differences between the two groups will diminish. Successful firms will increasingly explore internationalization, but there is also no reason to believe that this is likely to happen disproportionately from the developing countries.
Abstract:
Studies of face recognition and discrimination provide a rich source of data and debate on the nature of their processing, in particular through the use of inverted faces. This study draws parallels between the features of typefaces and faces, as letters share a basic configuration, regardless of typeface, that could be seen as similar to faces. Typeface discrimination is compared using paragraphs of upright letters and inverted letters at three viewing durations. Based on previously reported effects of expertise, the prediction that designers would be less accurate when letters are inverted, whereas nondesigners would have similar performance in both orientations, was confirmed. A proposal is made as to which spatial relations between typeface components constitute holistic and configural processing, posited as the basis for better discrimination of the typefaces of upright letters. Such processing may characterize designers’ perceptual abilities, acquired through training.
Abstract:
In the heart, inflammatory cytokines including interleukin (IL) 1β are implicated in regulating adaptive and maladaptive changes, whereas IL33 negatively regulates cardiomyocyte hypertrophy and promotes cardioprotection. These agonists signal through a common co-receptor but, in cardiomyocytes, IL1β more potently activates mitogen-activated protein kinases and NFκB, pathways that regulate gene expression. We compared the effects of external application of IL1β and IL33 on the cardiomyocyte transcriptome. Neonatal rat cardiomyocytes were exposed to IL1β or IL33 (0.5, 1 or 2h). Transcriptomic profiles were determined using Affymetrix rat genome 230 2.0 microarrays and data were validated by quantitative PCR. IL1β induced significant changes in more RNAs than IL33 and, generally, to a greater degree. It also had a significantly greater effect in downregulating mRNAs and in regulating mRNAs associated with selected pathways. IL33 had a greater effect on a small, select group of specific transcripts. Thus, differences in intensity of intracellular signals can deliver qualitatively different responses. Quantitatively different responses in production of receptor agonists and transcription factors may contribute to qualitative differences at later times resulting in different phenotypic cellular responses.
Abstract:
This paper describes the implementation of a semantic web search engine on conversation-style transcripts. Our choice of data is Hansard, a publicly available conversation-style transcript of parliamentary debates. The current search engine implementation on Hansard is limited to running search queries based on keywords or phrases, and hence lacks the ability to make semantic inferences from user queries. By making use of knowledge such as the relationships between members of parliament, constituencies, terms of office and topics of debates, the search results can be improved in terms of both relevance and coverage. Our contribution is not algorithmic; instead, we describe how we exploit a collection of external data sources, ontologies, semantic web vocabularies and named entity extraction in the analysis of the underlying semantics of user queries, as well as in the semantic enrichment of the search index, thereby improving the quality of results.
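As a toy illustration of the kind of semantic enrichment described above (and not the system actually built on Hansard), the Python sketch below folds facts about a recognised entity into a simple inverted index; the member of parliament, constituency and party names are hypothetical.

    from collections import defaultdict

    # Hypothetical background knowledge; the real system draws on external
    # ontologies and named entity extraction rather than a hard-coded dict.
    KNOWLEDGE = {"Jane Smith": {"constituency": "Northtown", "party": "ExampleParty"}}

    def build_index(documents):
        """Index each document under its literal tokens plus facts about any
        recognised entity, so a query for 'northtown' also retrieves speeches
        by the member for that constituency."""
        index = defaultdict(set)
        for doc_id, text in documents.items():
            terms = set(text.lower().split())
            for entity, facts in KNOWLEDGE.items():
                if entity.lower() in text.lower():
                    terms.update(value.lower() for value in facts.values())
            for term in terms:
                index[term].add(doc_id)
        return index

    # Example: a debate mentioning Jane Smith becomes retrievable via 'northtown'.
    index = build_index({"debate-1": "Jane Smith spoke on transport funding."})
    assert "debate-1" in index["northtown"]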
Abstract:
One of the most problematic aspects of the ‘Harvard School’ of liberal international theory is its failure to fulfil its own methodological ideals. Although Harvard School liberals subscribe to a nomothetic model of explanation, in practice they employ their theories as heuristic resources. Given this practice, we should expect them neither to develop candidate causal generalizations nor to be value-neutral: their explanatory insights are underpinned by value-laden choices about which questions to address and what concepts to employ. A key question for liberal theorists, therefore, is how a theory may be simultaneously explanatory and value-oriented. The difficulties inherent in resolving this problem are manifested in Ikenberry’s writing: whilst his work on constitutionalism in international politics partially fulfils the requirements of a more satisfactory liberal explanatory theory, his recent attempts to develop prescriptions for US foreign policy reproduce, in a new form, key failings of Harvard School liberalism.
Abstract:
Background: Expression microarrays are increasingly used to obtain large-scale transcriptomic information on a wide range of biological samples. Nevertheless, there is still much debate on the best ways to process data, to design experiments and to analyse the output. Furthermore, many of the more sophisticated mathematical approaches to data analysis in the literature remain inaccessible to much of the biological research community. In this study we examine ways of extracting and analysing a large data set obtained using the Agilent long oligonucleotide transcriptomics platform, applied to a set of human macrophage and dendritic cell samples. Results: We describe and validate a series of data extraction, transformation and normalisation steps which are implemented via a new R function. Analysis of replicate normalised reference data demonstrates that intra-array variability is small (only around 2% of the mean log signal), while inter-array variability from replicate array measurements has a standard deviation (SD) of around 0.5 log2 units (6% of the mean). The common practice of working with ratios of Cy5/Cy3 signal offers little further improvement in terms of reducing error. Comparison to expression data obtained using Arabidopsis samples demonstrates that the large number of genes in each sample showing a low level of transcription reflects the real complexity of the cellular transcriptome. Multidimensional scaling is used to show that the processed data identify an underlying structure which reflects some of the key biological variables which define the data set. This structure is robust, allowing reliable comparison of samples collected over a number of years by a variety of operators. Conclusions: This study outlines a robust and easily implemented pipeline for extracting, transforming, normalising and visualising transcriptomic array data from the Agilent expression platform. The analysis is used to obtain quantitative estimates of the SD arising from experimental (non-biological) intra- and inter-array variability, and to set a lower threshold for determining whether an individual gene is expressed. The study provides a reliable basis for further, more extensive studies of the systems biology of eukaryotic cells.
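A minimal Python sketch of this flavour of processing is given below; it assumes raw intensities arrive as a NumPy array of shape (arrays, probes) and illustrates log2 transformation, per-array median centring and an inter-array SD estimate from replicates. It is an illustration only, not the authors' R function.

    import numpy as np

    def normalise(signals):
        """Log2-transform raw intensities and centre each array on its median,
        a simple stand-in for the transformation/normalisation steps described."""
        log_signals = np.log2(signals)
        return log_signals - np.median(log_signals, axis=1, keepdims=True)

    def interarray_sd(normalised):
        """Average per-probe standard deviation across replicate arrays, in log2
        units (the quantity reported above as roughly 0.5 log2 units)."""
        return float(np.mean(np.std(normalised, axis=0, ddof=1)))

    # Example with three simulated replicate arrays of 1000 probes each.
    rng = np.random.default_rng(0)
    replicates = normalise(rng.lognormal(mean=8.0, sigma=1.0, size=(3, 1000)))
    print(interarray_sd(replicates))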
Abstract:
Background: Affymetrix GeneChip arrays are widely used for transcriptomic studies in a diverse range of species. Each gene is represented on a GeneChip array by a probe-set, consisting of up to 16 probe-pairs. Signal intensities across probe-pairs within a probe-set vary in part due to different physical hybridisation characteristics of individual probes with their target labelled transcripts. We have previously developed a technique to study the transcriptomes of heterologous species based on hybridising genomic DNA (gDNA) to a GeneChip array designed for a different species, and subsequently using only those probes with good homology. Results: Here we have investigated the effects of hybridising homologous species gDNA to study the transcriptomes of species for which the arrays have been designed. Genomic DNA from Arabidopsis thaliana and rice (Oryza sativa) was hybridised to the Affymetrix Arabidopsis ATH1 and Rice Genome GeneChip arrays respectively. Probe selection based on gDNA hybridisation intensity increased the number of genes identified as significantly differentially expressed in two published studies of Arabidopsis development, and optimised the analysis of technical replicates obtained from pooled samples of RNA from rice. Conclusion: This mixed physical and bioinformatics approach can be used to optimise estimates of gene expression when using GeneChip arrays.
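The probe-selection step can be sketched as a simple intensity filter, assuming per-probe gDNA hybridisation signals are available; the probe names and threshold below are illustrative rather than the values used in the study.

    def select_probes(gdna_signal, threshold=200.0):
        """Keep only probes whose gDNA hybridisation intensity meets the threshold,
        so expression estimates rely on probes that hybridise well to the genome."""
        return {probe for probe, signal in gdna_signal.items() if signal >= threshold}

    # Example with made-up intensities: only the well-hybridising probe survives.
    print(select_probes({"AT1G01010_at_p1": 950.0, "AT1G01010_at_p2": 40.0}))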
Abstract:
The recent literature proposes many variables as significant determinants of pollution. This paper gives an overview of this literature and asks which of these factors have an empirically robust impact on water and air pollution. We apply Extreme Bound Analysis (EBA) to a panel of up to 120 countries covering the period 1960–2001. We find supportive evidence for the existence of the environmental Kuznets curve for water pollution. Furthermore, it is mainly variables capturing the economic structure of a country that affect air and water pollution.
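The sketch below shows one common way such an extreme-bounds check can be set up in Python, assuming a pandas DataFrame of country-year observations; the column handling and the use of statsmodels are assumptions for illustration, not a reconstruction of the authors' procedure.

    import itertools
    import statsmodels.api as sm

    def extreme_bounds(df, y_col, focus, candidates, k=3):
        """Regress the pollution measure on the focus variable plus every
        k-variable set of controls; the focus variable counts as robust if the
        extreme bounds of its coefficient (min of beta - 2*se, max of beta + 2*se)
        share the same sign across all specifications."""
        lows, highs = [], []
        for controls in itertools.combinations(candidates, k):
            X = sm.add_constant(df[[focus, *controls]])
            fit = sm.OLS(df[y_col], X, missing="drop").fit()
            beta, se = fit.params[focus], fit.bse[focus]
            lows.append(beta - 2 * se)
            highs.append(beta + 2 * se)
        lower, upper = min(lows), max(highs)
        return lower, upper, (lower > 0) or (upper < 0)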
Abstract:
We analyse by simulation the impact of model-selection strategies (sometimes called pre-testing) on forecast performance in both constant- and non-constant-parameter processes. Restricted, unrestricted and selected models are compared when either of the first two might generate the data. We find little evidence that strategies such as general-to-specific induce significant over-fitting, or thereby cause forecast-failure rejection rates to greatly exceed nominal sizes. Parameter non-constancies put a premium on correct specification, but in general, model-selection effects appear to be relatively small, and progressive research is able to detect the mis-specifications.
Abstract:
Analyses of simulations of the last glacial maximum (LGM) made with 17 atmospheric general circulation models (AGCMs) participating in the Paleoclimate Modelling Intercomparison Project, and a high-resolution (T106) version of one of the models (CCSR1), show that changes in the elevation of tropical snowlines (as estimated by the depression of the maximum altitude of the 0 °C isotherm) are primarily controlled by changes in sea-surface temperatures (SSTs). The correlation between the two variables, averaged for the tropics as a whole, is 95%, and remains >80% even at a regional scale. The reduction of tropical SSTs at the LGM results in a drier atmosphere and hence steeper lapse rates. Changes in atmospheric circulation patterns, particularly the weakening of the Asian monsoon system and related atmospheric humidity changes, amplify the reduction in snowline elevation in the northern tropics. Colder conditions over the tropical oceans combined with a weakened Asian monsoon could produce snowline lowering of up to 1000 m in certain regions, comparable to the changes shown by observations. Nevertheless, such large changes are not typical of all regions of the tropics. Analysis of the higher resolution CCSR1 simulation shows that differences between the free atmospheric and along-slope lapse rate can be large, and may provide an additional factor to explain regional variations in observed snowline changes.
Abstract:
Global syntheses of palaeoenvironmental data are required to test climate models under conditions different from the present. Data sets for this purpose contain data from spatially extensive networks of sites. The data are either directly comparable to model output or readily interpretable in terms of modelled climate variables. Data sets must contain sufficient documentation to distinguish between raw (primary) and interpreted (secondary, tertiary) data, to evaluate the assumptions involved in interpretation of the data, to exercise quality control, and to select data appropriate for specific goals. Four data bases for the Late Quaternary, documenting changes in lake levels since 30 kyr BP (the Global Lake Status Data Base), vegetation distribution at 18 kyr and 6 kyr BP (BIOME 6000), aeolian accumulation rates during the last glacial-interglacial cycle (DIRTMAP), and tropical terrestrial climates at the Last Glacial Maximum (the LGM Tropical Terrestrial Data Synthesis) are summarised. Each has been used to evaluate simulations of Last Glacial Maximum (LGM: 21 calendar kyr BP) and/or mid-Holocene (6 cal. kyr BP) environments. Comparisons have demonstrated that changes in radiative forcing and orography due to orbital and ice-sheet variations explain the first-order, broad-scale (in space and time) features of global climate change since the LGM. However, atmospheric models forced by 6 cal. kyr BP orbital changes with unchanged surface conditions fail to capture quantitative aspects of the observed climate, including the greatly increased magnitude and northward shift of the African monsoon during the early to mid-Holocene. Similarly, comparisons with palaeoenvironmental datasets show that atmospheric models have underestimated the magnitude of cooling and drying of much of the land surface at the LGM. The inclusion of feedbacks due to changes in ocean- and land-surface conditions at both times, and atmospheric dust loading at the LGM, appears to be required in order to produce a better simulation of these past climates. The development of Earth system models incorporating the dynamic interactions among ocean, atmosphere, and vegetation is therefore mandated by Quaternary science results as well as climatological principles. For greatest scientific benefit, this development must be paralleled by continued advances in palaeodata analysis and synthesis, which in turn will help to define questions that call for new focused data collection efforts.