72 results for sources of guidance


Relevance:

90.00%

Publisher:

Abstract:

This paper explores the effects of two main sources of innovation, intramural and external R&D, on the productivity level of a sample of 3,267 Catalan firms. The data set is based on the official innovation survey of Catalonia, which formed part of the Spanish sample of CIS4 and covers the years 2002-2004. We compare empirical results obtained with standard OLS and with quantile regression techniques, in both manufacturing and services industries. The quantile regression results suggest different patterns for the two innovation sources as we move across conditional quantiles: the elasticity of intramural R&D with respect to productivity decreases as we move up to higher productivity levels in both the manufacturing and services sectors, while the effect of external R&D rises in high-technology industries but is more ambiguous in low-technology and services industries.

Relevance:

90.00%

Publisher:

Abstract:

This paper uses a panel data sample selection model to correct for selection bias in the analysis of longitudinal labor market data for married women in European countries. We estimate the female wage equation in a framework of unbalanced panel data models with sample selection. The female wage equations have several potential sources of …
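The abstract does not spell out the estimator; as a rough illustration of the idea behind sample-selection corrections in wage equations, the sketch below runs a Heckman-style two-step on made-up cross-sectional data (a probit participation equation, then a wage regression augmented with the inverse Mills ratio). The variable names and simulated data are assumptions, and the paper's actual model is a panel-data selection estimator rather than this simple two-step.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

# Hypothetical data: one row per woman, with wages observed only for participants.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "educ": rng.normal(12, 2, n),
    "exper": rng.uniform(0, 30, n),
    "n_children": rng.poisson(1.2, n),   # exclusion restriction: affects participation, not wages
})
latent = 0.3 * df["educ"] - 0.5 * df["n_children"] + rng.normal(size=n)
df["works"] = (latent > 0).astype(int)
df["log_wage"] = np.where(df["works"] == 1,
                          1.0 + 0.08 * df["educ"] + 0.02 * df["exper"] + rng.normal(0, 0.3, n),
                          np.nan)

# Step 1: probit participation equation and the inverse Mills ratio.
Z = sm.add_constant(df[["educ", "exper", "n_children"]])
probit = sm.Probit(df["works"], Z).fit(disp=0)
xb = Z @ probit.params                     # linear index from the probit
df["mills"] = norm.pdf(xb) / norm.cdf(xb)  # inverse Mills ratio

# Step 2: wage equation on the selected sample, augmented with the Mills ratio.
sel = df[df["works"] == 1]
X = sm.add_constant(sel[["educ", "exper", "mills"]])
wage_eq = sm.OLS(sel["log_wage"], X).fit()
print(wage_eq.params.round(3))
```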

Relevance:

90.00%

Publisher:

Abstract:

This paper explores the effects of two main sources of innovation, intramural and external R&D, on the productivity level of a sample of 3,267 Catalan firms. The data set is based on the official innovation survey of Catalonia, which formed part of the Spanish sample of CIS4 and covers the years 2002-2004. We compare empirical results obtained with standard OLS and with quantile regression techniques, in both manufacturing and services industries. The quantile regression results suggest different patterns for the two innovation sources as we move across conditional quantiles: the elasticity of intramural R&D with respect to productivity decreases as we move up to higher productivity levels in both the manufacturing and services sectors, while the effect of external R&D rises in high-technology industries but is more ambiguous in low-technology and knowledge-intensive services. JEL codes: O300, C100, O140. Keywords: innovation sources, R&D, productivity, quantile regression.
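To make the OLS versus quantile-regression comparison described above concrete, here is a minimal sketch that estimates the elasticity of (log) productivity with respect to internal and external R&D at several conditional quantiles. The data and variable names are invented for illustration and are not the CIS4 sample.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-level data in logs, so coefficients read as elasticities.
rng = np.random.default_rng(1)
n = 3000
df = pd.DataFrame({
    "log_intramural_rd": rng.normal(4.0, 1.0, n),
    "log_external_rd": rng.normal(3.0, 1.0, n),
})
df["log_productivity"] = (2.0
                          + 0.15 * df["log_intramural_rd"]
                          + 0.05 * df["log_external_rd"]
                          + rng.normal(0, 0.8, n))

formula = "log_productivity ~ log_intramural_rd + log_external_rd"

# Conditional-mean benchmark.
ols = smf.ols(formula, df).fit()
print("OLS:", ols.params.round(3).to_dict())

# Conditional quantiles: how the R&D elasticities change along the productivity distribution.
for q in (0.10, 0.25, 0.50, 0.75, 0.90):
    qr = smf.quantreg(formula, df).fit(q=q)
    print(f"q={q:.2f}:", qr.params.round(3).to_dict())
```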

Relevance:

90.00%

Publisher:

Abstract:

The autonomous regulatory agency has recently become the 'appropriate model' of governance across countries and sectors. The dynamics of this process are captured in our data set, which covers the creation of agencies in 48 countries and 16 sectors since the 1920s. Adopting a diffusion approach to explain this broad process of institutional change, we explore the role of countries and sectors as sources of institutional transfer at different stages of the diffusion process. We demonstrate how the restructuring of national bureaucracies unfolds via four different channels of institutional transfer. Our results challenge theoretical approaches that overemphasize the national dimension in global diffusion and are insensitive to the stages of the diffusion process. Further advances in the study of diffusion depend, we assert, on the ability to apply both cross-sectoral and cross-national analysis within the same research design and to incorporate channels of transfer with different causal mechanisms at different stages of the diffusion process.

Relevance:

90.00%

Publisher:

Abstract:

This paper provides evidence on the sources of differences in inequalities in educational scores across European Union member states, by decomposing them into their determining factors. Using PISA data from the 2000 and 2006 waves, the paper shows that inequalities emerge in all countries and in both periods, but decreased in Germany whilst increasing in France and Italy. The decomposition shows that educational inequalities reflect not only background-related inequality but, above all, schools' characteristics. The findings allow policy makers to target areas that may contribute to reducing educational inequalities.
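The abstract does not detail the decomposition method; one elementary ingredient of such analyses is a between-school versus within-school variance decomposition of test scores, sketched below on made-up data (all names and numbers are illustrative, not PISA figures).

```python
import numpy as np
import pandas as pd

# Hypothetical pupil-level scores nested in schools.
rng = np.random.default_rng(2)
schools = np.repeat(np.arange(100), 30)              # 100 schools, 30 pupils each
school_effect = rng.normal(0, 40, 100)[schools]      # between-school component
scores = 500 + school_effect + rng.normal(0, 80, schools.size)
df = pd.DataFrame({"school": schools, "score": scores})

total_var = df["score"].var(ddof=0)
school_means = df.groupby("school")["score"].transform("mean")
between_var = school_means.var(ddof=0)               # variance of school means
within_var = (df["score"] - school_means).var(ddof=0)

print(f"total variance:  {total_var:8.1f}")
print(f"between schools: {between_var:8.1f} ({between_var / total_var:.1%})")
print(f"within schools:  {within_var:8.1f} ({within_var / total_var:.1%})")
```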

Relevance:

90.00%

Publisher:

Abstract:

The aim of this research is to provide evidence on the sources of agglomeration economies for the Spanish case. Of all the approaches taken in the literature to measure agglomeration economies, we analyse them through the location decisions of manufacturing firms. The recent literature has highlighted that an analysis based on the localization/urbanization dichotomy (relationships within a single industry) is not sufficient to understand agglomeration economies. Relationships between different industries, however, are significant when examining why firms belonging to different industries locate next to one another. With this in mind, we seek to identify which relationships between industries can explain coagglomeration. To do so, we focus on relationships between industries defined in terms of Marshall's agglomeration mechanisms, namely labor market pooling, input sharing and knowledge spillovers. We capture labor market pooling through the extent to which two industries employ the same types of workers (occupational classification). With the second Marshallian mechanism, input sharing, we capture whether two industries have a buyer/seller relationship. Finally, for knowledge spillovers we consider whether two industries use the same technologies. In order to capture all the effects of the agglomeration mechanisms in Spain, we work with two geographical scales, municipalities and local labor markets. The existing literature has never agreed on the geographical scale at which the Marshallian mechanisms operate best, so we cover all the potential geographical units.
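The abstract does not commit to a particular coagglomeration measure; as one common illustration, the sketch below computes a simplified Ellison-Glaeser-style pairwise coagglomeration index from hypothetical employment counts by area. The plant-level Herfindahl correction of the full index is omitted, and all figures are made up.

```python
import numpy as np

def coagglomeration_index(emp_1, emp_2, emp_total):
    """Simplified Ellison-Glaeser-style pairwise coagglomeration index.

    emp_1, emp_2 : employment of industries 1 and 2 by geographical area
    emp_total    : aggregate employment by area
    Positive values indicate that the two industries co-locate more than
    aggregate activity would suggest.
    """
    s1 = emp_1 / emp_1.sum()          # industry 1's spatial distribution
    s2 = emp_2 / emp_2.sum()          # industry 2's spatial distribution
    x = emp_total / emp_total.sum()   # overall spatial distribution
    return np.sum((s1 - x) * (s2 - x)) / (1.0 - np.sum(x ** 2))

# Hypothetical employment counts for 5 areas (e.g. municipalities or local labor markets).
emp_a = np.array([120.0, 10.0, 15.0, 200.0, 5.0])
emp_b = np.array([100.0, 20.0, 10.0, 180.0, 8.0])
emp_all = np.array([5000.0, 4000.0, 3000.0, 6000.0, 2000.0])

print(f"coagglomeration(a, b) = {coagglomeration_index(emp_a, emp_b, emp_all):.4f}")
```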

Relevance:

90.00%

Publisher:

Abstract:

This paper explores how absorptive capacity affects the innovative performance and productivity dynamics of Spanish firms. A firm's efficiency levels are measured using two variables: labour productivity and total factor productivity (TFP). The theoretical framework is based on the seminal contributions of Cohen and Levinthal (1989, 1990) on absorptive capacity, and the applied framework is based on the four-stage structural model proposed by Crépon, Duguet and Mairesse (1998) for modelling the determinants of R&D, the effects of R&D activities on innovation outputs, and the impact of innovation on firm productivity. The present study uses a two-stage structural model. In the first stage, a probit estimation is used to investigate how the sources of R&D, the absorptive capacity and a vector of the firm's individual features influence the firm's likelihood of developing product or process innovations. In the second stage, a quantile regression is used to analyse the effect of R&D sources, absorptive capacity and firm characteristics on productivity. This method shows the elasticity of each exogenous variable on productivity according to the firms' levels of efficiency, and thus allows us to distinguish between firms that are close to the technological frontier and those that are further away from it. We use an extensive firm-level panel of 5,575 firms for the 2004-2009 period. The results show that internal absorptive capacity has a strong impact on the productivity of firms, whereas the role of external absorptive capacity differs according to the nature of each industry and the distance of firms from the technological frontier. Keywords: R&D sources, innovation strategies, absorptive capacity, technological distance, quantile regression.
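A bare-bones sketch of the two-stage structure described above, with hypothetical variable names and simulated data: a probit for the probability of innovating, followed by quantile regressions of productivity to contrast firms near and far from the frontier. It illustrates the general approach, not the paper's CDM-based estimator.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical firm-level data; variable names are illustrative, not the paper's.
rng = np.random.default_rng(3)
n = 5000
df = pd.DataFrame({
    "internal_rd": rng.binomial(1, 0.4, n),
    "external_rd": rng.binomial(1, 0.3, n),
    "absorptive_capacity": rng.normal(0, 1, n),
    "log_size": rng.normal(3, 1, n),
})
latent = (0.8 * df["internal_rd"] + 0.4 * df["external_rd"]
          + 0.5 * df["absorptive_capacity"] + rng.normal(size=n))
df["innovates"] = (latent > 0.5).astype(int)
df["log_productivity"] = (2.0 + 0.3 * df["innovates"] + 0.2 * df["absorptive_capacity"]
                          + 0.1 * df["log_size"] + rng.normal(0, 0.6, n))

# Stage 1: probit for the probability of introducing a product or process innovation.
X1 = sm.add_constant(df[["internal_rd", "external_rd", "absorptive_capacity", "log_size"]])
stage1 = sm.Probit(df["innovates"], X1).fit(disp=0)
print(stage1.params.round(3))

# Stage 2: quantile regressions of productivity at low, median and high quantiles.
formula = "log_productivity ~ innovates + absorptive_capacity + log_size"
for q in (0.10, 0.50, 0.90):
    qr = smf.quantreg(formula, df).fit(q=q)
    print(f"q={q:.2f}:", qr.params.round(3).to_dict())
```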

Relevance:

90.00%

Publisher:

Abstract:

In an earlier investigation (Burger et al., 2000), five sediment cores near the Rodrigues Triple Junction in the Indian Ocean were studied by applying classical statistical methods (fuzzy c-means clustering, linear mixing model, principal component analysis) for the extraction of endmembers and for evaluating the spatial and temporal variation of geochemical signals. The marine geologists expected three main factors of sedimentation: a volcano-genetic, a hydro-hydrothermal and an ultra-basic factor. The display of fuzzy membership values and/or factor scores versus depth provided consistent results for two factors only; the ultra-basic component could not be identified. The reason for this may be that only traditional statistical methods were applied, i.e. the untransformed components were used together with the cosine-theta coefficient as a similarity measure. During the last decade, considerable progress in compositional data analysis was made, and many case studies were published using new tools for the exploratory analysis of these data. It therefore makes sense to check whether the application of suitable data transformations, reduction of the D-part simplex to two or three factors and visual interpretation of the factor scores would lead to a revision of the earlier results and to answers to open questions. In this paper we follow the lines of Tolosana-Delgado et al. (2005), starting with a problem-oriented interpretation of the biplot scattergram, extracting compositional factors, ilr-transforming the components and visualizing the factor scores in a spatial context: the compositional factors are plotted versus depth (time) of the core samples in order to facilitate the identification of the expected sources of the sedimentary process. Keywords: compositional data analysis, biplot, deep sea sediments
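For readers unfamiliar with the ilr transformation mentioned above, the sketch below computes pivot (Helmert-type) ilr coordinates for a small set of made-up 4-part compositions; it illustrates the transform only, not the paper's factor-extraction workflow.

```python
import numpy as np

def ilr(compositions):
    """Isometric log-ratio transform of D-part compositions (rows sum to 1),
    using the standard pivot (Helmert-type) orthonormal basis."""
    x = np.asarray(compositions, dtype=float)
    d = x.shape[1]
    out = np.empty((x.shape[0], d - 1))
    for i in range(1, d):
        gm = np.exp(np.mean(np.log(x[:, :i]), axis=1))  # geometric mean of the first i parts
        out[:, i - 1] = np.sqrt(i / (i + 1)) * np.log(gm / x[:, i])
    return out

# Hypothetical 4-part geochemical compositions (closed to 1).
raw = np.array([
    [40.0, 30.0, 20.0, 10.0],
    [10.0, 25.0, 35.0, 30.0],
    [55.0, 15.0, 20.0, 10.0],
])
comp = raw / raw.sum(axis=1, keepdims=True)
print(ilr(comp).round(3))
```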

Relevance:

90.00%

Publisher:

Abstract:

At CoDaWork'03 we presented work on the analysis of archaeological glass compositional data. Such data typically consist of geochemical compositions involving 10-12 variables and approximate completely compositional data if the main component, silica, is included. We suggested that what has been termed 'crude' principal component analysis (PCA) of standardized data often identified interpretable pattern in the data more readily than analyses based on log-ratio transformed data (LRA). The fundamental problem is that, in LRA, minor oxides with high relative variation, which may not be structure carrying, can dominate an analysis and obscure pattern associated with variables present at higher absolute levels. We investigate this further using sub-compositional data relating to archaeological glasses found on Israeli sites. A simple model for glass-making is that it is based on a 'recipe' consisting of two 'ingredients', sand and a source of soda. Our analysis focuses on the sub-composition of components associated with the sand source. A 'crude' PCA of standardized data shows two clear compositional groups that can be interpreted in terms of different recipes being used at different periods, reflected in absolute differences in the composition. LRA can be undertaken either by normalizing the data or by defining a 'residual'. In either case, after some 'tuning', these groups are recovered. The results from the normalized LRA are interpreted differently, as showing that the source of sand used to make the glass differed. These results are complementary: one relates to the recipe used, the other to the composition (and presumed sources) of one of the ingredients. It seems to be axiomatic in some expositions of LRA that statistical analysis of compositional data should focus on relative variation via the use of ratios. Our analysis suggests that absolute differences can also be informative.
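The contrast between 'crude' PCA of standardized data and a log-ratio analysis can be sketched as follows, using a centred log-ratio (clr) transform before PCA as a stand-in for the LRA step; the compositions are simulated and the code is an illustration, not the authors' analysis.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def clr(x):
    """Centred log-ratio transform: log of each part relative to the row geometric mean."""
    logx = np.log(x)
    return logx - logx.mean(axis=1, keepdims=True)

# Hypothetical glass sub-compositions (rows sum to 1).
rng = np.random.default_rng(4)
comp = rng.dirichlet([8.0, 4.0, 2.0, 1.0, 0.5], size=60)

# 'Crude' PCA: standardize the raw components and extract principal components.
crude_scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(comp))

# Log-ratio approach: apply the clr transform first, then PCA (the basis of the compositional biplot).
lra_scores = PCA(n_components=2).fit_transform(clr(comp))

print("crude PCA scores (first 3 rows):\n", crude_scores[:3].round(3))
print("clr-PCA scores (first 3 rows):\n", lra_scores[:3].round(3))
```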

Relevance:

90.00%

Publisher:

Abstract:

Quantitative or algorithmic trading is the automation of investment decisions that obey a fixed or dynamic set of rules to determine trading orders. It has increasingly made its way up to around 70% of the trading volume of some of the biggest financial markets, such as the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it, due to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a subject where publications are scarce and infrequent. We review the basic mathematical concepts needed for modelling financial markets, such as stochastic processes, stochastic integration and basic models for price and spread dynamics, which are necessary for building quantitative strategies. We also contrast these models with real market data sampled at minute frequency from the Dow Jones Industrial Average (DJIA). Quantitative strategies try to exploit two types of behaviour: trend following or mean reversion. The former is grouped into the so-called technical models and the latter into so-called pairs trading. Technical models have been discarded by financial theoreticians, but we show that they can be properly cast as a well-defined scientific predictor if the signal they generate passes the test of being a Markov time; that is, we can tell whether the signal has occurred or not by examining the information up to the current time, or, more technically, if the event is F_t-measurable. On the other hand, the concept of pairs trading, or market-neutral strategy, is fairly simple. However, it can be cast in a variety of mathematical models, ranging from a method based on a simple Euclidean distance, through a co-integration framework, to stochastic differential equations such as the well-known mean-reverting Ornstein-Uhlenbeck process and its variations. A model for forecasting any economic or financial magnitude could be defined with scientific rigour but could also lack any economic value and be considered useless from a practical point of view. This is why this project could not be complete without a backtest of the mentioned strategies. Conducting a useful and realistic backtest is by no means a trivial exercise, since the 'laws' that govern financial markets are constantly evolving in time. This is the reason why we emphasize the calibration process of the strategies' parameters to adapt to the given market conditions. We find that the parameters of technical models are more volatile than their counterparts from market-neutral strategies, and that calibration must be done with high-frequency sampling in order to track the current market situation. As a whole, the goal of this project is to provide an overview of a quantitative approach to investment, reviewing basic strategies and illustrating them by means of a backtest with real financial market data. The sources of the data used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numerical computations and graphics used and shown in this project were implemented in MATLAB from scratch as part of this thesis. No other mathematical or statistical software was used.
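As a small illustration of the market-neutral idea discussed above, the sketch below (in Python rather than the thesis's MATLAB) builds a pairs-trading signal from the rolling z-score of a regression-based spread; the simulated prices, the entry thresholds and the 60-observation window are arbitrary choices, not the calibrated parameters of the thesis.

```python
import numpy as np
import pandas as pd

# Hypothetical price series for two co-moving assets; the thesis's real inputs
# would be Bloomberg intraday and Yahoo! daily data.
rng = np.random.default_rng(5)
n = 500
common = np.cumsum(rng.normal(0, 1, n))
px_a = pd.Series(100 + common + rng.normal(0, 0.5, n))
px_b = pd.Series(50 + 0.5 * common + rng.normal(0, 0.5, n))

# Hedge ratio from a simple regression of A on B, then the spread.
beta = np.polyfit(px_b, px_a, 1)[0]
spread = px_a - beta * px_b

# Rolling z-score of the spread: the basic mean-reversion signal.
window = 60
z = (spread - spread.rolling(window).mean()) / spread.rolling(window).std()

# Enter when the spread is stretched, stay flat otherwise.
position = pd.Series(0.0, index=z.index)
position[z > 2.0] = -1.0   # spread too wide: short A, long B
position[z < -2.0] = 1.0   # spread too narrow: long A, short B
print(position.value_counts())
```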

Relevance:

90.00%

Publisher:

Abstract:

The precise relationship between the positive psychological state of work (i.e. engagement) and the negative psychological state (i.e. burnout) has recently received research attention. Some view these as opposite states on the same continuum, while others take the position that they represent different biobehavioural spheres. This study expands our knowledge of the phenomena of engagement and burnout by analysing their separate and joint manifestations. Using a sample of 2,094 nurses, respondents were analysed to determine the configuration of antecedents leading to separate and joint states of engagement and burnout, the configuration of engagement and burnout leading to mental, physical and organizational outcomes, and the relationship between engagement, burnout and the risk of metabolic syndrome. The study found that while both work engagement and burnout are highly correlated with health and organizational outcomes, the relative statistical power of burnout gives it a greater direct effect on health. It is important for workers and managers to address the sources of burnout before addressing the positive psychological aspects of worker engagement.

Relevance:

90.00%

Publisher:

Abstract:

Learning object economies are marketplaces for the sharing and reuse of learning objects (LO). There are many motivations for stimulating the development of the LO economy. The main one is the possibility of providing the right content, at the right time, to the right learner, according to adequate quality standards and in the context of a lifelong learning process; this is, in fact, also the main objective of education. However, some barriers to the development of an LO economy, such as the granularity and editability of LO, must be overcome, and some enablers, such as learning design generation and the use of standards, must be promoted. In this article, we introduce the integration of distributed learning object repositories (DLOR) as sources of LO that can be placed in adaptive learning designs to assist teachers' design work. Two main issues arise as a result: how to access distributed LO, and where to place the LO in the learning design. To address these issues, we introduce two processes: LORSE, a distributed LO searching process, and LOOK, a micro-context-based positioning process. Using these processes, teachers were able to reuse LO from different sources to semi-automatically generate an adaptive learning design without leaving their virtual environment. A layered evaluation yielded good results for the process of placing learning objects from controlled learning object repositories into a learning design, and allowed educators to identify the open issues that must be addressed when uncontrolled learning object repositories are used for this purpose. We also verified users' satisfaction with our solution.

Relevance:

90.00%

Publisher:

Abstract:

Coffee and cocoa represent the main sources of income for small farmers in the Northern Amazon Region of Ecuador. The provinces of Orellana and Sucumbios, as border areas, have benefited from investments made by many public and private institutions. Many of the projects carried out in the area have been aimed at energising the production of coffee and cocoa, strengthening the producers' associations and providing commercialisation infrastructure. Improving the quality of life of this population, threatened by poverty and by high migration flows mainly from Colombia, is a significant challenge. This paper presents research highlighting the importance of associative commercialisation for raising income from coffee and cocoa. The research draws on primary information obtained during fieldwork and on official information from the Ministry of Agriculture. The study presents an overview of current organisational structures, initiatives in associative commercialisation, stockpiling infrastructure and ownership regimes, as well as estimates of production and income for both 'robusta' coffee and national cocoa. The analysis of the main constraints presents different alternatives for the implementation of public land policies. These policies are aimed at mitigating the problems associated with the organisational structure of the producers, with the processes of commercialisation and with environmental aspects, among others.

Relevance:

90.00%

Publisher:

Abstract:

Background: Choosing an adequate measurement instrument depends on the proposed use of the instrument, the concept to be measured, the measurement properties (e.g. internal consistency, reproducibility, content and construct validity, responsiveness, and interpretability), the requirements, the burden for subjects, and the costs of the available instruments. As far as measurement properties are concerned, there are no sufficiently specific standards for evaluating the measurement properties of instruments to measure health status, and no explicit criteria for what constitutes good measurement properties. In this paper we describe the protocol for the COSMIN study, the objective of which is to develop a checklist that contains COnsensus-based Standards for the selection of health Measurement INstruments, including explicit criteria for satisfying these standards. We focus on evaluative health-related patient-reported outcomes (HR-PROs), i.e. patient-reported health measurement instruments used in a longitudinal design as an outcome measure, excluding health-care-related PROs such as satisfaction with care or adherence. The COSMIN standards will be made available in the form of an easily applicable checklist.

Method: An international Delphi study will be performed to reach consensus on which measurement properties should be assessed and how, and on criteria for good measurement properties. Two sources of input will be used for the Delphi study: (1) a systematic review of properties, standards and criteria of measurement properties found in systematic reviews of measurement instruments, and (2) an additional literature search of methodological articles presenting a comprehensive checklist of standards and criteria. The Delphi study will consist of four (written) Delphi rounds, with approximately 30 expert panel members with different backgrounds in clinical medicine, biostatistics, psychology, and epidemiology. The final checklist will subsequently be field-tested by assessing its inter-rater reproducibility.

Discussion: Since the study will mainly be anonymous, problems that are commonly encountered in face-to-face group meetings, such as the dominance of certain persons in the communication process, will be avoided. By performing a Delphi study and involving many experts, the likelihood that the checklist will have sufficient credibility to be accepted and implemented will increase.

Relevance:

90.00%

Publisher:

Abstract:

Background: Systematic approaches for identifying proteins involved in different types of cancer are needed. Experimental techniques such as microarrays are being used to characterize cancer, but validating their results can be a laborious task. Computational approaches are used to prioritize genes putatively involved in cancer, usually by further analyzing experimental data.

Results: We implemented a systematic method using the PIANA software that predicts cancer involvement of genes by integrating heterogeneous datasets. Specifically, we produced lists of genes likely to be involved in cancer by relying on: (i) protein-protein interactions; (ii) differential expression data; and (iii) structural and functional properties of cancer genes. The integrative approach that combines multiple sources of data obtained positive predictive values ranging from 23% (on a list of 811 genes) to 73% (on a list of 22 genes), outperforming the use of any of the data sources alone. We analyze a list of 20 cancer gene predictions, finding that most of them have recently been linked to cancer in the literature.

Conclusion: Our approach to identifying and prioritizing candidate cancer genes can be used to produce lists of genes likely to be involved in cancer. Our results suggest that differential expression studies yielding high numbers of candidate cancer genes can be filtered using protein interaction networks.
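The integration step can be pictured with a toy scoring scheme like the one below, where a candidate gene is ranked by how many independent evidence sources support it; the gene names and the simple counting rule are illustrative assumptions and do not reproduce the PIANA-based method.

```python
import pandas as pd

# Hypothetical evidence for a handful of genes; PIANA's real inputs are protein-protein
# interactions, differential expression data and structural/functional properties of cancer genes.
genes = pd.DataFrame({
    "gene": ["G1", "G2", "G3", "G4", "G5"],
    "interacts_with_known_cancer_gene": [1, 1, 0, 1, 0],
    "differentially_expressed": [1, 0, 1, 1, 0],
    "has_cancer_like_properties": [1, 1, 0, 0, 1],
}).set_index("gene")

# Simple integration rule: rank candidates by how many independent evidence
# sources support them (a counting heuristic, not PIANA's algorithm).
genes["support"] = genes.sum(axis=1)
candidates = genes.sort_values("support", ascending=False)
print(candidates)

# Keeping only genes supported by all three sources mimics filtering a long
# differential-expression list with the interaction network and gene properties.
print(candidates[candidates["support"] == 3].index.tolist())
```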