19 results for The performance of Christian and pagan storyworlds
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
Recent research on the economic performance of women-controlled firms suggests that their underperformance may not result from differences in the managerial ability of women as compared to men, but rather from different levels of start-up resources. Using accounting data, this paper examines the effects that selected start-up conditions have on the economic performance observed in a sample of 4,450 Spanish manufacturing firms. The results indicate significant differences in initial conditions, showing that women-controlled firms start with lower levels of assets and fewer employees, which has implications for their economic performance.
Abstract:
We present a theoretical framework for determining the short- and long-run effects of infrastructure. While the short-run effects have been the focus of most previous studies, here we derive long-run elasticities by taking into account the adjustment of quasi-fixed inputs to their optimum levels. By considering the impact of infrastructure on private investment decisions, we observe how, apart from its direct effect on costs in the short run, infrastructure exerts an indirect influence in the long run through its effect on private capital. The model is applied to manufacturing industries in the Spanish regions.
Abstract:
Comments on the following article: K. J. Vinoy, J. K. Abraham, and V. K. Varadan, “On the relationship between fractal dimension and the performance of multi-resonant dipole antennas using Koch curves,” IEEE Transactions on Antennas and Propagation, 2003, vol. 51, pp. 2296–2303.
Abstract:
Following earlier work by Audretsch et al. (2002), we assume that an optimal size-class structure exists in terms of achieving maximal economic growth rates. Such an optimal structure is likely to exist as economies need a balance between the core competences of large firms (such as exploitation of economies of scale) and those of smaller firms (such as flexibility and exploration of new ideas). Accordingly, changes in size-class structure (i.e., changes in the relative shares in economic activity accounted for by micro, small, medium-sized and large firms) may affect macro-economic growth. Using a unique database of the EU-27 countries for the period 2002-2008, covering five broad sectors of economic activity and four size-classes, we find empirical evidence suggesting that, on average for these countries over this period, the share of micro and large firms may have been ‘above optimum’ (particularly in lower-income EU countries) whereas the share of medium-sized firms may have been ‘below optimum’ (particularly in higher-income EU countries). This evidence suggests that the transition from a ‘managed’ to an ‘entrepreneurial’ economy (Audretsch and Thurik, 2001) has not yet been completed in all countries of the EU-27. Keywords: small firms, large firms, size-classes, macro-economic performance
Abstract:
This study aims to clarify the association between cumulative MDMA use and cognitive dysfunction, and the potential role of candidate genetic polymorphisms in explaining individual differences in the cognitive effects of MDMA. Gene polymorphisms related to reduced serotonin function, poor competency of executive control and memory consolidation systems, and high enzymatic activity linked to bioactivation of MDMA to neurotoxic metabolites may help explain variations in the cognitive impact of MDMA across regular users of this drug. Sixty ecstasy polydrug users, 110 cannabis users and 93 non-drug users were assessed using cognitive measures of Verbal Memory (California Verbal Learning Test, CVLT), Visual Memory (Rey-Osterrieth Complex Figure Test, ROCFT), Semantic Fluency, and Perceptual Attention (Symbol Digit Modalities Test, SDMT). Participants were also genotyped for polymorphisms within the 5HTT, 5HTR2A, COMT, CYP2D6, BDNF, and GRIN2B genes using polymerase chain reaction and TaqMan polymerase assays. Lifetime cumulative MDMA use was significantly associated with poorer performance on visuospatial memory and perceptual attention. In heavy MDMA users (>100 tablets lifetime use), candidate gene polymorphisms interacted with MDMA use in explaining individual differences in cognitive performance between MDMA users and controls. MDMA users carrying COMT val/val and SERT s/s genotypes performed worse than paired controls on visuospatial attention and memory, and MDMA users who were CYP2D6 ultra-rapid metabolizers performed worse than controls on semantic fluency. Both lifetime MDMA use and gene-related individual differences influence cognitive dysfunction in ecstasy users.
Abstract:
This paper analyses the performance of companies’ R&D and innovation and the effects of intra- and inter-industry R&D spillovers on firms’ productivity in Catalonia. The paper deals simultaneously with the performance of manufacturing and service firms, with the aim of highlighting the growing role of knowledge-intensive services in promoting innovation and productivity gains. We find that intra-industry R&D spillovers have an important effect on the productivity level of manufacturing firms, and that inter-industry R&D spillovers related to computer and software services also play an important role, especially in high-tech manufacturing industries. The main conclusion is that the traditional classification of manufactured goods and services no longer makes sense in the ‘knowledge economy’, and regional policy makers in Catalonia will have to design policies that favour inter-industry R&D flows, especially from high-tech services.
Abstract:
This article provides a theoretical and empirical analysis of a firm's optimal R&D strategy choice. In this paper a firm's R&D strategy is assumed to be endogenous and allowed to depend on both internal firm characteristics and external factors. Firms choose between two strategies: either they engage in R&D, or they abstain from own R&D and imitate the outcomes of innovators. In the theoretical model this yields three types of equilibria, in which either all firms innovate, some firms innovate and others imitate, or no firm innovates. Firms' equilibrium strategies crucially depend on external factors. We find that the efficiency of intellectual property rights protection positively affects firms' incentives to engage in R&D, while competitive pressure has a negative effect. In addition, smaller firms are found to be more likely to become imitators when the product is homogeneous and the level of spillovers is high. These results are supported by empirical evidence for German firms from manufacturing and services sectors. Regarding social welfare, our results indicate that strengthening intellectual property protection can have an ambiguous effect. In markets characterized by a high rate of innovation, a reduction of intellectual property rights protection can discourage innovative performance substantially. However, a reduction of patent protection can also increase social welfare because it may induce imitation. This indicates that policy issues such as the optimal length and breadth of patent protection cannot be resolved without taking into account specific market and firm characteristics. Journal of Economic Literature Classification Numbers: C35, D43, L13, L22, O31. Keywords: innovation; imitation; spillovers; product differentiation; market competition; intellectual property rights protection.
Abstract:
This paper presents our investigation of the iterative decoding performance of some sparse-graph codes on block-fading Rayleigh channels. The considered code ensembles are standard LDPC codes and Root-LDPC codes, which have been shown to attain full transmission diversity. We study the iterative threshold performance of those codes as a function of the fading gains of the transmission channel and propose a numerical approximation of the iterative threshold versus fading gains, for both LDPC and Root-LDPC codes. Also, we show analytically that, in the case of 2 fading blocks, the iterative threshold of Root-LDPC codes is proportional to (α1α2)^(-1), where α1 and α2 are the corresponding fading gains. From this result, the full diversity property of Root-LDPC codes immediately follows.
Abstract:
Most methods for small-area estimation are based on composite estimators derived from design- or model-based methods. A composite estimator is a linear combination of a direct and an indirect estimator with weights that usually depend on unknown parameters which need to be estimated. Although model-based small-area estimators are usually based on random-effects models, the assumption of fixed effects is at face value more appropriate. Model-based estimators are justified by the assumption of random (exchangeable) area effects; in practice, however, areas are not exchangeable. In the present paper we empirically assess the quality of several small-area estimators in the setting in which the area effects are treated as fixed. We consider two settings: one that draws samples from a theoretical population, and another that draws samples from an empirical population of a labor force register maintained by the National Institute of Social Security (NISS) of Catalonia. We distinguish two types of composite estimators: (a) those that use weights involving area-specific estimates of bias and variance; and (b) those that use weights involving a common variance and a common squared-bias estimate for all the areas. We assess their precision and discuss alternatives for optimizing composite estimation in applications.
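The composite estimator described in this abstract, a linear combination of a direct and an indirect estimator, can be sketched minimally in Python. The weight below minimizes an estimated mean squared error under the simplifying assumption that the indirect estimator's error is dominated by its squared bias; all numeric values are hypothetical, not from the paper:

```python
import numpy as np

def composite_estimate(direct, indirect, var_direct, bias_sq_indirect):
    # Weight w minimizing the estimated MSE of w*direct + (1-w)*indirect,
    # assuming the indirect estimator's variance is negligible relative
    # to its squared bias (a simplification for illustration).
    w = bias_sq_indirect / (var_direct + bias_sq_indirect)
    return w * direct + (1.0 - w) * indirect

# Hypothetical small-area figures: a noisy direct survey estimate and a
# stable but biased indirect (synthetic) estimate.
est = composite_estimate(direct=120.0, indirect=100.0,
                         var_direct=400.0, bias_sq_indirect=100.0)
print(round(est, 1))  # → 104.0 (w = 0.2, leaning on the stabler estimator)
```

The two composite-estimator variants the paper distinguishes differ only in whether `var_direct` and `bias_sq_indirect` are estimated per area or pooled across all areas.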
Abstract:
Revenue management (RM) is a complicated business process that can best be described as control of sales (using prices, restrictions, or capacity), usually using software as a tool to aid decisions. RM software can play a mere informative role, supplying analysts with formatted and summarized data which they use to make control decisions (setting a price or allocating capacity for a price point), or, at the other extreme, play a deeper role, automating the decision process completely. The RM models and algorithms in the academic literature by and large concentrate on the latter, completely automated, level of functionality. A firm considering using a new RM model or RM system needs to evaluate its performance. Academic papers justify the performance of their models using simulations, where customer booking requests are simulated according to some process and model, and the revenue performance of the algorithm is compared to an alternate set of algorithms. Such simulations, while an accepted part of the academic literature, and indeed providing research insight, often lack credibility with management. Even methodologically, they are usually flawed, as the simulations only test "within-model" performance, and say nothing as to the appropriateness of the model in the first place. Even simulations that test against alternate models or competition are limited by their inherent need to fix some model as the universe for their testing. These problems are exacerbated with RM models that attempt to model customer purchase behavior or competition, as the right models for competitive actions or customer purchases remain somewhat of a mystery, or at least with no consensus on their validity. How then to validate a model? Putting it another way, we want to show that a particular model or algorithm is the cause of a certain improvement to the RM process compared to the existing process.
We take care to emphasize that we want to prove the said model as the cause of performance, and to compare against an (incumbent) process rather than against an alternate model. In this paper we describe a "live" testing experiment that we conducted at Iberia Airlines on a set of flights. A set of competing algorithms controlled a set of flights during adjacent weeks, and their behavior and results were observed over a relatively long period of time (9 months). In parallel, a group of control flights was managed using the traditional mix of manual and algorithmic control (the incumbent system). Such "sandbox" testing, while common at many large internet search and e-commerce companies, is relatively rare in the revenue management area. Sandbox testing has an undisputable model of customer behavior, but the experimental design and analysis of results is less clear. In this paper we describe the philosophy behind the experiment, the organizational challenges, the design and setup of the experiment, and outline the analysis of the results. This paper is a complement to a (more technical) related paper that describes the econometrics and statistical analysis of the results.
Abstract:
An analytical model of an amorphous silicon p-i-n solar cell is presented to describe its photovoltaic behavior under short-circuit conditions. It has been developed from the analysis of numerical simulation results. These results reproduce the experimental illumination dependence of short-circuit resistance, which is the reciprocal slope of the I(V) curve at the short-circuit point. The recombination rate profiles show that recombination in the regions of charged defects near the p-i and i-n interfaces should not be overlooked. Based on the interpretation of the numerical solutions, we deduce analytical expressions for the recombination current and short-circuit resistance. These expressions are given as a function of an effective μτ product, which depends on the intensity of illumination. We also study the effect of surface recombination with simple expressions that describe its influence on current loss and short-circuit resistance.
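The short-circuit resistance defined above, the reciprocal slope of the I(V) curve at the short-circuit point (V = 0), can be computed numerically from a measured I-V trace. A minimal sketch, with a hypothetical (linearized) I-V sample around V = 0 rather than any data from the paper:

```python
import numpy as np

# Hypothetical I-V samples around the short-circuit point (V = 0).
V = np.array([-0.02, -0.01, 0.0, 0.01, 0.02])               # volts
I = np.array([-0.1004, -0.1002, -0.1000, -0.0998, -0.0996])  # amps

# R_sc is the reciprocal of the numerical slope dI/dV at the V = 0 sample.
dI_dV = np.gradient(I, V)      # central differences in the interior
R_sc = 1.0 / dI_dV[2]          # index 2 is the V = 0 sample
print(round(R_sc, 1))          # → 50.0 ohms for this toy trace
```

The paper's point is that R_sc measured this way varies with illumination intensity, which the analytical expressions capture through the illumination-dependent effective μτ product.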
Abstract:
Background: In longitudinal studies where subjects experience recurrent incidents over a period of time, such as respiratory infections, fever or diarrhea, statistical methods are required to take into account the within-subject correlation. Methods: For repeated events data with censored failure times, the independent increment (AG), marginal (WLW) and conditional (PWP) models are three multiple-failure models that generalize Cox's proportional hazards model. In this paper, we review the efficiency, accuracy and robustness of all three models under simulated scenarios with varying degrees of within-subject correlation, censoring levels, maximum number of possible recurrences and sample size. We also study the methods' performance on a real dataset from a cohort study with bronchial obstruction. Results: We find substantial differences between methods, and no single method is optimal in all scenarios. AG and PWP seem to be preferable to WLW for low correlation levels, but the situation reverses for high correlations. Conclusions: All methods are stable under censoring, worsen with increasing recurrence levels, and share a bias problem which, among other consequences, makes asymptotic normal confidence intervals not fully reliable, although they are well developed theoretically.
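The AG and PWP models mentioned above both operate on recurrent-event data laid out in counting-process form, with one (start, stop] interval per at-risk period; PWP additionally stratifies by event order. A minimal sketch of that data restructuring in Python, with hypothetical event times (the modeling step itself, e.g. fitting a stratified Cox model, is omitted):

```python
import pandas as pd

# Hypothetical recurrent-event times (days) and follow-up per subject.
events = {"A": [30, 90], "B": [45]}
follow_up = {"A": 120, "B": 100}

rows = []
for subj, times in events.items():
    start = 0
    for k, t in enumerate(times, start=1):
        # One row per (start, stop] at-risk interval ending in an event.
        # AG uses only (start, stop, event); PWP also uses stratum = k,
        # the event order, so baseline hazards differ by recurrence number.
        rows.append({"id": subj, "start": start, "stop": t,
                     "event": 1, "stratum": k})
        start = t
    if start < follow_up[subj]:
        # Final censored interval up to the end of follow-up.
        rows.append({"id": subj, "start": start, "stop": follow_up[subj],
                     "event": 0, "stratum": len(times) + 1})

df = pd.DataFrame(rows)
print(df)
```

WLW, by contrast, treats each recurrence number as a separate correlated failure time measured from entry, so its data layout differs from the one above.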
Abstract:
We present a detailed evaluation of the seasonal performance of the Community Multiscale Air Quality (CMAQ) modelling system and the PSU/NCAR meteorological model coupled to a new Numerical Emission Model for Air Quality (MNEQA). The combined system simulates air quality at a fine resolution (3 km horizontal resolution and 1 h temporal resolution) in north-eastern Spain, where ozone pollution problems are frequent. An extensive database compiled over two periods, from May to September of 2009 and 2010, is used to evaluate the meteorological simulations and chemical outputs. Our results indicate that the model accurately reproduces hourly and 1-h and 8-h maximum ozone surface concentrations measured at the air quality stations, as statistical values fall within the EPA and EU recommendations. However, to further improve forecast accuracy, three simple bias-adjustment techniques, mean subtraction (MS), ratio adjustment (RA), and hybrid forecast (HF), based on 10 days of available comparisons, are applied. The results show that the MS technique performed better than RA or HF, although all the bias-adjustment techniques significantly reduce the systematic errors in ozone forecasts.
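The three bias-adjustment techniques named above can be illustrated with a short Python sketch. The MS and RA forms below follow their common definitions (subtracting the mean bias, or scaling by the mean observed/forecast ratio, over a trailing window); the HF blend and all numbers are assumptions for illustration, not the paper's exact specification:

```python
import numpy as np

def mean_subtraction(forecast, past_fc, past_obs):
    # MS: subtract the mean forecast bias over the adjustment window
    # (10 days of comparisons in the study).
    return forecast - np.mean(past_fc - past_obs)

def ratio_adjustment(forecast, past_fc, past_obs):
    # RA: scale the forecast by the mean observed/forecast ratio.
    return forecast * np.mean(past_obs / past_fc)

def hybrid_forecast(forecast, past_fc, past_obs, w=0.5):
    # HF: blend the two corrections; the 50/50 weight here is a
    # hypothetical choice, not the technique's published definition.
    return (w * mean_subtraction(forecast, past_fc, past_obs)
            + (1 - w) * ratio_adjustment(forecast, past_fc, past_obs))

# Toy window: the raw model consistently over-forecasts ozone by 10 units.
past_fc = np.array([60.0, 70.0, 80.0])
past_obs = np.array([50.0, 60.0, 70.0])
print(mean_subtraction(75.0, past_fc, past_obs))  # mean bias = 10 → 65.0
```

With a purely additive systematic error like this toy one, MS removes the bias exactly, which is consistent with the paper's finding that MS outperformed RA and HF on its data.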