912 results for The performance of Christian and pagan storyworlds
Abstract:
This article provides a theoretical and empirical analysis of a firm's optimal R&D strategy choice. In this paper a firm's R&D strategy is assumed to be endogenous and allowed to depend on both internal firm characteristics and external factors. Firms choose between two strategies: either they engage in R&D, or they abstain from their own R&D and imitate the outcomes of innovators. In the theoretical model this yields three types of equilibria, in which either all firms innovate, some firms innovate and others imitate, or no firm innovates. Firms' equilibrium strategies crucially depend on external factors. We find that the efficiency of intellectual property rights protection positively affects firms' incentives to engage in R&D, while competitive pressure has a negative effect. In addition, smaller firms are found to be more likely to become imitators when the product is homogeneous and the level of spillovers is high. These results are supported by empirical evidence for German firms from the manufacturing and services sectors. Regarding social welfare, our results indicate that strengthening intellectual property protection can have an ambiguous effect. In markets characterized by a high rate of innovation, a reduction of intellectual property rights protection can discourage innovative performance substantially. However, a reduction of patent protection can also increase social welfare because it may induce imitation. This indicates that policy issues such as the optimal length and breadth of patent protection cannot be resolved without taking into account specific market and firm characteristics. Journal of Economic Literature Classification Numbers: C35, D43, L13, L22, O31. Keywords: Innovation; imitation; spillovers; product differentiation; market competition; intellectual property rights protection.
Abstract:
In Nigeria, schistosomiasis, caused predominantly by the species Schistosoma haematobium, is highly endemic in resource-poor communities. We performed a school-based survey in two rural communities in Osun State (Southwestern Nigeria) and assessed macrohaematuria, microhaematuria and proteinuria as indirect indicators for the presence of disease. Urine samples were inspected macroscopically for haematuria and screened for microhaematuria and proteinuria using urine reagent strips. The microscopic examination of schistosome eggs was used as the gold standard for diagnosis. In total, 447 schoolchildren were included in this study and had a 51% prevalence of urinary schistosomiasis. The sensitivity of microhaematuria (68%) and proteinuria (53%) for infection with S. haematobium was relatively low. In patients with a heavy infection (>500 eggs/10 mL), the sensitivity of microhaematuria was high (95%). When the presence of macrohaematuria and the concomitant presence of microhaematuria and proteinuria were combined, it revealed a sensitivity of 63%, a specificity of 93% and a positive predictive value of 91%. Macrohaematuria also showed high specificity (96%) and a positive predictive value of 92%, while sensitivity was < 50%. These data show that combining urine reagent strip tests (presence of proteinuria and microhaematuria) and information on macrohaematuria increased the accuracy of the rapid diagnosis of urinary schistosomiasis in an endemic rural West African setting. This simple approach can be used to increase the quality of monitoring of schistosomiasis in schoolchildren.
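The combined-criterion accuracy figures above follow from the standard confusion-matrix definitions. A minimal sketch (not the authors' code): the counts below are hypothetical, chosen only to be consistent with the reported totals (447 children, 51% prevalence) and the combined rule's 63%/93%/91% figures.

```python
# Illustrative sketch: sensitivity, specificity and positive predictive value
# from 2x2 confusion-matrix counts, as used to evaluate the combined criterion
# (macrohaematuria OR [microhaematuria AND proteinuria]) against egg microscopy.

def diagnostic_metrics(tp, fp, fn, tn):
    """Return (sensitivity, specificity, PPV) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # fraction of true infections detected
    specificity = tn / (tn + fp)   # fraction of uninfected correctly ruled out
    ppv = tp / (tp + fp)           # probability a positive result is a true case
    return sensitivity, specificity, ppv

# Hypothetical counts (not reported in the abstract) consistent with
# n = 447 and 51% prevalence (228 infected, 219 uninfected):
sens, spec, ppv = diagnostic_metrics(tp=144, fp=15, fn=84, tn=204)
# sens ~ 0.63, spec ~ 0.93, ppv ~ 0.91
```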
Abstract:
Paracoccidioidomycosis is diagnosed from the direct observation of the causative agent, but serology can facilitate and decrease the time required for diagnosis. The objective of this study was to determine the influence of serum sample inactivation on the performance of the latex agglutination test (LAT) for detecting antibodies against Paracoccidioides brasiliensis. The sensitivity of the LAT with inactivated and non-inactivated samples was 73% and 83%, respectively, and the LAT selectivity was 79% and 90%, respectively. The LAT evaluated here was no more specific than the double-immunodiffusion assay. We suggest the investigation of other methods for improving the LAT, such as the use of deglycosylated antigen.
Abstract:
Orally transmitted Chagas disease (ChD), which is a well-known entity in the Brazilian Amazon Region, was first documented in Venezuela in December 2007, when 103 people attending an urban public school in Caracas became infected by ingesting juice that was contaminated with Trypanosoma cruzi. The infection occurred 45-50 days prior to the initiation of the sampling performed in the current study. Parasitological methods were used to diagnose the first nine symptomatic patients; T. cruzi was found in all of them. However, because this outbreak was managed as a sudden emergency during Christmas time, we needed to rapidly evaluate 1,000 people at risk, so we decided to use conventional serology to detect specific IgM and IgG antibodies via ELISA as well as indirect haemagglutination, which produced positive test results for 9.1%, 11.9% and 9.9% of the individuals tested, respectively. In other more restricted patient groups, polymerase chain reaction (PCR) provided more sensitive results (80.4%) than blood cultures (16.2%) and animal inoculations (11.6%). Although the classical diagnosis of acute ChD is mainly based on parasitological findings, highly sensitive and specific serological techniques can provide rapid results during large and severe outbreaks, as described herein. The use of these serological techniques allows prompt treatment of all individuals suspected of being infected, resulting in reduced rates of morbidity and mortality.
Abstract:
Many animals that live in groups maintain competitive relationships, yet avoid continual fighting, by forming dominance hierarchies. We compare predictions of stochastic, individual-based models with empirical experimental evidence using shore crabs to test competing hypotheses regarding hierarchy development. The models test (1) what information individuals use when deciding to fight or retreat, (2) how past experience affects current resource-holding potential, and (3) how individuals deal with changes to the social environment. First, we conclude that crabs assess only their own state and not their opponent's when deciding to fight or retreat. Second, willingness to enter, and performance in, aggressive contests are influenced by previous contest outcomes. Winning increases the likelihood of both fighting and winning future interactions, while losing has the opposite effect. Third, when groups with established dominance hierarchies dissolve and new groups form, individuals reassess their ranks, showing no memory of previous rank or group affiliation. With every change in group composition, individuals fight for their new ranks. This iterative process carries over as groups dissolve and form, which has important implications for the relationship between ability and hierarchy rank. We conclude that dominance hierarchies emerge through an interaction of individual and social factors, and discuss these findings in terms of an underlying mechanism. Overall, our results are consistent with crabs using a cumulative assessment strategy iterated across changes in group composition, in which aggression is constrained by an absolute threshold in energy spent and damage received while fighting.
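The winner/loser dynamic described above can be sketched as a toy individual-based simulation. This is an illustrative sketch only, not the authors' model: group size, effect sizes, the contest count, and the RHP-weighted win rule are all arbitrary assumptions.

```python
# Toy winner/loser-effect simulation: each win raises an individual's
# resource-holding potential (RHP), each loss lowers it, so initially equal
# individuals differentiate into a dominance order over repeated contests.
import random

def simulate_hierarchy(n=6, contests=500, win_boost=0.1, loss_cost=0.1, seed=1):
    random.seed(seed)
    rhp = [1.0] * n                        # everyone starts equal
    for _ in range(contests):
        i, j = random.sample(range(n), 2)  # a random pair meets
        # win probability weighted by relative RHP (an assumed rule)
        if random.random() < rhp[i] / (rhp[i] + rhp[j]):
            winner, loser = i, j
        else:
            winner, loser = j, i
        rhp[winner] += win_boost                         # winner effect
        rhp[loser] = max(0.1, rhp[loser] - loss_cost)    # loser effect, floored
    return rhp

rhp = simulate_hierarchy()
ranks = sorted(range(len(rhp)), key=lambda k: -rhp[k])   # dominance order
```

Re-running `simulate_hierarchy` on the final RHP values would mimic the group-dissolution scenario in the abstract, where ranks are contested anew each time composition changes.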
Abstract:
This article discusses the construction of tri-sector partnerships in three projects conducted in Brazil in different fields of public policy intervention (access to water, basic education, and the performance of councils for the rights of children and adolescents). Collaborative articulations involving players from the three sectors (the State, civil society and the market) are practices that are little studied in the Brazilian and even in the international context, as tri-sector partnerships are rare, despite the proliferation of lines of discourse in support of alliances between governments and civil society or between companies and NGOs in the management of public policy. As a research strategy, this study resorted to cooperative inquiry, a method that involves breaking down the boundaries between the subjects and the objects of the analysis. Besides working toward a better understanding of the challenges of building tri-sector partnerships in the Brazilian context, the article also tries to show the relevance to public policy studies of investigative methods based on the subjects studied, as a means of developing an understanding of the practices, lines of discourse and dilemmas linked to social action in social programs.
Abstract:
Among the various work stress models, one of the most popular has been the job demands-control (JDC) model developed by Karasek (1979), which postulates that work-related strain is highest under work conditions characterized by high demands and low autonomy. The absence of social support at work further increases negative outcomes. This model, however, does not apply equally to all individuals and to all cultures. This review demonstrates how various individual characteristics, especially some personality dimensions, influence the JDC model and could thus be considered buffering or moderator factors. Moreover, we review how the cultural context impacts this model, as suggested by results obtained in European, American, and Asian contexts. Yet there are almost no data from Africa or South America. More cross-cultural studies including populations from these continents would be valuable for a better understanding of the impact of the cultural context on the JDC model.
Abstract:
In the decade of the 1990s, China’s feed sector became increasingly privatized, more feed mills opened, and the scale of operation expanded. Capacity utilization remained low and multi-ministerial supervision was still prevalent, but the feed mill sector showed a positive performance overall, posting a growth rate of 11 percent per year. Profit margin over sales was within allowable rates set by the government of China at 3 to 5 percent. Financial efficiency improved, with a 20 percent quicker turnover of working capital. Average technical efficiency was 0.805, as more efficient feed mills increasingly gained production shares. This study finds evidence that the increasing privatization explains the improved performance of the commercial feed mill sector. The drivers that shaped the feed mill sector in the 1990s have changed with China’s accession to the World Trade Organization. With the new policy regime in place, the study foresees that, assuming an adequate supply of soy meal and an excess capacity in the feed mill sector, it is likely that China will allow corn imports up to the tariff rate quota (TRQ) of 7.2 mmt since the in-quota rate is very low at 1 percent. However, when the TRQ is exceeded, the import duty jumps to a prohibitive out-quota rate of 65 percent. With an import duty for meat of only 10 to 12 percent, China would have a strong incentive to import meat products directly rather than bringing in expensive corn to produce meat domestically. This would be further reinforced if structural transformation in the swine sector would narrow the cost differential between domestic and imported pork.
Abstract:
This paper presents our investigation of the iterative-decoding performance of some sparse-graph codes on block-fading Rayleigh channels. The considered code ensembles are standard LDPC codes and Root-LDPC codes, first proposed in and shown to be able to attain the full transmission diversity. We study the iterative threshold performance of these codes as a function of the fading gains of the transmission channel and propose a numerical approximation of the iterative threshold versus fading gains, for both LDPC and Root-LDPC codes. Also, we show analytically that, in the case of two fading blocks, the iterative threshold of Root-LDPC codes is proportional to (α1α2)^{-1}, where α1 and α2 are the corresponding fading gains. From this result, the full diversity property of Root-LDPC codes immediately follows.
Abstract:
OBJECTIVE: To search for evidence of the efficiency of sodium hypochlorite on environmental surfaces in reducing contamination and preventing healthcare-associated infections (HAIs). METHOD: Systematic review in accordance with the Cochrane Collaboration. RESULTS: We analyzed 14 studies, all controlled trials, published between 1989 and 2013. Most studies reported inhibition of microorganism growth. Some reported decreased infection, microorganism resistance and colonization, and a loss of efficiency in the presence of dirt and with surface-dried viruses. CONCLUSION: Hypochlorite is an effective disinfectant; however, the question of its direct relation to the reduction of HAIs remains open. The absence of control for confounding variables in the analyzed studies made performing a meta-analysis inadequate. The evaluation of internal validity using CONSORT and TREND was not possible because their contents are not appropriate to laboratory and microbiological studies. As a result, there is an urgent need to develop a specific protocol for evaluating such studies.
Abstract:
Most methods for small-area estimation are based on composite estimators derived from design- or model-based methods. A composite estimator is a linear combination of a direct and an indirect estimator, with weights that usually depend on unknown parameters which need to be estimated. Although model-based small-area estimators are usually based on random-effects models, the assumption of fixed effects is at face value more appropriate. Model-based estimators are justified by the assumption of random (interchangeable) area effects; in practice, however, areas are not interchangeable. In the present paper we empirically assess the quality of several small-area estimators in the setting in which the area effects are treated as fixed. We consider two settings: one that draws samples from a theoretical population, and another that draws samples from an empirical population of a labor force register maintained by the National Institute of Social Security (NISS) of Catalonia. We distinguish two types of composite estimators: a) those that use weights involving area-specific estimates of bias and variance; and b) those that use weights involving a common variance and a common squared-bias estimate for all the areas. We assess their precision and discuss alternatives for optimizing composite estimation in applications.
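The composite estimator described above has a standard textbook form, which can be sketched as follows. This is an illustrative sketch, not the paper's exact estimators; the function name and example values are hypothetical.

```python
# Composite small-area estimator: a linear combination of a direct estimate
# and an indirect (synthetic) estimate. Assuming the direct estimator is
# unbiased with variance V_d, the indirect one has variance V_i and bias B,
# and the two are independent, the MSE of w*direct + (1-w)*indirect is
# minimized at w = (V_i + B^2) / (V_d + V_i + B^2).

def composite_estimate(y_direct, y_indirect, var_direct, var_indirect, bias_indirect):
    """Return (estimate, weight) with the MSE-minimizing weight."""
    b2 = bias_indirect ** 2
    w = (var_indirect + b2) / (var_direct + var_indirect + b2)
    return w * y_direct + (1.0 - w) * y_indirect, w

# Hypothetical example: a noisy direct estimate (variance 4) shrunk toward
# a more stable but biased indirect estimate (variance 1, bias 1):
est, w = composite_estimate(10.0, 8.0, var_direct=4.0,
                            var_indirect=1.0, bias_indirect=1.0)
# w = 1/3, est = 26/3 ~ 8.67: most weight goes to the indirect estimator
```

The two estimator types the paper compares differ only in whether `var_direct`, `var_indirect`, and `bias_indirect` are estimated per area or pooled across areas.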
Abstract:
A procedure using a Chirobiotic V column is presented which allows separation of the enantiomers of citalopram and its two N-demethylated metabolites, and of the internal standard, alprenolol, in human plasma. Citalopram, demethylcitalopram and didemethylcitalopram, as well as the internal standard, were recovered from plasma by liquid-liquid extraction. The limits of quantification were found to be 5 ng/ml for each enantiomer of citalopram and demethylcitalopram, and 7.5 ng/ml for each enantiomer of didemethylcitalopram. Inter- and intra-day coefficients of variation varied from 2.4% to 8.6% for S- and R-citalopram, from 2.9% to 7.4% for S- and R-demethylcitalopram, and from 5.6% to 12.4% for S- and R-didemethylcitalopram. No interference was observed from endogenous compounds following the extraction of plasma samples from 10 different patients treated with citalopram. This method allows accurate quantification of each enantiomer and is, therefore, well suited for pharmacokinetic and drug interaction investigations. The presented method replaces a previously described highly sensitive and selective high-performance liquid chromatography procedure using an acetylated β-Cyclobond column which, because of manufacturing problems, is no longer usable for the separation of the enantiomers of citalopram and its demethylated metabolites.
Abstract:
Revenue management (RM) is a complicated business process that can best be described as control of sales (using prices, restrictions, or capacity), usually using software as a tool to aid decisions. RM software can play a mere informative role, supplying analysts with formatted and summarized data that they use to make control decisions (setting a price or allocating capacity for a price point), or, at the other extreme, play a deeper role, automating the decision process completely. The RM models and algorithms in the academic literature by and large concentrate on the latter, completely automated, level of functionality.
A firm considering a new RM model or RM system needs to evaluate its performance. Academic papers justify the performance of their models using simulations, where customer booking requests are simulated according to some process and model, and the revenue performance of the algorithm is compared to that of an alternate set of algorithms. Such simulations, while an accepted part of the academic literature, and indeed providing research insight, often lack credibility with management. Even methodologically, they are usually flawed, as the simulations only test "within-model" performance and say nothing about the appropriateness of the model in the first place. Even simulations that test against alternate models or competition are limited by their inherent need to fix some model as the universe for their testing. These problems are exacerbated with RM models that attempt to model customer purchase behavior or competition, as the right models for competitive actions or customer purchases remain somewhat of a mystery, or at least there is no consensus on their validity.
How then to validate a model? Putting it another way, we want to show that a particular model or algorithm is the cause of a certain improvement to the RM process compared to the existing process. We take care to emphasize that we want to prove that the said model is the cause of performance, and to compare against an (incumbent) process rather than against an alternate model.
In this paper we describe a "live" testing experiment that we conducted at Iberia Airlines on a set of flights. A set of competing algorithms controlled a set of flights during adjacent weeks, and their behavior and results were observed over a relatively long period of time (nine months). In parallel, a group of control flights was managed using the traditional mix of manual and algorithmic control (the incumbent system). Such "sandbox" testing, while common at many large internet search and e-commerce companies, is relatively rare in the revenue management area. Sandbox testing has an indisputable model of customer behavior, but the experimental design and analysis of results are less clear. In this paper we describe the philosophy behind the experiment, the organizational challenges, and the design and setup of the experiment, and we outline the analysis of the results. This paper is a complement to a more technical related paper that describes the econometrics and statistical analysis of the results.
Abstract:
BACKGROUND: Previously published studies have shown significant variations in colonoscopy performance, even when medical factors are taken into account. This study aimed to examine the role of nonmedical factors (i.e., those embodied in health care system design) as possible contributors to variations in colonoscopy performance. METHODS: Patient data from a multicenter observational study conducted between 2000 and 2002 in 21 centers in 11 Western countries were used. Variability was captured through two performance outcomes (diagnostic yield and colonoscopy withdrawal time), jointly studied as dependent variables using a multilevel 2-equation system. RESULTS: Results showed that open-access systems and high-volume colonoscopy centers were independently associated with a higher likelihood of detecting significant lesions and longer withdrawal durations. Fee-for-service (FFS) payment was associated with shorter withdrawal durations, and so had an indirect negative impact on the diagnostic yield. Teaching centers exhibited lower detection rates and longer withdrawal times. CONCLUSIONS: Our results suggest that gatekeeping colonoscopy is likely to miss patients with significant lesions and that developing specialized colonoscopy units is important to improve performance. Results also suggest that FFS may result in a lower quality of care in colonoscopy practice, and they highlight the fact that longer withdrawal times do not necessarily indicate higher quality in teaching centers.