275 results for Intracampus Borrowing
Abstract:
This paper deals with the issue of terminology and conceptualization in the construction of the semiotic metalanguage. The starting point of this discussion is the scientific thought of C. S. Peirce, R. Bastide and É. Benveniste, who understand that the creation of a new science requires a new and precise terminology. This position is relativized and broadened by the point of view of A. J. Greimas, who defends the centrality of conceptualization over formalization and terminological production. Thus, this study proposes and analyses two types of metalinguistic elaboration based on dialogue with previous theories and disciplines: borrowing and redefinition.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
The significance of the works by Venezuelan-born composer Paul Desenne lies in his unique compositional style, which incorporates elements of Latin American folk, pop, and traditional music within the framework of the Western European tradition. His works, though easily classified as art music, nevertheless gain much of their emotional and referential meaning through this rich borrowing. This document focuses on three of Desenne’s flute pieces: the Solo Flute Sonata (2001), Gurrufío for flute orchestra (1997), and Guasa Macabra for flute and clarinet (2003). It provides an analysis of the three works, examining formal, structural, motivic, and rhythmic aspects. Scores and interviews with the composer have been employed as primary sources. Bibliographical material closely related to his music and other secondary sources support this analytical approach. This document also provides an introduction to and stylistic discussion of Desenne’s other pieces that incorporate the flute. Chapter one consists of an introduction to Desenne’s life and general considerations of his musical style. Each of the following three chapters focuses on one of the three aforementioned flute works, including information about the composition and premiere of each piece as well as analysis and an examination of its incorporation of traditional folk elements. The final chapter presents an introduction to and stylistic discussion of the other flute pieces by this composer. This study intends to provide a basic understanding of Desenne’s flute music, including general characteristics of his musical style, paving the way for further investigation of Desenne’s music, and his flute music in particular.
Abstract:
This thesis focuses on the limits that may prevent an entrepreneur from maximizing her value, and on the benefits of diversification in reducing her cost of capital. After reviewing the relevant literature on the differences between traditional corporate finance and entrepreneurial finance, we focus on the biases that occur when traditional finance techniques are applied to the entrepreneurial context. In particular, using the portfolio theory framework, we determine the degree of under-diversification of entrepreneurs. Borrowing the methodology developed by Kerins et al. (2004), we test a model for the cost of capital according to the firm's industry and the entrepreneur's wealth commitment to the firm. This model takes three market inputs (standard deviation of market returns, expected return of the market, and risk-free rate) and two firm-specific inputs (standard deviation of the firm returns and correlation between firm and market returns) as parameters, and returns an appropriate cost of capital as output. We determine the expected market return and the risk-free rate according to the extensive literature on the market risk premium. The market return volatility is estimated using a GARCH specification for the market index returns. Furthermore, we assume that the firm-specific inputs can be obtained from newly listed firms similar in risk to the firm being evaluated. After assembling a database including all the data needed for our analysis, we perform an empirical investigation of how much of the firm's total risk depends on market risk, and of which explanatory variables can explain it. Our results show that the cost of capital declines as the level of the entrepreneur's commitment decreases. Therefore, maximizing the value for the entrepreneur depends on the fraction of the entrepreneur's wealth invested in the firm and the fraction she sells to outside investors.
These results are interesting both for entrepreneurs and policy makers: the former can benefit from an unbiased model for their valuation; the latter can obtain some guidelines to overcome the recent financial market crisis.
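The valuation logic above can be sketched numerically. The following is a minimal illustration, not the exact Kerins et al. (2004) specification; the function name, the linear blending of total and market risk by the entrepreneur's commitment, and all parameter names are expository assumptions:

```python
def cost_of_capital(rf, rm_expected, sigma_m, sigma_f, rho, commitment):
    """Required return for a partially diversified entrepreneur (illustrative).

    rf: risk-free rate; rm_expected: expected market return;
    sigma_m, sigma_f: standard deviations of market and firm returns;
    rho: correlation between firm and market returns;
    commitment: fraction of the entrepreneur's wealth tied to the firm.
    """
    premium = rm_expected - rf                  # market risk premium
    beta_diversified = rho * sigma_f / sigma_m  # CAPM beta: only market risk is priced
    beta_total = sigma_f / sigma_m              # full commitment: total risk is priced
    # Illustrative blend by wealth commitment (a simplification, not the paper's formula)
    beta = commitment * beta_total + (1 - commitment) * beta_diversified
    return rf + beta * premium
```

With full diversification (commitment = 0) this collapses to the CAPM; as commitment rises toward 1, idiosyncratic volatility is also priced and the required return rises, consistent with the finding that the cost of capital declines as the entrepreneur's commitment decreases.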
Abstract:
In this work we aim to propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected in many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies, which avoid the bias carried by aggregated analyses. Starting from collected disease counts and expected disease counts calculated by means of reference population disease rates, in each area an SMR is derived as the MLE under the Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the low population underlying the area or the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classic and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method, which focuses on multiple testing control without leaving the preliminary study perspective that an analysis of SMR indicators is expected to retain. We implement control of the FDR, a quantity largely used to address multiple comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value provide weak power in small areas, where the expected number of disease cases is small. Moreover, tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous.
The Bayesian paradigm offers a way to overcome the inappropriateness of p-value based methods. Another peculiarity of the present work is to propose a hierarchical full Bayesian model for FDR estimation in testing many null hypotheses of absence of risk. We use concepts of Bayesian models for disease mapping, referring in particular to the Besag, York and Mollié model (1991), often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood, typical of a hierarchical Bayesian model, has the advantage of evaluating a single test (i.e. a test in a single area) by means of all observations in the map under study, rather than just the single observation. This improves the power of the test in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model aims to estimate the FDR by means of the MCMC-estimated posterior probabilities b_i of the null hypothesis (absence of risk) for each area. An estimate of the expected FDR conditional on the data (denoted FDR-hat) can be calculated on any set of b_i's relative to areas declared at high risk (where the null hypothesis is rejected) by averaging the b_i's themselves. The FDR-hat can be used to provide an easy decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that the FDR-hat does not exceed a pre-specified value; we call these FDR-hat based decision (or selection) rules. The sensitivity and specificity of such rules depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, while under-estimation produces a loss of specificity. Moreover, our model has the interesting feature of still being able to provide an estimate of relative risk values as in the Besag, York and Mollié model (1991).
A simulation study was set up to evaluate the model's performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and goodness of estimation of relative risks. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the area sizes, the number of areas where the null hypothesis is true, and the risk level in the latter areas. In summarizing simulation results we always consider the FDR estimation in sets constituted by all areas whose b_i falls below a threshold t. We show graphs of the FDR-hat and the true FDR (known by simulation) plotted against the threshold t to assess the FDR estimation. Varying the threshold, we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (by the closeness between FDR-hat and true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against the FDR-hat we can check the sensitivity and specificity of the corresponding FDR-hat based decision rules. To investigate the over-smoothing of relative risk estimates we compare box-plots of such estimates in high-risk areas (known by simulation), obtained by both our model and the classic Besag, York and Mollié model. All the summary tools are worked out for all simulated scenarios (54 in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence a conservative FDR control) in the scenarios with small areas, low risk levels and spatially correlated risks, which are our primary aims. In such scenarios we have good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of FDR-hat based decision rules is generally low, but specificity is high. In such scenarios the use of an FDR-hat = 0.05 or FDR-hat = 0.10 based selection rule can be suggested.
In cases where the number of true alternative hypotheses (the number of true high-risk areas) is small, FDR values of 0.15 are also well estimated, and FDR-hat = 0.15 based decision rules gain power while maintaining a high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity of an FDR-hat = 0.05 based decision rule. In such scenarios FDR-hat = 0.05 or, even worse, FDR-hat = 0.1 based decision rules cannot be suggested because the true FDR is actually much higher. As regards relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both the estimation of relative risk values and FDR control, except in non-small area and large risk level scenarios. A case study is finally presented to show how the method can be used in epidemiology.
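The FDR-hat based selection rule lends itself to a compact implementation. A minimal sketch, assuming the posterior null probabilities b_i have already been estimated by MCMC (function and variable names are illustrative):

```python
import numpy as np

def fdr_selection(b, alpha):
    """Select as many areas as possible such that the estimated FDR of the
    selected set (the average posterior null probability among them) stays
    at or below alpha. b: posterior probabilities of 'absence of risk',
    one per area, as a NumPy array."""
    order = np.argsort(b)                            # most credible high-risk areas first
    running_fdr = np.cumsum(b[order]) / np.arange(1, b.size + 1)
    k = int(np.sum(running_fdr <= alpha))            # running mean of sorted b is non-decreasing
    return order[:k]                                 # indices of areas declared high-risk
```

Sorting the b_i's ascending makes the running average (the FDR-hat of the first k areas) non-decreasing, so the largest admissible set is found by a single pass.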
Abstract:
The throttling of solid rocket motors has always been one of the main problems associated with this type of engine. Since there is no direct control of the combustion process of the solid grain, the prediction of internal ballistics has always been the main tool both for defining the optimal motor configuration at the design stage and for analysing any anomalies found experimentally. Local variations in the propellant structure, internal defects, or heterogeneous chamber conditions can alter the local burning rate of the propellant and consequently produce experimental pressure and thrust profiles that differ from the theoretical predictions. Many of the codes currently in use take a rather simplified approach to the problem, mostly resorting to semi-empirical corrective factors (HUMP factors), without reconstructing the performance heterogeneities of the propellant in a more realistic way. This thesis therefore proposes a new approach to the numerical prediction of the performance of solid-propellant systems, through the development of a new simulation code named ROBOOST (ROcket BOOst Simulation Tool). Drawing on concepts and techniques from computer graphics, the new code reconstructs the surface regression of the grain point by point, using a moving triangular mesh. Local variations of the burning rate can thus easily be reproduced, and the internal ballistics is computed by coupling a non-stationary 0D model with a quasi-stationary 1D model. The work was carried out in collaboration with Avio Space Division, and the new code has been successfully applied to the Zefiro 9 motor.
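The point-by-point surface regression on a triangular mesh can be illustrated with a small sketch. This is not the ROBOOST code itself; the array-based mesh representation and all names are assumptions:

```python
import numpy as np

def regress_surface(vertices, normals, burn_rate, dt):
    """Advance each vertex of the triangular grain mesh along its local
    surface normal by the local burning rate times the time step.
    vertices: (n, 3) vertex positions; normals: (n, 3) unit normals;
    burn_rate: (n,) per-vertex burning rate, so local rate variations
    (e.g. HUMP-like effects) are reproduced directly on the mesh."""
    return vertices + normals * (burn_rate * dt)[:, None]
```

Repeating this step over small time increments, and recomputing the burning surface area from the mesh at each step, is what couples the geometric regression to the 0D/1D ballistic models.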
Abstract:
In an effort to reduce interlibrary borrowing activity while enhancing the Library collection, the Bertrand Library has initiated a program to purchase current monographs requested through ILL by Bucknell University students and faculty. The result has been a successful reduction in ILL workload and a cost-effective means of document delivery, as measured by average delivery time, cost per title, processing costs, and circulation statistics. This procedure reflects an overall change in our philosophy concerning document access and delivery, which led to the reorganization of ILL services and staff in the Bertrand Library.
Abstract:
Since the late eighties, economists have been regarding the transition from command to market economies in Central and Eastern Europe with intense interest. In addition to studying the transition per se, they have begun using the region as a testing ground on which to investigate the validity of certain classic economic propositions. In his research, comprising three articles written in English and totalling 40 pages, Mr. Hanousek uses the so-called "Czech national experiment" (voucher privatisation scheme) to test the permanent income hypothesis (PIH). He took as his inspiration Kreinin's recommendation: "Since data concerning the behaviour of windfall income recipients is relatively scanty, and since such data can constitute an important test of the permanent income hypothesis, it is of interest to bring to bear on the hypothesis whatever information is available". Mr. Hanousek argues that, since the transfer of property to Czech citizens from 1992 to 1994 through the voucher scheme was not anticipated, it can be regarded as windfall income. The average size of the windfall was more than three months' salary, and over 60 percent of the Czech population received this unexpected income. Furthermore, there are other reasons for conducting such an analysis in the Czech Republic. Firstly, the privatisation process took place quickly. Secondly, both the economy and consumer behaviour have been very stable. Thirdly, out of a total population of 10 million Czech citizens, an astonishing 6 million, that is, virtually every household, participated in the scheme. Thus Czech voucher privatisation provides a sample for testing the PIH almost equivalent to a full population, thereby avoiding problems with the distribution of windfalls.
Compare this, for instance, with the fact that only 4% of the Israeli urban population received personal restitution from Germany, while the number of veterans who received the National Service Life Insurance Dividends amounted to less than 9% of the US population and was concentrated in certain age groups. But to begin with, Mr. Hanousek considers the question of whether the public perceives the transfer from the state to the individual as an increase in net wealth. It can be argued that the state is only divesting itself of assets that would otherwise provide a future source of transfers. According to this argument, assigning these assets to individuals creates an offsetting change in the present value of potential future transfers, so that individuals are no better off after the transfer. Mr. Hanousek disagrees with this approach. He points out that a change in the ownership of inefficient state-owned enterprises should lead to higher efficiency, which alone increases the value of enterprises and creates a windfall increase in citizens' portfolios. More importantly, the state and individuals had very different preferences during the transition. Despite government propaganda, it is doubtful that citizens of former communist countries viewed government-owned enterprises as being operated in the citizens' best interest. Moreover, it is unlikely that the public fully comprehended the sophisticated links between the state budget, state-owned enterprises, and transfers to individuals. Finally, the transfers were not equal across the population. Mr. Hanousek conducted a survey of 1263 individuals, dividing them into four monthly earnings categories.
After determining whether the respondent had participated in the voucher process, he asked those who had how much of what they received from voucher privatisation had been (a) spent on goods and services, (b) invested elsewhere, (c) transferred to newly emerging pension funds, (d) given to a family member, and (e) retained in their original form as an investment. Both the mean and the variance of the windfall rise with income. He obtained similar results with respect to education, where the mean (median) windfall for those with a basic school education was 13,600 Czech Crowns (CZK), a figure that increased to 15,000 CZK for those with a high school education without exams, 19,900 CZK for high school graduates with exams, and 24,600 CZK for university graduates. Mr. Hanousek concludes that it can be argued that higher income (and better educated) groups allocated their vouchers or timed the disposition of their shares better. He turns next to an analysis of how respondents reported using their windfalls. The key result is that only a relatively small number of individuals reported spending on goods. Overall, the results provide strong support for the permanent income hypothesis, the only apparent deviation being the fact that both men and women aged 26 to 35 apparently consume more than they should if the windfall were annuitised. This finding is still fully consistent with the PIH, however, if this group is at a stage in their life-cycle where, without the windfall, they would be borrowing to finance consumption associated with family formation etc. Indeed, the PIH predicts that individuals who would otherwise borrow to finance consumption would consume the windfall up to the level equal to the annuitised fraction of the increase in lifetime income plus the full amount of the previously planned borrowing for consumption. 
Greater consumption would then be financed, not from investing the windfall, but from avoidance of future repayment obligations for debts that would have been incurred without the windfall.
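The PIH prediction in the last two sentences can be made concrete with a small sketch; the standard annuity formula and all names are illustrative assumptions, not the author's computation:

```python
def pih_consumption_from_windfall(windfall, r, periods, planned_borrowing=0.0):
    """Per-period consumption out of a windfall predicted by the PIH:
    the annuitised slice of the windfall (a standard level annuity at
    interest rate r over the remaining horizon), plus any consumption
    this period that would otherwise have been financed by borrowing."""
    annuity = windfall * r / (1 - (1 + r) ** -periods)
    return annuity + planned_borrowing
```

For a household that had planned no borrowing, only the annuitised fraction is consumed; for the 26-to-35 group forming families, the planned-borrowing term explains consumption above that fraction without contradicting the hypothesis.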
Abstract:
The transition in Central and Eastern Europe since the late 1980s has provided a testing ground for classic propositions. This project looked at the impact of privatisation on private consumption, using the Czech experiment of voucher privatisation to test the permanent income hypothesis. This form of privatisation moved state assets to individuals and represented an unexpected windfall gain for participants in the scheme. Whether the windfall was consumed or saved offers a clear test of the permanent income hypothesis. Of a total population of 10 million, 6 million Czechs, i.e. virtually every household, participated in the scheme. In a January 1996 survey, 1263 individuals were interviewed, 75% of whom had taken part. The data obtained suggest that only a small quantity of transferred assets was cashed in and spent on consumption, providing support for the permanent income hypothesis. The fraction of the windfall consumed grows with age, as would be predicted from the lower life expectancy of older consumers. The most interesting deviation was for people aged 26 to 35, who apparently consumed more than they would if the windfall were annuitised. As these people are at the stage in their lives when they would otherwise be borrowing to cover consumption related to establishing a family, etc., this is however consistent with the permanent income hypothesis, which predicts that individuals who would otherwise borrow money would use the windfall to avoid doing so.
Abstract:
Ms. Neumer and her team began their project with a critical analysis of the various theories of the relationship between language and thought. Their aim was to develop a theoretical position concerning the issue of universalism versus relativism. This issue is closely bound up with one of the main questions of the history of East and Central Europe, namely, the question of the nation, and the possibility of mutual understanding between national cultures. The team attempted to avoid falling into an all-too-common trap, that of allowing a political perspective to obscure the central theoretical issues. In a project whose outcome totalled over 1000 pages of manuscript in German, English and Hungarian, they touched on cognitive psychological, linguistic, semiotic, socio-semiotic, and other such themes. Their experience has convinced them of the fruitful heuristic possibilities of the interaction of scientific and philosophical approaches in this area of research. A preliminary analysis of the history of philosophy and inquiries into conceptual fields revealed that, in order to reach strong relativist conclusions concerning the unity of thought and language, it is necessary to take as a point of departure the widest possible sense of these concepts. But in fact, such an option ends up refuting itself: pursuing the premises to their final conclusion, one arrives at the restriction of relativism. The team outlined a theory of the understanding of the Other which, borrowing from analytical as well as continental-hermeneutic trends, does not underestimate, on the one hand, the difficulties of understanding between various forms of life, cultures, and languages, but, on the other hand, can provide an alternative solution to the theory of incommensurability. Within the boundary of this problematic, the team studied the problems of translatability, the acquisition of the mother tongue and foreign languages, and the natural or cultural determinacy of kind terms.
The team regards its most original contribution to be the association of the problem of relativism-universalism and the language-thought relation with contemporary investigations into the question of orality, literacy, and secondary orality. Their conclusion was that, although certain connections can be revealed both between forms of communication and the thesis of the unity of language and thought, and between periods in the history of communication and the predominance of relativistic or universalistic tendencies, forms of communication do not unequivocally determine the answers to these questions.
Abstract:
Part I: What makes science hard for newcomers? 1) The background (briefly) of my research (why the math anxiety model doesn't fit). 2) The Tier analysis (a visual); message: there are many more types of science learners in your class than simply younger versions of yourself. 3) Three approaches (bio, chem, physics) but only one Nature. 4) The (different) vocabularies of the three sciences. 5) How mathematics is variously used in science.
Part II: Rules and rules-driven assignments (lQ vs OQ). 1) How to incorporate creativity into assignments and tests? 2) Tests: borrowing "thought questions" from other fields (If Columbus hadn't discovered the New World, when and under whose law would it have been discovered?). 3) Grading practices (partial credit, post-exam credit for finding and explaining nontrivial errors). 4) Icing on the cake: applications, examples of science/engineering from Tuesday's NY Times.
Part III: Making change at the departmental level. 1) Taking control of at least some portion of the curriculum. 2) Varying style of presentation. 3) Taking control of at least some portion of the exams. 4) Grading: pros and cons of grading on a curve. 5) Updating labs and lab reporting.
Abstract:
This paper describes a simple way to integrate the debt tax shield into an accounting-based valuation model. The market value of equity is determined by forecasting residual operating income, which is calculated by charging operating income for the operating assets at a required return that accounts for the tax benefit that comes from borrowing to raise cash for the operations. The model assumes that the firm maintains a deterministic financial leverage ratio, which tends to converge quickly to typical steady-state levels over time. From a practical point of view, this characteristic is of particular help, because it allows a continuing value calculation at the end of a short forecast period.
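The valuation mechanics described above can be sketched as follows. This is a stylized residual-operating-income calculation, not the paper's exact model; the constant-growth assumption for net operating assets and the continuing-value formula are simplifying assumptions:

```python
def residual_income_equity_value(noa0, net_debt, oi_forecasts, r, g):
    """Stylized accounting-based equity valuation: book value of net
    operating assets minus net debt, plus the present value of residual
    operating income (operating income charged for the opening operating
    assets at the required return r), plus a continuing value growing at g."""
    value = noa0 - net_debt
    noa = noa0
    roi = 0.0
    for t, oi in enumerate(oi_forecasts, start=1):
        roi = oi - r * noa                 # residual operating income in period t
        value += roi / (1 + r) ** t        # discount at the required return
        noa *= 1 + g                       # simple asset-growth assumption
    horizon = len(oi_forecasts)
    # continuing value: residual income grows at g forever after the horizon
    value += roi * (1 + g) / (r - g) / (1 + r) ** horizon
    return value
```

The continuing-value term is what the paper's steady-state leverage property makes practical: once residual income settles into a stable pattern, a short explicit forecast plus a growing perpetuity suffices.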
Abstract:
The article offers a systematic analysis of the comparative trajectory of international democratic change. In particular, it focuses on the resulting convergence or divergence of political systems, borrowing from the literatures on institutional change and policy convergence. To this end, political-institutional data in line with Arend Lijphart’s (1999, 2012) empirical theory of democracy for 24 developed democracies between 1945 and 2010 are analyzed. Heteroscedastic multilevel models allow for directly modeling the development of the variance of types of democracy over time, revealing information about convergence, and adding substantial explanations. The findings indicate that there has been a trend away from extreme types of democracy in single cases, but no unconditional trend of convergence can be observed. However, there are conditional processes of convergence. In particular, economic globalization and the domestic veto structure interactively influence democratic convergence.
Abstract:
Vector control is the mainstay of malaria control programmes. Successful vector control relies profoundly on accurate information about the target mosquito populations in order to choose the most appropriate intervention for a given mosquito species and to monitor its impact. An impediment to identifying mosquito species is the existence of morphologically identical sibling species that play different roles in the transmission of pathogens and parasites. Currently, PCR diagnostics are used to distinguish between sibling species. PCR based methods are, however, expensive, time-consuming, and their development requires a priori DNA sequence information. Here, we evaluated an inexpensive molecular proteomics approach for Anopheles species: matrix assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS). MALDI-TOF MS is a well-developed protein profiling tool for the identification of microorganisms but has so far received little attention as a diagnostic tool in entomology. We measured MS spectra from specimens of 32 laboratory colonies and 2 field populations representing 12 Anopheles species, including the A. gambiae species complex. An important step in the study was the advancement and implementation of a bioinformatics approach improving the resolution over previously applied cluster analysis. Borrowing tools for linear discriminant analysis from genomics, MALDI-TOF MS accurately identified taxonomically closely related mosquito species, including the separation between the M and S molecular forms of A. gambiae sensu stricto. The approach also classifies specimens from different laboratory colonies, hence also proving very promising for colony authentication as part of quality assurance in laboratory studies. While being exceptionally accurate and robust, MALDI-TOF MS has several advantages over other typing methods, including simple sample preparation and short processing time.
As the method does not require DNA sequence information, data can also be reviewed at any later stage for diagnostic or functional patterns without the need for re-designing and re-processing biological material.
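The discriminant-analysis step borrowed from genomics can be illustrated with a minimal two-class Fisher discriminant on pre-binned spectra. This sketch is our own simplification, not the study's pipeline; all function and variable names are assumptions:

```python
import numpy as np

def fisher_lda_direction(X0, X1):
    """Fisher's linear discriminant for two classes of spectra.
    X0, X1: (n_specimens, n_mass_bins) intensity matrices per species.
    Returns the projection direction w = Sw^{-1} (mu1 - mu0) that best
    separates the class means relative to within-class scatter."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    # small ridge term keeps Sw invertible for high-dimensional spectra
    return np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), mu1 - mu0)

def classify(x, w, mu0, mu1):
    """Assign a spectrum to the class whose projected mean is nearer."""
    p = x @ w
    return 0 if abs(p - mu0 @ w) < abs(p - mu1 @ w) else 1
```

In the study's multi-species setting one would use a multi-class extension, but the principle is the same: project high-dimensional spectra onto a few axes that maximize between-species relative to within-species variation.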
Abstract:
This paper sheds light on an unusual political influence mechanism, i.e. the influence of a non-EU member state on agendas and policies at the level of the EU and EU member states. Borrowing from the literatures on policy diffusion and on the influence of small member states in EU decision-making, we argue that such an influence is fostered by both structural and agency-related factors. We illustrate this potential influence with a case study on the regulation of micropollutants in waterbodies. Adopting a mixed-method approach, we show that the upstream location of Switzerland, its integration into transnational networks, as well as joint water basin institutions provide the country with structural opportunities to diffuse policy innovation to the EU’s policy agenda and member states’ policies. In addition, agency-related factors matter, as the EU or member states can point to Switzerland as a successful example or pioneer, especially since the Swiss policy is in line with an overall EU strategy on reducing negative impacts of chemicals on humans and the environment.