842 results for hidden borrowing
Abstract:
Europe has responded to the crisis with strengthened budgetary and macroeconomic surveillance, the creation of the European Stability Mechanism, liquidity provisioning by resilient economies and the European Central Bank, and a process towards a banking union. However, a monetary union requires some form of budget for fiscal stabilisation in case of shocks, and as a backstop to the banking union. This paper compares four quantitatively different schemes of fiscal stabilisation and proposes a new scheme based on GDP-indexed bonds. The options considered are: (i) a federal budget with unemployment and corporate taxes shifted to euro-area level; (ii) a support scheme based on deviations from potential output; (iii) an insurance scheme via which governments would issue bonds indexed to GDP; and (iv) a scheme in which access to jointly guaranteed borrowing is combined with gradual withdrawal of fiscal sovereignty. Our comparison is based on strong assumptions. We carry out a preliminary, limited simulation of how the debt-to-GDP ratio would have developed between 2008 and 2014 under the four schemes for Greece, Ireland, Portugal, Spain and an ‘average’ country. The schemes have varying implications in each case for debt sustainability.
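As a point of reference for the debt simulations described above, the standard debt-dynamics identity on which such exercises are typically built can be written as follows; the transfer term s_t standing in for scheme payouts is an illustrative assumption, not the paper's exact specification.

    % Debt-to-GDP dynamics with a hypothetical stabilisation transfer s_t (sketch only)
    \[
      b_t \;=\; \frac{1 + r_t}{1 + g_t}\, b_{t-1} \;-\; pb_t \;-\; s_t,
    \]
    % where b_t is the debt-to-GDP ratio, r_t the nominal interest rate, g_t nominal
    % GDP growth, pb_t the primary balance as a share of GDP, and s_t the net transfer
    % received from the stabilisation scheme.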
Abstract:
Despite its relevance to a wide range of technological and fundamental areas, a quantitative understanding of protein surface clustering dynamics is often lacking. In inorganic crystal growth, surface clustering of adatoms is well described by diffusion-aggregation models. In such models, the statistical properties of the aggregate arrays often reveal the molecular-scale aggregation processes. We investigate the potential of these theories to reveal hitherto hidden facets of protein clustering by carrying out concomitant observations of lysozyme adsorption onto mica surfaces, using atomic force microscopy, and Monte Carlo simulations of cluster nucleation and growth. We find that lysozyme clusters diffuse across the substrate at a rate that varies inversely with size. This result suggests which molecular-scale mechanisms are responsible for the mobility of the proteins on the substrate. In addition, the surface diffusion coefficient of the monomer can also be extracted from the comparison between experiments and simulations. While concentrating on a model system of lysozyme-on-mica, this 'proof of concept' study successfully demonstrates the potential of our approach to understand and influence more biomedically applicable protein-substrate couples.
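A minimal Monte Carlo sketch of the diffusion-aggregation picture invoked above, with cluster mobility falling as 1/size; the lattice size, particle number and step count are arbitrary assumptions, not the parameters of the simulations actually reported.

    # Minimal diffusion-aggregation Monte Carlo on a 2D lattice; cluster mobility ~ 1/size.
    # All parameters are illustrative assumptions, not the study's calibrated values.
    import random

    L = 50          # lattice side (assumed)
    N0 = 200        # initial monomers (assumed)
    STEPS = 2000    # Monte Carlo moves (assumed)

    # clusters: position -> cluster size
    clusters = {}
    while len(clusters) < N0:
        clusters[(random.randrange(L), random.randrange(L))] = 1

    for _ in range(STEPS):
        pos = random.choice(list(clusters))
        size = clusters[pos]
        # size-dependent mobility: attempt a move with probability 1/size
        if random.random() > 1.0 / size:
            continue
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        new = ((pos[0] + dx) % L, (pos[1] + dy) % L)
        clusters.pop(pos)
        # aggregate irreversibly if the target site is already occupied
        clusters[new] = clusters.get(new, 0) + size

    sizes = sorted(clusters.values(), reverse=True)
    print(f"{len(clusters)} clusters remain; largest sizes: {sizes[:5]}")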
Abstract:
This paper reports three experiments that examine the role of similarity processing in McGeorge and Burton's (1990) incidental learning task. In the experiments, subjects performed a distractor task involving four-digit number strings, all of which conformed to a simple hidden rule. They were then given a forced-choice memory test in which they were presented with pairs of strings and were led to believe that one string of each pair had appeared in the prior learning phase. Although this was not the case, one string of each pair did conform to the hidden rule. Experiment 1 showed that, as in the McGeorge and Burton study, subjects were significantly more likely to select test strings that conformed to the hidden rule. However, additional analyses suggested that rather than having implicitly abstracted the rule, subjects may have been selecting strings that were in some way similar to those seen during the learning phase. Experiments 2 and 3 were designed to try to separate out effects due to similarity from those due to implicit rule abstraction. It was found that the results were more consistent with a similarity-based model than with implicit rule abstraction per se.
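A hypothetical illustration of the similarity-based account: from each test pair, choose the string most similar to the studied strings. The positional digit-match score used here is an assumption for illustration, not the specific measure tested in the experiments.

    # Hypothetical similarity-based choice rule (illustrative; not the model tested).
    def similarity(candidate, studied):
        # average number of digits matching by position across the study set
        matches = lambda a, b: sum(x == y for x, y in zip(a, b))
        return sum(matches(candidate, s) for s in studied) / len(studied)

    def choose(pair, studied):
        a, b = pair
        return a if similarity(a, studied) >= similarity(b, studied) else b

    studied_strings = ["1234", "1243", "2134"]          # toy study phase
    print(choose(("1235", "9876"), studied_strings))    # -> "1235"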
Abstract:
This note considers variance estimation for population size estimators based on capture–recapture experiments. Whereas a diversity of estimators of the population size has been suggested, the question of estimating the associated variances is less frequently addressed. This note points out that the technique of conditioning can be applied here successfully, which also allows us to identify sources of variation: the variance due to estimation of the model parameters and the binomial variance due to sampling n units from a population of size N. It is applied to estimators typically used in capture–recapture experiments in continuous time, including the estimators of Zelterman and Chao, and improves upon previously used variance estimators. In addition, knowledge of the variances associated with the estimators of Zelterman and Chao allows the suggestion of a new estimator as the weighted sum of the two. The decomposition of the variance into the two sources also allows a new understanding of how resampling techniques like the bootstrap could be used appropriately. Finally, the sample size question for capture–recapture experiments is addressed. Since the variance of population size estimators increases with the sample size, it is suggested to use relative measures, such as the observed-to-hidden ratio or the completeness of identification proportion, for approaching the question of sample size choice.
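The conditioning argument can be sketched with the law of total variance; the exact conditioning variable used in the note may differ, so this is only the generic identity.

    % Law of total variance applied to a population size estimator \hat N (generic sketch)
    \[
      \operatorname{Var}(\hat N)
        \;=\; \mathbb{E}\!\left[\operatorname{Var}(\hat N \mid n)\right]
        \;+\; \operatorname{Var}\!\left(\mathbb{E}[\hat N \mid n]\right),
    \]
    % with n the number of units observed out of the N in the population; one term
    % reflects the binomial variability of n, the other the uncertainty that comes
    % from estimating the model parameters.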
Abstract:
A dynamic, deterministic, economic simulation model was developed to estimate the costs and benefits of controlling Mycobacterium avium subsp. paratuberculosis (Johne's disease) in a suckler beef herd. The model is intended as a demonstration tool for veterinarians to use with farmers. The model design process involved user consultation and participation, and the model is freely accessible on a dedicated website. The 'user-friendly' model interface allows the input of key assumptions and farm-specific parameters, enabling model simulations to be tailored to individual farm circumstances. The model simulates the effect of Johne's disease and various measures for its control in terms of herd prevalence and the shedding states of animals within the herd, the financial costs of the disease and of any control measures, and the likely benefits of control of Johne's disease for the beef suckler herd over a 10-year period. The model thus helps to make more transparent the 'hidden costs' of Johne's disease in a herd and the likely benefits to be gained from controlling the disease. The control strategies considered within the model are 'no control', 'testing and culling of diagnosed animals', 'improving management measures' or a dual strategy of 'testing and culling in association with improving management measures'. An example 'run' of the model shows that the strategy of 'improving management measures', which reduces infection routes during the early stages, results in a marked fall in herd prevalence and total costs. Testing and culling does little to reduce prevalence and does not reduce total costs over the 10-year period.
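A toy deterministic projection in the spirit of the model described above, tracking prevalence and cumulative cost over 10 years under a control strategy that scales transmission; every parameter value is an illustrative assumption, not a figure from the published model.

    # Toy deterministic projection of within-herd Johne's prevalence and cumulative cost.
    # Transmission, removal rate, cost per case and control cost are assumed values.
    def project(prevalence, transmission, removal, cost_per_case, control_cost, years=10):
        total_cost = 0.0
        for _ in range(years):
            prevalence += transmission * prevalence * (1 - prevalence) - removal * prevalence
            prevalence = min(max(prevalence, 0.0), 1.0)
            total_cost += cost_per_case * prevalence * 100 + control_cost  # per 100-cow herd
        return prevalence, total_cost

    # 'no control' vs. 'improved management' (assumed here to halve transmission)
    print(project(0.10, transmission=0.4, removal=0.1, cost_per_case=300, control_cost=0))
    print(project(0.10, transmission=0.2, removal=0.1, cost_per_case=300, control_cost=500))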
Abstract:
The mechanism of action and properties of a solid-phase ligand library made of hexapeptides (combinatorial peptide ligand libraries or CPLL) for capturing the "hidden proteome", i.e. the low- and very low-abundance proteins constituting the vast majority of species in any proteome, as applied to plant tissues, are reviewed here. Plant tissues are notoriously recalcitrant to protein extraction and to proteome analysis. Firstly, rigid plant cell walls need to be mechanically disrupted to release the cell content and, in addition to their poor protein yield, plant tissues are rich in proteases and oxidative enzymes, and contain phenolic compounds, starches, oils, pigments and secondary metabolites that massively contaminate protein extracts. In addition, complex matrices of polysaccharides, including large amounts of anionic pectins, are present. All these species compete with the binding of proteins to the CPLL beads, impeding proper capture and identification/detection of low-abundance species. When properly pre-treated, plant tissue extracts are amenable to capture by the CPLL beads, thus revealing many new species, among them low-abundance proteins. Examples are given on the treatment of leaf proteins, of corn seed extracts and of exudate proteins (latex from Hevea brasiliensis). In all cases, the detection of unique gene products via CPLL capture is at least twice that of the control, untreated sample.
Abstract:
We investigate the performance of phylogenetic mixture models in reducing a well-known and pervasive artifact of phylogenetic inference known as the node-density effect, comparing them to partitioned analyses of the same data. The node-density effect refers to the tendency for the amount of evolutionary change in longer branches of phylogenies to be underestimated compared to that in regions of the tree where there are more nodes and thus branches are typically shorter. Mixture models allow more than one model of sequence evolution to describe the sites in an alignment without prior knowledge of the evolutionary processes that characterize the data or how they correspond to different sites. If multiple evolutionary patterns are common in sequence evolution, mixture models may be capable of reducing node-density effects by characterizing the evolutionary processes more accurately. In gene-sequence alignments simulated to have heterogeneous patterns of evolution, we find that mixture models can reduce node-density effects to negligible levels or remove them altogether, performing as well as partitioned analyses based on the known simulated patterns. The mixture models achieve this without knowledge of the patterns that generated the data and even in some cases without specifying the full or true model of sequence evolution known to underlie the data. The latter result is especially important in real applications, as the true model of evolution is seldom known. We find the same patterns of results for two real data sets with evidence of complex patterns of sequence evolution: mixture models substantially reduced node-density effects and returned better likelihoods compared to partitioning models specifically fitted to these data. We suggest that the presence of more than one pattern of evolution in the data is a common source of error in phylogenetic inference and that mixture models can often detect these patterns even without prior knowledge of their presence in the data. Routine use of mixture models alongside other approaches to phylogenetic inference may often reveal hidden or unexpected patterns of sequence evolution and can improve phylogenetic inference.
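The mixture-model idea can be summarised by the standard form of a per-site mixture likelihood; the notation below is assumed for illustration rather than taken from the paper.

    % Per-site likelihood under a K-component phylogenetic mixture model (standard form)
    \[
      L_i \;=\; \sum_{k=1}^{K} w_k \, P\!\left(D_i \mid T, \theta_k\right),
      \qquad \sum_{k=1}^{K} w_k = 1,
    \]
    % where D_i is the alignment data at site i, T the tree with its branch lengths,
    % \theta_k the k-th substitution model, and w_k its mixture weight.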
Abstract:
Numerous techniques exist which can be used for the task of behavioural analysis and recognition. Common amongst these are Bayesian networks and Hidden Markov Models. Although these techniques are extremely powerful and well developed, both have important limitations. By fusing these techniques together to form Bayes-Markov chains, the advantages of both techniques can be preserved, while reducing their limitations. The Bayes-Markov technique forms the basis of a common, flexible framework for supplementing Markov chains with additional features. This results in improved user output, and aids in the rapid development of flexible and efficient behaviour recognition systems.
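For orientation, a generic forward recursion for a hidden Markov model is sketched below; it illustrates only the Markov-chain side of the fusion and is not the Bayes-Markov formulation itself. The behaviour states and probabilities are toy assumptions.

    # Generic HMM forward recursion (illustrates the hidden-Markov component only).
    def forward(init, trans, emit, observations):
        """init[s], trans[s][t], emit[s][o] are probabilities; returns P(observations)."""
        alpha = {s: init[s] * emit[s][observations[0]] for s in init}
        for obs in observations[1:]:
            alpha = {t: sum(alpha[s] * trans[s][t] for s in alpha) * emit[t][obs]
                     for t in init}
        return sum(alpha.values())

    init  = {"idle": 0.6, "active": 0.4}                       # toy behaviour states
    trans = {"idle": {"idle": 0.7, "active": 0.3},
             "active": {"idle": 0.4, "active": 0.6}}
    emit  = {"idle": {"still": 0.8, "moving": 0.2},
             "active": {"still": 0.1, "moving": 0.9}}
    print(forward(init, trans, emit, ["still", "moving", "moving"]))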
Abstract:
The charging of interest for borrowing money, and the level at which it is charged, is of fundamental importance to the economy. Unfortunately, the study of the interest rates charged in the middle ages has been hampered by the diversity of terms and methods used by historians. This article seeks to establish a standardized methodology to calculate interest rates from historical sources and thereby provide a firmer foundation for comparisons between regions and periods. It should also contribute towards the current historical reassessment of medieval economic and financial development. The article is illustrated with case studies drawn from the credit arrangements of the English kings between 1272 and c.1340, and argues that changes in interest rates reflect, in part, contemporary perceptions of the creditworthiness of the English crown.
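One generic way of annualising the interest implied by a loan recorded in the sources, given principal P, total repayment R and term t in years, is shown below; the article's own standardized methodology may well differ.

    % Compound-equivalent and simple annual rates from recorded cash flows (generic sketch)
    \[
      i_{\text{compound}} \;=\; \left(\frac{R}{P}\right)^{1/t} - 1,
      \qquad
      i_{\text{simple}} \;=\; \frac{R - P}{P\,t}.
    \]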
Abstract:
Chatterbox Challenge is an annual web-based contest for artificial conversational entities (ACE). The 2010 instantiation was the tenth consecutive contest, held between March and June in the 60th year following the publication of Alan Turing’s influential disquisition ‘Computing Machinery and Intelligence’. Loosely based on Turing’s viva voce interrogator-hidden witness imitation game, a thought experiment to ascertain a machine’s capacity to respond satisfactorily to unrestricted questions, the contest provides a platform for technology comparison and evaluation. This paper provides an insight into emotion content in the entries since the 2005 Chatterbox Challenge. The authors find that synthetic textual systems, none of which are backed by academic or industry funding, are, on the whole and more than half a century since Weizenbaum’s natural language understanding experiment, little further than Eliza in terms of expressing emotion in dialogue. This may be a failure on the part of the academic AI community for ignoring the Turing test as an engineering challenge.
Abstract:
There is under-representation of senior female managers within small construction firms in the United Kingdom. This under-representation denies the sector a valuable pool of labour with which to address acute knowledge and skill shortages. Grounded theory on the career progression of senior female managers in these firms is developed from biographical interviews. First, a turning point model which distinguishes the interplay between human agency and work/home structure is given. Second, four career development phases are identified. The career journeys are characterized by ad hoc decisions and opportunities which were not influenced by external policies aimed at improving the representation of women in construction. Third, the 'hidden', but potentially significant, contribution of women-owned small construction firms is noted. The key challenge for policy and practice is to balance these external approaches with recognition of the 'inside out' reality of the 'lived experiences' of female managers. To progress this agenda there is a need for: appropriate longitudinal statistical data to quantify the scale of senior female managers and owners of small construction firms over time; and social construction and gendered organizational analysis research to develop a general discourse on gender difference within these firms.
Abstract:
In the last few years a state-space formulation has been introduced into self-tuning control. This has not only allowed for a wider choice of possible control actions, but has also provided an insight into the theory underlying—and hidden by—that used in the polynomial description. This paper considers many of the self-tuning algorithms, both state-space and polynomial, presently in use, and by starting from first principles develops the observers which are, effectively, used in each case. At any specific time instant the state estimator can be regarded as taking one of two forms. In the first case the most recently available output measurement is excluded, and here an optimal and conditionally stable observer is obtained. In the second case the present output signal is included, and here it is shown that although the observer is once again conditionally stable, it is no longer optimal. This result is of significance, as many of the popular self-tuning controllers lie in the second, rather than first, category.
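The two estimator forms discussed above correspond, in standard textbook notation, to the predictor and filter forms of a state observer; the notation is assumed here for illustration and is not necessarily the paper's exact algebra.

    % Predictor form (latest output excluded) and filter form (latest output included)
    \[
      \hat{x}(k \mid k-1) = A\,\hat{x}(k-1 \mid k-1) + B\,u(k-1),
      \qquad
      \hat{x}(k \mid k) = \hat{x}(k \mid k-1) + K\bigl(y(k) - C\,\hat{x}(k \mid k-1)\bigr).
    \]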
Abstract:
Howard Barker is a writer who has made several notable excursions into what he calls ‘the charnel house…of European drama.’ David Ian Rabey has observed that a compelling property of these classical works lies in what he calls ‘the incompleteness of [their] prescriptions’, and Barker’s Women Beware Women (1986), Seven Lears (1990) and Gertrude: The Cry (2002) are in turn based around the gaps and interstices found in Thomas Middleton’s Women Beware Women (c.1627), Shakespeare’s King Lear (c.1604) and Hamlet (c.1601) respectively. This extends from representing the missing queen from King Lear, who, Barker observes, ‘is barely quoted even in the depths of rage or pity’, to his new ending for Middleton’s Jacobean tragedy and the erotic revivification of Hamlet’s mother. This paper will argue that each modern reappropriation accentuates a hidden but powerful feature in these Elizabethan and Jacobean plays – namely their clash between obsessive desire, sexual transgression and death against the imposed restitution of a prescribed morality. This contradiction acts as the basis for Barker’s own explorations of eroticism, death and tragedy. The paper will also discuss Barker’s project for these ‘antique texts’, one that goes beyond what he derisively calls ‘relevance’, but attempts instead to recover ‘smothered genius’, whereby the transgressive is ‘concealed within structures that lend an artificial elegance.’ Together with Barker’s own rediscovery of tragedy, the paper will assert that these rewritings of Elizabethan and Jacobean drama expose their hidden, yet unsettling and provocative, ideologies concerning the relationship between political corruption/justice through the power of sexuality (notably through the allure and danger of the mature woman), and an erotics of death that produces tragedy for the contemporary age.