890 results for theory-in-use
Abstract:
Escherichia coli, Klebsiella pneumoniae, and Enterobacter spp. are a major cause of infections in hospitalised patients. The aim of our study was to evaluate rates and trends of resistance to third-generation cephalosporins and fluoroquinolones in infected patients, trends in the use of these antimicrobials, and the potential correlation between the two trends. The database of the national point prevalence study series of infections and antimicrobial use among patients hospitalised in Spain over the period from 1999 to 2010 was analysed. On average, 265 hospitals and 60,000 patients were surveyed per year, yielding a total of 19,801 E. coli, 3,004 K. pneumoniae and 3,205 Enterobacter isolates. During the twelve-year period, we observed a significant increase in the use of fluoroquinolones (5.8%-10.2%, p<0.001), but not of third-generation cephalosporins (6.4%-5.9%, p=NS). Resistance to third-generation cephalosporins increased significantly for E. coli (5%-15%, p<0.01) and for K. pneumoniae infections (4%-21%, p<0.01) but not for Enterobacter spp. (24%). Resistance to fluoroquinolones increased significantly for E. coli (16%-30%, p<0.01), for K. pneumoniae (5%-22%, p<0.01), and for Enterobacter spp. (6%-15%, p<0.01). We found strong correlations between the rate of fluoroquinolone use and resistance to fluoroquinolones, resistance to third-generation cephalosporins, or co-resistance to both, for E. coli (R=0.97, p<0.01; R=0.94, p<0.01; and R=0.96, p<0.01, respectively) and for K. pneumoniae (R=0.92, p<0.01; R=0.91, p<0.01; and R=0.92, p<0.01, respectively). No correlation could be found between the use of third-generation cephalosporins and resistance to any of these antimicrobials. No significant correlations could be found for Enterobacter spp. Knowledge of the trends in antimicrobial resistance and in the use of antimicrobials in the hospitalised population at the national level can help to develop prevention strategies.
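As an illustration of the year-level correlation analysis reported above, a minimal Python sketch (the yearly figures are hypothetical placeholders, not data from the study) correlating annual antimicrobial-use rates with resistance rates via Pearson's R:

# Minimal sketch: Pearson correlation between yearly fluoroquinolone use and
# E. coli fluoroquinolone resistance. All yearly values are hypothetical.
from scipy.stats import pearsonr

years = list(range(1999, 2011))
fq_use = [5.8, 6.1, 6.5, 7.0, 7.4, 7.9, 8.3, 8.8, 9.2, 9.6, 9.9, 10.2]   # % of patients on fluoroquinolones
ecoli_fq_res = [16, 17, 18, 20, 21, 22, 24, 25, 27, 28, 29, 30]          # % of resistant E. coli isolates

r, p = pearsonr(fq_use, ecoli_fq_res)
print(f"R = {r:.2f}, p = {p:.3g}")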
Abstract:
The existence of a liquid-gas phase transition for hot nuclear systems at subsaturation densities is a well-established prediction of finite-temperature nuclear many-body theory. In this paper, we discuss for the first time the properties of such a phase transition for homogeneous nuclear matter within the self-consistent Green's function approach. We find a substantial decrease of the critical temperature with respect to the Brueckner-Hartree-Fock approximation. Even within the same approximation, the use of two different realistic nucleon-nucleon interactions gives rise to large differences in the properties of the critical point.
Abstract:
Executive Summary: The unifying theme of this thesis is the pursuit of satisfactory ways to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broader scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we apply an idea from the field of fuzzy set theory to the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results in asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third one can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative agent model that addresses some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings. The empirical investigation conducted to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the proposed model from the standard one. Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that realized returns feature better distributional characteristics than realized returns from portfolio strategies that are optimal with respect to single performance measures. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance.
We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the way we propose to aggregate performance measures leads to portfolio realized returns that first-order stochastically dominate those resulting from optimization with respect to a single measure only, for example the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, i.e. the sequence of expected shortfalls over a range of quantiles. Since the plot of the absolute Lorenz curve for the aggregated performance measure lay above the one corresponding to each individual measure, we were led to conclude that the proposed algorithm yields a distribution of portfolio returns that second-order stochastically dominates those obtained from virtually all performance measures considered. Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests an inherent weakness in the attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results concerning the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
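A minimal sketch of the two distribution checks used in Chapter 2, the two-sample Kolmogorov-Smirnov test and the pointwise comparison of absolute Lorenz curves for second-order stochastic dominance (the return series below are simulated placeholders, not the thesis data):

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
aggregated = rng.normal(0.006, 0.04, 1000)   # returns from the aggregated measure (placeholder)
single = rng.normal(0.004, 0.05, 1000)       # returns from a single performance measure (placeholder)

# 1) Are the two return distributions different at all?
stat, p_value = ks_2samp(aggregated, single)
print(f"KS statistic = {stat:.3f}, p = {p_value:.3g}")

# 2) Absolute Lorenz curve L(q): cumulative expected shortfall, i.e. the mean of the
#    worst q-fraction of returns scaled by q; A dominates B at second order if
#    L_A(q) >= L_B(q) for every q in (0, 1].
def absolute_lorenz(returns, quantiles):
    ordered = np.sort(returns)
    n = len(ordered)
    return np.array([ordered[: max(1, int(q * n))].mean() * q for q in quantiles])

qs = np.linspace(0.01, 1.0, 100)
dominates = np.all(absolute_lorenz(aggregated, qs) >= absolute_lorenz(single, qs))
print("Second-order stochastic dominance (pointwise):", dominates)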
Abstract:
BACKGROUND: Antidepressants are among the most commonly prescribed drugs in primary care. The rise in use is mostly due to an increasing number of long-term users of antidepressants (LTU AD). Little is known about the factors driving increased long-term use. We examined the socio-demographic, clinical and health-service-use characteristics associated with LTU AD to extend our understanding of the factors that may be driving the increase in antidepressant use. METHODS: Cross-sectional analysis of 789 participants with probable depression (CES-D≥16), recruited from 30 randomly selected Australian general practices to take part in a ten-year cohort study of depression, who were surveyed about their antidepressant use. RESULTS: 165 (21.0%) participants reported <2 years of antidepressant use and 145 (18.4%) reported ≥2 years of antidepressant use. After adjusting for depression severity, LTU AD was associated with: a single (OR 1.56, 95%CI 1.05-2.32) or recurrent (3.44, 2.06-5.74) episode of depression; use of SSRIs (3.85, 2.03-7.33), sedatives (2.04, 1.29-3.22) or antipsychotics (4.51, 1.67-12.17); functional limitations due to long-term illness (2.81, 1.55-5.08); poor/fair self-rated health (1.57, 1.14-2.15); inability to work (2.49, 1.37-4.53); benefits as the main source of income (2.15, 1.33-3.49); GP visits longer than 20 min (1.79, 1.17-2.73); rating GP visits as moderately to extremely helpful (2.71, 1.79-4.11); and more self-help practices (1.16, 1.09-1.23). LIMITATIONS: All measures were self-reported. The sample may not be representative of culturally different or adolescent populations. The cross-sectional design raises the possibility of "confounding by indication". CONCLUSIONS: Long-term antidepressant use is relatively common in primary care. It occurs within the context of complex mental, physical and social morbidities. Whilst most long-term use is associated with a history of recurrent depression, there remains a significant opportunity for treatment re-evaluation and timely discontinuation.
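For readers unfamiliar with how such adjusted odds ratios are obtained, a minimal logistic-regression sketch in Python (the data frame, column names and values are hypothetical, not the study variables):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 789
df = pd.DataFrame({
    "ltu": rng.integers(0, 2, n),        # 1 = two or more years of antidepressant use
    "severity": rng.normal(20, 8, n),    # depression severity score (e.g. CES-D)
    "recurrent": rng.integers(0, 2, n),  # recurrent episode of depression
    "ssri": rng.integers(0, 2, n),       # currently using an SSRI
})

model = smf.logit("ltu ~ severity + recurrent + ssri", data=df).fit(disp=False)
print(np.exp(model.params).round(2))     # exponentiated coefficients = adjusted odds ratios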
Abstract:
POCT (point-of-care tests) have great potential for use in ambulatory infectious disease medicine thanks to their speed of execution and their impact on antibiotic administration and on the diagnosis of certain communicable diseases. Some tests have been in use for several years (detection of Streptococcus pyogenes in pharyngitis, anti-HIV antibodies, S. pneumoniae urinary antigen, Plasmodium falciparum antigen). New indications concern respiratory infections, childhood diarrhoea (rotavirus, enterohaemorrhagic E. coli) and sexually transmitted infections. POCT based on nucleic acid detection have just been introduced (group B streptococcus in pregnant women before delivery, and detection of carriage of meticillin-resistant Staphylococcus aureus). POCT have a great potential in ambulatory infectious disease diagnosis, owing to their impact on antibiotic administration and on communicable disease prevention. Some have been in use for a long time (S. pyogenes antigen, HIV antibodies) or a short time (S. pneumoniae antigen, P. falciparum). The major additional indications will be community-acquired lower respiratory tract infections, infectious diarrhoea in children (rotavirus, enterotoxigenic E. coli) and, hopefully, sexually transmitted infections. Easy to use, these tests based on antigen-antibody reactions allow a rapid diagnosis in less than one hour; the new generation of POCT relying on nucleic acid detection has just been introduced into practice (detection of GBS in pregnant women, carriage of MRSA) and will be extended to many pathogens.
Abstract:
Marine litter is an international environmental problem that causes considerable costs to coastal communities and the fishing industry. Several international and national treaties and regulations have provisions on marine litter and forbid the disposal of waste into the sea. However, none of these regulations state a responsibility for public authorities to recover marine litter from the sea, as they do for marine litter that washes up on public beaches. A financial evaluation of a value chain for marine litter incineration found that the total costs of waste incineration are approximately 100-200% higher than the waste fees offered by the waste contractors of ports. The high costs derive from the high calorific value of marine litter, which raises the incineration fee for the waste, and from the long distances between the ports taking part in a marine litter recovery project and an Energy-from-Waste (EfW) facility. This study provides a possible solution for diverting marine litter from landfills to more environmentally sustainable EfW use by applying a public-private partnership (PPP) framework. In theory, a PPP would seem to be a suitable cooperative approach for addressing the problems of current marine litter disposal. In the end, it is up to the potential partners of this proposed PPP to decide whether the benefits of cooperation justify the required efforts.
Abstract:
This thesis attempts to fill gaps in both the theoretical basis and the operational and strategic understanding of social ventures, social entrepreneurship and nonprofit business models. This study also attempts to bridge the gap in strategic and economic theory between social and commercial ventures. More specifically, this thesis explores sustainable competitive advantage from a resource-based theory perspective and examines how it may be applied to the nonmarket situation of nonprofit organizations and social ventures. It is proposed that a social value-oriented counterpart of sustainable competitive advantage, called sustainable contributive advantage, provides a more realistic depiction of what is necessary for a social venture to perform better than its competitors over time. In addition to providing this depiction, this research makes a substantial theoretical contribution in the areas of economics, social ventures, and strategy research, specifically with regard to resource-based theory. The proposed model for sustainable contributive advantage adapts resource-based theory and competitive advantage so that they are applicable to social ventures, and offers an explanation of a social venture's ability to demonstrate consistently superior performance. In order to determine whether sustainable competitive advantage is, in fact, appropriate to apply to both social and economic environments, quantitative analyses are conducted on a large sample of nonprofit organizations in a single industry and then compared to similar quantitative analyses conducted on commercial ventures. In comparing the trends and strategies between the two types of entities from a quantitative perspective, propositions are developed regarding a social venture's resource utilization strategies and their possible impact on performance. Evidence is found to support the necessity of adjusting existing models in resource-based theory in order to apply them to social ventures. The proposed theory of sustainable contributive advantage is also supported. The thesis concludes with recommendations for practitioners, researchers and policy makers, as well as suggestions for future research paths.
Abstract:
The objective of this Master's thesis is to create a calculation model for working capital management in value chains. The study was carried out using a literature review and constructive research methods, the latter consisting mainly of modeling. The theory in this thesis is grounded in research articles and management literature. The model is developed for students and researchers, who can use it for working capital management and for comparing firms with each other. The model can also be used for cash management. It shows who benefits and who suffers most in the value chain, and makes the cash flows of companies and value chains visible. The model can be used to check whether set targets are actually achieved, to observe the amount of operational working capital, and to simulate the amount of working capital. The created model is based on the cash conversion cycle, return on investment and cash flow forecasting. The model is tested with carefully considered figures that are nonetheless realistic. The modeled value chain is literally a chain. Implementing the model requires that the user have some understanding of working capital management and some figures from the balance sheet and income statement. By using this model, users can improve their knowledge of working capital management in value chains.
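A minimal sketch of the cash conversion cycle calculation on which the model is based (the formula is the standard one; all input figures are hypothetical):

# Cash conversion cycle (CCC) = DIO + DSO - DPO, where
#   DIO = days inventory outstanding, DSO = days sales outstanding,
#   DPO = days payables outstanding. All input figures are hypothetical.
def cash_conversion_cycle(inventory, receivables, payables, cogs, sales, days=365):
    dio = inventory / cogs * days        # how long capital is tied up in inventory
    dso = receivables / sales * days     # how long customers take to pay
    dpo = payables / cogs * days         # how long the firm takes to pay its suppliers
    return dio + dso - dpo

print(cash_conversion_cycle(inventory=120, receivables=80, payables=90, cogs=600, sales=900))

Computing the CCC for each firm in a chain is one simple way to see who ties up working capital and who is effectively financed by the others, in the spirit of the comparison the model is meant to support.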
Abstract:
The shift towards a knowledge-based economy has inevitably prompted the evolution of patent exploitation. Nowadays, a patent is more than just a defensive tool with which a company blocks its competitors from developing rival technologies; it lies at the very heart of the company's strategy for value creation and is therefore strategically exploited for economic profit and competitive advantage. Along with the evolution of patent exploitation, the demand for reliable and systematic patent valuation has also reached an unprecedented level. However, most of the quantitative approaches in use to assess patents arguably fall into four categories, all based on conventional discounted cash flow analysis, whose usability and reliability in the context of patent valuation are greatly limited by five practical issues: market illiquidity, poor data availability, discriminatory cash-flow estimations, and its inability to account for changing risk and managerial flexibility. This dissertation attempts to overcome these barriers by rationalizing the use of two techniques, namely fuzzy set theory (aimed at the first three issues) and real option analysis (aimed at the last two). It commences with an investigation into the nature of the uncertainties inherent in patent cash flow estimation and claims that two levels of uncertainty must be properly accounted for. Further investigation reveals that both levels of uncertainty fall under the category of subjective uncertainty, which differs from objective uncertainty originating in inherent randomness in that uncertainties labelled as subjective are closely related to the behavioural aspects of decision making and are usually encountered whenever human judgement, evaluation or reasoning is crucial to the system under consideration and there is a lack of complete knowledge of its variables. Having clarified their nature, the application of fuzzy set theory to modelling patent-related uncertain quantities is readily justified. The application of real option analysis to patent valuation is prompted by the fact that both the patent application process and the subsequent patent exploitation (or commercialization) are subject to a wide range of decisions at multiple successive stages. In other words, both patent applicants and patentees are faced with a large variety of courses of action as to how their patent applications and granted patents can be managed. Since they have the right to run their projects actively, this flexibility has value and thus must be properly accounted for. Accordingly, an explicit identification of the types of managerial flexibility inherent in patent-related decision-making problems and in patent valuation, and a discussion of how they could be interpreted in terms of real options, are provided in this dissertation. Additionally, the use of the proposed techniques in practical applications is demonstrated by three models based on fuzzy real option analysis. In particular, the pay-off method and the extended fuzzy Black-Scholes model are employed to investigate the profitability of a patent application project for a new process for the preparation of a gypsum-fibre composite and to justify the subsequent patent commercialization decision, respectively, while a fuzzy binomial model is designed to reveal the economic potential of a patent licensing opportunity.
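A minimal numeric sketch of the fuzzy pay-off idea referred to above (the NPV scenarios are hypothetical, and the centroid of the positive side is used here as a simple stand-in for the possibilistic mean; it is not the dissertation's own implementation):

import numpy as np

# Project NPV as a triangular fuzzy number (pessimistic, best guess, optimistic), in MEUR.
a, b, c = -2.0, 3.0, 10.0                      # hypothetical scenarios
x = np.linspace(a, c, 10001)
membership = np.where(x <= b, (x - a) / (b - a), (c - x) / (c - b))

total_area = np.trapz(membership, x)
positive = x > 0
pos_area = np.trapz(membership[positive], x[positive])
pos_mean = np.trapz(x[positive] * membership[positive], x[positive]) / pos_area  # centroid of the positive side

# Real option value = (share of the pay-off distribution above zero) x (mean of the positive side).
real_option_value = (pos_area / total_area) * pos_mean
print(f"fuzzy pay-off real option value ~ {real_option_value:.2f} MEUR")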
Abstract:
Increased awareness and evolving consumer habits have set more demanding standards for the quality and safety control of food products. The production of foodstuffs which fulfill these standards can be hampered by various low-molecular-weight contaminants. Such compounds include, for example, residues of antibiotics used in animals, or mycotoxins. The extremely small size of these compounds has hindered the development of analytical methods suitable for routine use, and the methods currently in use require expensive instrumentation and qualified personnel to operate them. There is a need for new, cost-efficient and simple assay concepts that can be used for field testing and are capable of processing large sample quantities rapidly. Immunoassays have been considered the gold standard for such rapid on-site screening methods. The introduction of directed antibody engineering and in vitro display technologies has facilitated the development of novel antibody-based methods for the detection of low-molecular-weight food contaminants. The primary aim of this study was to generate and engineer antibodies against low-molecular-weight compounds found in various foodstuffs. The three antigen groups selected as targets of antibody development cause food safety and quality defects in a wide range of products: 1) fluoroquinolones, a family of synthetic broad-spectrum antibacterial drugs used to treat a wide range of human and animal infections; 2) deoxynivalenol, a type B trichothecene mycotoxin and a widely recognized problem for crops and animal feeds globally; and 3) skatole, or 3-methylindole, one of the two compounds responsible for boar taint, found in the meat of monogastric animals. This study describes the generation and engineering of antibodies with versatile binding properties against low-molecular-weight food contaminants, and the subsequent development of immunoassays for the detection of the respective compounds.
Abstract:
In 1979 Nicaragua, under the Sandinistas, experienced a genuine, socialist, full-scale agrarian revolution. This thesis examines whether Jeffery Paige's theory of agrarian revolutions would have been successful in predicting this revolution and in predicting non-revolution in the neighboring country of Honduras. The thesis begins by setting Paige's theory in the tradition of radical theories of revolution. It then derives four propositions from Paige's theory which suggest the patterns of export crops, land tenure changes and class configurations that are necessary for an agrarian and socialist revolution. These propositions are tested against evidence from the twentieth-century histories of economic, social and political change in Nicaragua and Honduras. The thesis concludes that Paige's theory does help to explain the occurrence of agrarian revolution in Nicaragua and non-revolution in Honduras. A fifth proposition derived from Paige's theory proved less useful in explaining the specific areas within Nicaragua that were most receptive to Sandinista revolutionary activity.
Abstract:
Quantum information theory studies the fundamental limits that the laws of physics impose on data-processing tasks such as data compression and the transmission of data over a noisy channel. This thesis presents general techniques that allow several fundamental problems of quantum information theory to be solved within a single framework. The central theorem of this thesis establishes the existence of a protocol for transmitting quantum data that the receiver already partially knows, using a single use of a noisy quantum channel. Moreover, several central theorems of quantum information theory follow from this theorem as immediate corollaries. The subsequent chapters use this theorem to prove the existence of new protocols for two other types of quantum channels, namely quantum broadcast channels and quantum channels with side information available at the transmitter. These protocols also deal with the transmission of quantum data partially known to the receiver using a single use of the channel, and have as corollaries asymptotic versions with and without auxiliary entanglement. The asymptotic versions with auxiliary entanglement can, in both cases, be regarded as quantum versions of the best known coding theorems for the classical versions of these problems. The last chapter deals with a purely quantum phenomenon called locking: it is possible to encode a classical message in a quantum state such that, by removing a subsystem whose size is logarithmic in the total size, one can ensure that no measurement has significant correlation with the message. The message is thus "locked" by a key of logarithmic size. This thesis presents the first locking protocol whose success criterion is that the trace distance between the joint distribution of the message and the measurement outcome and the product of their marginals be sufficiently small.
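A compact way to state that success criterion, in notation introduced here for illustration (M denotes the classical message, Y the outcome of an arbitrary measurement on the remaining subsystem; the factor 1/2 depends on the normalization convention chosen for the trace distance):

\[
  \tfrac{1}{2}\,\bigl\| p_{MY} - p_M \otimes p_Y \bigr\|_1 \;\le\; \epsilon
  \quad \text{for every measurement applied after the key subsystem is removed.}
\]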
Abstract:
In this thesis the old philosophical question "does every event have a cause?" is examined in the light of quantum mechanics and probability theory. In both physics and the philosophy of science, the orthodox position maintains that the physical world is indeterministic. At the fundamental level of physical reality, the quantum level, events would occur without causes, by chance, by 'irreducible' randomness. The most precise physical theorem leading to this conclusion is Bell's theorem. Here the premises of this theorem are re-examined. It is recalled that solutions to the theorem other than indeterminism are conceivable, some of which are known but neglected, such as 'superdeterminism'. But it is argued that other solutions compatible with determinism exist, notably through the study of model physical systems. One of the general conclusions of this thesis is that the interpretation of Bell's theorem and of quantum mechanics depends crucially on the philosophical premises from which one starts. For example, within a Spinozist worldview, the quantum world can well be understood as deterministic. But it is argued that even a determinism considerably less radical than Spinoza's is not ruled out by the physical experiments. If this is true, the 'determinism versus indeterminism' debate is not settled in the laboratory: it remains philosophical and open, contrary to what is often thought. In the second part of this thesis a model for the interpretation of probability is proposed. A conceptual study of the notion of probability indicates that the hypothesis of determinism helps one better understand what a 'probabilistic system' is. It appears that determinism can answer certain questions for which indeterminism has no answer. For this reason we conclude that Laplace's conjecture, namely that probability theory presupposes an underlying deterministic reality, retains all its legitimacy. In this thesis the methods of both philosophy and physics are used. The two fields prove to be solidly connected here and offer a vast potential for cross-fertilization in both directions.
Abstract:
Extensive use of the Internet, coupled with the remarkable growth in e-commerce and m-commerce, has created a huge demand for information security. The Secure Sockets Layer (SSL) protocol is the most widely used security protocol on the Internet that meets this demand. It provides protection against eavesdropping, tampering and forgery. The cryptographic algorithms RC4 and HMAC have been in use for achieving security services like confidentiality and authentication in SSL, but recent attacks against RC4 and HMAC have undermined confidence in these algorithms. Hence, two novel cryptographic algorithms, MAJE4 and MACJER-320, have been proposed as substitutes for them. The focus of this work is to demonstrate the performance of these new algorithms and to suggest them as dependable alternatives for satisfying the need for security services in SSL. The performance evaluation has been carried out using a practical implementation.
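For context, a minimal sketch of the conventional HMAC-based message authentication mentioned above, using Python's standard hmac module (this is not an implementation of the proposed MAJE4 or MACJER-320 algorithms, whose specifications are not given here):

# Minimal sketch of conventional HMAC message authentication (standard library only).
import hmac
import hashlib

key = b"shared-secret-key"
message = b"client hello"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()       # sender computes the MAC

# Receiver recomputes the MAC and compares in constant time to detect tampering or forgery.
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
print("authentic:", hmac.compare_digest(tag, expected))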
Abstract:
In the present thesis we have formulated the Dalgarno-Lewis procedure for two- and three-photon processes, and elegant alternative expressions are derived. Starting from a brief review of various multiphoton processes, we discuss the difficulties arising in their perturbative treatment. A short discussion of the various available methods for studying multiphoton processes is presented in chapter 2. These theoretical treatments mainly concentrate on the evaluation of the higher-order matrix elements appearing in perturbation theory. In chapter 3 we describe the use of the Dalgarno-Lewis procedure and its implementation for second-order matrix elements. Analytical expressions for the two-photon transition amplitude, the two-photon ionization cross section, the dipole dynamic polarizability and the Kramers-Heisenberg formula are obtained in a unified manner. The fourth chapter is an extension of the implicit summation technique presented in chapter 3. We clearly state the advantage of our method, especially the analytic continuation of the relevant expressions suited to various values of the radiation frequency, which is also used for efficient numerical analysis. A possible extension of the work is to study various multiphoton processes from the Stark-shifted first excited states of the hydrogen atom. The procedure can also be extended to study multiphoton processes in alkali atoms as well as Rydberg atoms. Also, instead of seeking analytical expressions, one can attempt a fully numerical evaluation of the higher-order matrix elements using this procedure.
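A schematic statement of the implicit-summation idea behind the Dalgarno-Lewis procedure (notation introduced here for illustration, not taken from the thesis): the explicit sum over intermediate states in a second-order matrix element is replaced by the solution of an inhomogeneous equation,

\[
  M^{(2)}_{fi}
  \;=\; \sum_{n} \frac{\langle f \lvert V \rvert n \rangle\,\langle n \lvert V \rvert i \rangle}{E_i + \hbar\omega - E_n}
  \;=\; \langle f \lvert V \rvert \psi^{(1)} \rangle,
  \qquad
  \bigl(E_i + \hbar\omega - H_0\bigr)\,\lvert \psi^{(1)} \rangle \;=\; V \lvert i \rangle,
\]

so that solving a single inhomogeneous differential equation for \(\lvert\psi^{(1)}\rangle\) replaces the summation over the complete set of intermediate states \(\lvert n \rangle\).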