977 results for analytical approaches
Abstract:
This diploma thesis analyses the German political understanding of social inequalities in health (SIH) among children and adolescents and explores the political strategies perceived as most effective for tackling SIH. The study is based on a qualitative content analysis of official political documents issued at different political levels: the national level and two purposefully selected federal states, Mecklenburg-Vorpommern and Niedersachsen. The findings indicate an emerging awareness of the existence of SIH in Germany; however, this awareness is confined to only a few publishing ministries at both the national and state levels. The suggested approaches to tackling SIH vary considerably across the analysed documents, and no consensus can be identified regarding a preference for upstream or downstream policies. The existence of the social gradient is not questioned in any of the analysed data. There does, however, appear to be common agreement on the importance of setting-related interventions and on contributions from the national, regional, and local political levels. Since the absence of a central coordinator can explain these highly heterogeneous findings, the key recommendations concern the establishment of a nation-wide coordinator and a nation-wide collection of best-practice examples. The Federal Centre for Health Education is well positioned and has the required competences to act as such a coordinator and facilitator. Further requirements for a successful reduction of SIH in Germany are continuous communication between all actors, the adoption of the planned German Prevention Law, and the nation-wide, early promotion of children as part of education policies in the federal states.
Abstract:
Magdeburg, University, Faculty of Computer Science, dissertation, 2012
Abstract:
Magdeburg, University, Faculty of Mathematics, dissertation, 2013
Abstract:
We analyze the classical Bertrand model when consumers exhibit some strategic behavior in deciding from which seller they will buy. We use two related but different tools. Both consider a probabilistic learning (or evolutionary) mechanism, and in both of them consumers' behavior influences the competition between the sellers. The results show that, in general, developing some sort of loyalty is a good strategy for the buyers, as it works in their best interest. First, we consider a learning procedure described by a deterministic dynamic system and, using strong simplifying assumptions, produce a description of the process behavior. Second, we use finite automata to represent the strategies played by the agents and an adaptive process based on genetic algorithms to simulate the stochastic process of learning. By doing so we can relax some of the strong assumptions used in the first approach and still obtain the same basic results. It is suggested that the limitations of the first (analytical) approach provide a good motivation for the second (agent-based) approach. Indeed, although both approaches address the same problem, the use of agent-based computational techniques allows us to relax hypotheses and overcome the limitations of the analytical approach.
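The abstract gives no implementation detail, so the following is only a much-simplified, illustrative agent-based sketch of the feedback loop it describes (consumer loyalty influencing Bertrand price competition); the loyalty rule, the threshold, and the seller adaptation step are all assumptions, not the thesis' finite-automata/genetic-algorithm model.

# Hedged illustration: simplified agent-based Bertrand duopoly with consumer loyalty.
import random

random.seed(0)

N_CONSUMERS = 200
N_ROUNDS = 500
LOYALTY = 0.7      # probability a consumer re-buys from the previous seller
THRESHOLD = 0.05   # price difference a loyal consumer tolerates
COST = 1.0         # common marginal cost

prices = [2.0, 2.0]                                   # sellers' current prices
last_seller = [random.randrange(2) for _ in range(N_CONSUMERS)]

for t in range(N_ROUNDS):
    profits = [0.0, 0.0]
    for i in range(N_CONSUMERS):
        mine, other = last_seller[i], 1 - last_seller[i]
        # a loyal consumer switches only when the rival undercuts noticeably
        if prices[other] < prices[mine] - THRESHOLD or random.random() > LOYALTY:
            choice = 0 if prices[0] <= prices[1] else 1
        else:
            choice = mine
        profits[choice] += prices[choice] - COST
        last_seller[i] = choice
    # naive seller adaptation: the less profitable seller experiments with a
    # small random price change (a stand-in for the thesis' learning dynamics)
    loser = 0 if profits[0] < profits[1] else 1
    prices[loser] = max(COST, prices[loser] + random.uniform(-0.1, 0.1))

print("final prices:", [round(p, 2) for p in prices])

Varying LOYALTY and THRESHOLD shows how the consumers' switching rule feeds back into the sellers' pricing, which is the mechanism the thesis studies with far richer learning dynamics.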
Abstract:
We review recent likelihood-based approaches to modeling demand for medical care. A semi-nonparametric model along the lines of Cameron and Johansson's Poisson polynomial model, but using a negative binomial baseline model, is introduced. We apply these models, as well as a semiparametric Poisson model, a hurdle semiparametric Poisson model, and finite mixtures of negative binomial models, to six measures of health care usage taken from the Medical Expenditure Panel Survey. We conclude that most of the models lead to statistically similar results, both in terms of information criteria and of conditional and unconditional prediction. This suggests that applied researchers may not need to be overly concerned with the choice of which of these models they use to analyze data on health care demand.
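As a hedged illustration of the kind of information-criteria comparison the abstract refers to, the sketch below fits a Poisson and a negative binomial model to simulated overdispersed visit counts with statsmodels; the data, covariates, and parameter values are invented, and the paper's semi-nonparametric polynomial expansions are not implemented.

# Hedged sketch: comparing count-data models on simulated health-care usage data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
X = sm.add_constant(rng.normal(size=(n, 2)))        # e.g. stand-ins for age, income
mu = np.exp(X @ np.array([0.5, 0.3, -0.2]))
y = rng.negative_binomial(n=2, p=2 / (2 + mu))      # overdispersed visit counts

poisson_fit = sm.Poisson(y, X).fit(disp=False)
negbin_fit = sm.NegativeBinomial(y, X).fit(disp=False)

for name, fit in [("Poisson", poisson_fit), ("NegBin", negbin_fit)]:
    print(f"{name}: AIC={fit.aic:.1f}  BIC={fit.bic:.1f}")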
Abstract:
The potential and applicability of UHPSFC-MS/MS for anti-doping screening in urine samples were tested for the first time. For this purpose, a group of 110 doping agents with diverse physicochemical properties was analyzed using two separation techniques, namely UHPLC-MS/MS and UHPSFC-MS/MS, in both ESI+ and ESI- modes. The two approaches were compared in terms of selectivity, sensitivity, linearity, and matrix effects. As expected, very diverse retentions and selectivities were obtained in UHPLC and UHPSFC, demonstrating the good complementarity of these analytical strategies. Under both conditions, acceptable peak shapes and MS detection capabilities were obtained within a 7 min analysis time, enabling the application of the two methods for screening purposes. Method sensitivity was found to be comparable for 46% of the tested compounds, while higher sensitivity was observed for 21% of the tested compounds in UHPLC-MS/MS and for 32% in UHPSFC-MS/MS. The latter demonstrated a lower susceptibility to matrix effects, which were mostly observed as signal suppression. In the case of UHPLC-MS/MS, more serious matrix effects were observed, typically leading to signal enhancement, and the matrix effect was also concentration-dependent, i.e., more significant matrix effects occurred at the lowest concentrations.
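The abstract does not state how matrix effects were quantified; the snippet below only illustrates the post-extraction-spike formula commonly used for this purpose in LC-MS and SFC-MS work, with invented peak areas.

# Hedged sketch: generic matrix-effect calculation (not the study's own procedure).
def matrix_effect_percent(area_in_matrix: float, area_in_solvent: float) -> float:
    """Return matrix effect in %; negative = ion suppression, positive = enhancement."""
    return (area_in_matrix / area_in_solvent - 1.0) * 100.0

# Example: peak area 8.2e5 in post-extraction spiked urine vs 1.0e6 in neat solvent
print(f"{matrix_effect_percent(8.2e5, 1.0e6):+.1f} %")   # -18.0 % -> suppression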
Abstract:
Drug abuse is a widespread problem affecting both teenagers and adults. Nitrous oxide is becoming increasingly popular as an inhalation drug, causing harmful neurological and hematological effects. Some gas chromatography-mass spectrometry (GC-MS) methods for nitrous oxide measurement have been described previously. The main drawbacks of these methods are a lack of sensitivity for forensic applications and an inability to quantitatively determine the concentration of gas present. The following study provides a validated HS-GC-MS method that incorporates hydrogen sulfide as a suitable internal standard, allowing the quantification of nitrous oxide. Upon analysis, the sample and the internal standard have similar retention times and elute quickly from the molecular sieve 5Å PLOT capillary column and the Porabond Q column, providing rapid data collection while preserving well-defined peaks. After validation, the method was applied to a real case of N2O intoxication, providing quantitative concentrations in a mono-intoxication.
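As a hedged sketch of how an internal standard is typically used for quantification in this kind of HS-GC-MS work (the calibration values below are simulated, not taken from the study), the analyte peak area is normalised by the internal-standard area and read off a response-ratio calibration line:

# Hedged sketch: internal-standard quantification with simulated calibration data.
import numpy as np

# Simulated calibration: N2O concentration (arbitrary units) vs area ratio
conc = np.array([5.0, 10.0, 25.0, 50.0, 100.0])
area_ratio = np.array([0.11, 0.21, 0.52, 1.02, 2.05])   # area(N2O) / area(H2S IS)

slope, intercept = np.polyfit(conc, area_ratio, 1)        # linear calibration line

def quantify(sample_ratio: float) -> float:
    """Back-calculate concentration from a sample's area ratio."""
    return (sample_ratio - intercept) / slope

print(f"estimated concentration: {quantify(0.75):.1f} units")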
Abstract:
This work investigates applying introspective reasoning to improve the performance of Case-Based Reasoning (CBR) systems, in both a reactive and a proactive fashion, by guiding learning to improve how a CBR system applies its cases and by identifying possible future system deficiencies. First we present our reactive approach, a new introspective reasoning model which enables CBR systems to autonomously learn to improve multiple facets of their reasoning processes in response to poor-quality solutions. We illustrate our model's benefits with experimental results from tests in an industrial design application. For our proactive approach, we introduce a novel method for identifying regions in a case base where the system gives low-confidence solutions to possible future problems. Experimental results are provided for the Zoology and Robo-Soccer domains, and we argue how the identified regions of dubiosity help to analyze the case base of a given CBR system.
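The abstract gives no detail of the authors' method, so the following is only a generic nearest-neighbour confidence heuristic for a case base, offered to make the idea of "low-confidence regions" concrete; the features, labels, and confidence formula are all assumptions, not the paper's technique.

# Hedged sketch: generic confidence heuristic over a case base (not the authors' method).
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
cases = rng.normal(size=(100, 4))                 # stored case features (simulated)
labels = (cases[:, 0] > 0).astype(int)            # stored case solutions (simulated)

nn = NearestNeighbors(n_neighbors=5).fit(cases)

def confidence(query: np.ndarray) -> float:
    """Crude confidence: neighbour agreement discounted by mean distance to neighbours."""
    dist, idx = nn.kneighbors(query.reshape(1, -1))
    majority = np.round(np.mean(labels[idx[0]]))
    agreement = np.mean(labels[idx[0]] == majority)
    return float(agreement / (1.0 + dist.mean()))

print(f"confidence near stored cases: {confidence(cases[0]):.2f}")
print(f"confidence far from cases:    {confidence(np.full(4, 5.0)):.2f}")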
Abstract:
Report for the scientific sojourn at the Swiss Federal Institute of Technology Zurich, Switzerland, between September and December 2007. In order to make robots useful assistants in our everyday life, the ability to learn and recognize objects is of essential importance. However, object recognition in real scenes is one of the most challenging problems in computer vision, as it requires dealing with numerous difficulties. Furthermore, mobile robotics adds a new challenge to the list: computational complexity. In a dynamic world, information about the objects in the scene can become obsolete before it is ready to be used if the detection algorithm is not fast enough. Two recent object recognition techniques have achieved notable results: the constellation approach proposed by Lowe and the bag-of-words approach proposed by Nistér and Stewénius. The Lowe constellation approach is the one currently used for robot localization within the COGNIRON project. This report is divided into two main sections. The first section briefly reviews the currently used object recognition system, the Lowe approach, and brings to light the drawbacks found for object recognition in the context of indoor mobile robot navigation; the proposed improvements to the algorithm are also described. The second section reviews the alternative bag-of-words method, together with several experiments conducted to evaluate its performance on our own object databases. Furthermore, some modifications to the original algorithm to make it suitable for object detection in unsegmented images are proposed.
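As a hedged sketch of the generic bag-of-visual-words pipeline the report reviews (not the exact Nistér and Stewénius vocabulary tree): local descriptors are clustered into a visual vocabulary, and each image is then represented as a histogram of visual-word occurrences. The descriptors below are simulated rather than extracted from real images.

# Hedged sketch: generic bag-of-visual-words representation with simulated descriptors.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# pretend each training image yields ~200 local descriptors of dimension 64
train_descriptors = rng.normal(size=(5 * 200, 64))

k = 50                                             # vocabulary size (assumption)
vocab = KMeans(n_clusters=k, n_init=10, random_state=0).fit(train_descriptors)

def bow_histogram(descriptors: np.ndarray) -> np.ndarray:
    """Map an image's descriptors to a normalised visual-word histogram."""
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()

query_hist = bow_histogram(rng.normal(size=(200, 64)))
print(query_hist.shape)                            # (50,)

In practice the descriptors would come from a local feature detector such as SIFT, and recognition would compare histograms between a query image and the database.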
Abstract:
We ask whether MNEs’ experience of institutional quality and political risk within their “home” business environments influences their decisions to enter a given country. We set out an explicit theoretical model that allows for the possibility that firms from South source countries may, by virtue of their experience with poor institutional quality, derive a competitive advantage over firms from North countries with respect to investing in destinations in the South. We show that the experience gained by such MNEs of poorer institutional environments may result in their being more prepared to invest in other countries with correspondingly weak institutions.
Abstract:
The monetary policy reaction function of the Bank of England is estimated by the standard GMM approach and the ex-ante forecast method developed by Goodhart (2005), with particular attention to the horizons for inflation and output at which each approach gives the best fit. The horizons for the ex-ante approach are much closer to what is implied by the Bank’s view of the transmission mechanism, while the GMM approach produces an implausibly slow adjustment of the interest rate, and suffers from a weak instruments problem. These findings suggest a strong preference for the ex-ante approach.
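For context (and as an assumption about the exact specification, which the abstract does not spell out), the reaction functions estimated in this literature are typically forward-looking Taylor-type rules with interest-rate smoothing, of the form

i_t = \rho\, i_{t-1} + (1 - \rho)\left( \alpha + \beta\, E_t[\pi_{t+h}] + \gamma\, E_t[x_{t+k}] \right) + \varepsilon_t

where i_t is the policy rate, \pi_{t+h} inflation at horizon h, x_{t+k} the output gap at horizon k, and \rho the smoothing parameter. Broadly, the GMM approach instruments the expectation terms with lagged data, whereas Goodhart's ex-ante method uses the Bank's published forecasts directly.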