23 results for Scientific Research Programme


Relevance: 80.00%

Abstract:

Executive Summary: The unifying theme of this thesis is the pursuit of satisfactory ways to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields of economics and of broader scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we implement an idea from the field of fuzzy set theory in the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results on asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways of quantifying risk-reward trade-offs, the third can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds.

Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative-agent model that addresses some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility we use nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings. The empirical investigation intended to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one.

Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on the ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that the resulting realized returns have better distributional characteristics than the realized returns of portfolio strategies that are optimal with respect to a single performance measure. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance. We first used the Kolmogorov-Smirnov test to determine whether two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the proposed aggregation of performance measures leads to realized portfolio returns that first-order stochastically dominate those resulting from optimization with respect to a single measure such as the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, i.e. the sequence of expected shortfalls over a range of quantiles. Since the plot of the absolute Lorenz curve for the aggregated performance measures lay above the one corresponding to each individual measure, we were led to conclude that the algorithm we propose yields a portfolio return distribution that second-order stochastically dominates those obtained from virtually all of the individual performance measures considered.

Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member-country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests an inherent weakness in any attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results about the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
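The dominance checks described in Chapter 2 above lend themselves to a compact numerical sketch. The following Python fragment is our own illustration, not the thesis code: it assumes two samples of realized returns, uses the two-sample Kolmogorov-Smirnov test to check that the distributions differ, compares empirical CDFs pointwise for first-order stochastic dominance, and compares empirical absolute Lorenz curves (cumulative expected shortfalls) pointwise for second-order dominance.

```python
import numpy as np
from scipy import stats

def first_order_dominates(x, y, alpha=0.05):
    """Heuristic FSD check: x dominates y if the KS test rejects equality
    and the empirical CDF of x lies at or below that of y everywhere."""
    if stats.ks_2samp(x, y).pvalue >= alpha:
        return False  # distributions not distinguishable; no dominance claim
    grid = np.union1d(x, y)
    F_x = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    F_y = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return bool(np.all(F_x <= F_y))

def absolute_lorenz(x, quantiles):
    """Empirical absolute Lorenz curve L(p): the integral of the quantile
    function from 0 to p, i.e. p times the expected shortfall at level p."""
    xs = np.sort(x)
    n = len(xs)
    return np.array([p * xs[: max(int(np.ceil(p * n)), 1)].mean()
                     for p in quantiles])

def second_order_dominates(x, y, num_q=99):
    """Pointwise SSD check: the absolute Lorenz curve of x must lie
    at or above that of y at every quantile considered."""
    qs = np.linspace(0.01, 0.99, num_q)
    return bool(np.all(absolute_lorenz(x, qs) >= absolute_lorenz(y, qs)))

# Illustrative use with simulated returns (hypothetical data, not the thesis sample)
rng = np.random.default_rng(0)
agg = rng.normal(0.01, 0.02, 1000)     # stand-in for aggregated-measure returns
single = rng.normal(0.00, 0.02, 1000)  # stand-in for single-measure returns
print(first_order_dominates(agg, single), second_order_dominates(agg, single))
```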

Relevance: 80.00%

Abstract:

The research reported in this series of articles aimed at (1) automating the search for questioned ink specimens in ink reference collections and (2) evaluating the strength of ink evidence in a transparent and balanced manner. These aims require that ink samples be analysed in an accurate and reproducible way and compared in an objective and automated way; the latter requirement is due to the large number of comparisons that are necessary in both scenarios. A research programme was designed to (a) develop a standard methodology for analysing ink samples in a reproducible way, (b) compare ink samples automatically and objectively, and (c) evaluate the proposed methodology in forensic contexts. This report focuses on the last of the three stages of the research programme. The calibration and acquisition process and the mathematical comparison algorithms were described in previous papers [C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science-Part I: Development of a quality assurance process for forensic ink analysis by HPTLC, Forensic Sci. Int. 185 (2009) 29-37; C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science-Part II: Development and testing of mathematical algorithms for the automatic comparison of ink samples analysed by HPTLC, Forensic Sci. Int. 185 (2009) 38-50]. In this paper, the benefits and challenges of the proposed concepts are tested in two forensic contexts: (1) ink identification and (2) ink evidential value assessment. The results show that different algorithms are better suited to different tasks. This research shows that it is possible to build digital ink libraries using the most commonly used ink analytical technique, i.e. high-performance thin-layer chromatography (HPTLC), despite its reputation for lacking reproducibility. More importantly, it is possible to assign evidential value to ink evidence in a transparent way using a probabilistic model. It is therefore possible to move away from the traditional subjective approach, which is based entirely on experts' opinion and is usually not very informative. While there is room for improvement, this report demonstrates the significant gains obtained over the traditional subjective approach in both the search for ink specimens in ink databases and the interpretation of their evidential value.
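As an illustration of the two tasks above, the sketch below shows, in Python, what a correlation-based library search and a score-based likelihood ratio could look like. The similarity measure (Pearson correlation between intensity profiles) and the normal score models are illustrative assumptions on our part, not the algorithms developed by Neumann and Margot.

```python
import numpy as np
from scipy.stats import norm

def similarity(profile_a, profile_b):
    """Illustrative comparison score: Pearson correlation between two ink
    profiles (e.g. HPTLC band intensities sampled on a common axis)."""
    return float(np.corrcoef(profile_a, profile_b)[0, 1])

def search_library(questioned, library, top_k=5):
    """Scenario (1): rank reference inks by similarity to a questioned specimen.
    `library` is a hypothetical dict mapping ink names to profile arrays."""
    scores = {name: similarity(questioned, ref) for name, ref in library.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

def likelihood_ratio(score, same_ink_scores, diff_ink_scores):
    """Scenario (2): score-based likelihood ratio, i.e. the density of the
    observed score under the 'same ink' hypothesis divided by its density
    under the 'different inks' hypothesis, with normal models fitted to
    calibration scores (a simplistic stand-in for the published model)."""
    f_same = norm(np.mean(same_ink_scores), np.std(same_ink_scores)).pdf(score)
    f_diff = norm(np.mean(diff_ink_scores), np.std(diff_ink_scores)).pdf(score)
    return float(f_same / f_diff)
```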

Relevance: 80.00%

Abstract:

ABSTRACT: BACKGROUND: A central question for ecologists is the extent to which anthropogenic disturbances (e.g. tourism) might impact wildlife and affect the systems under study. From a research perspective, identifying the effects of human disturbance caused by research-related activities is crucial in order to understand and account for potential biases and derive appropriate conclusions from the data. RESULTS: Here, we document a case of biological adjustment to chronic human disturbance in a colonial seabird, the king penguin (Aptenodytes patagonicus), breeding on remote and protected islands of the Southern Ocean. Using heart rate (HR) as a measure of the stress response, we show that, in a colony with areas exposed to the continuous presence of humans (including scientists) for over 50 years, penguins have adjusted to human disturbance and habituated to certain, but not all, types of stressors. When compared to birds breeding in relatively undisturbed areas, birds in areas of high chronic human disturbance exhibited attenuated HR responses to low-intensity acute anthropogenic stressors (i.e. sounds or human approaches) to which they had been subjected intensely over the years. However, no such attenuation was apparent for high-intensity stressors (i.e. captures for scientific research), which only a few individuals experience each year. CONCLUSIONS: Habituation to anthropogenic sounds/approaches could be an adaptation to deal with chronic innocuous stressors, and beneficial from a research perspective. Alternatively, human presence may have driven the directional selection of human-tolerant phenotypes. Whether penguins have actually habituated to anthropogenic disturbance over time or whether such selection has been at work remains an open question with profound ecological and conservation implications, and emphasizes the need for more knowledge of the effects of human disturbance on populations under long-term study.

Relevance: 80.00%

Abstract:

More than 80% of vascular plants in the world form symbioses with arbuscular mycorrhizal fungi (AMF). AMF supply plants with nutrients such as phosphate and nitrogen, and can also help plants take up water. Hence, the symbiosis can greatly influence both the growth and the defence of plants. By modifying plant productivity and diversity, AMF are considered keystone species in ecosystems, playing a role that ultimately affects many food webs. This is why mycorrhizal symbioses have been investigated for several decades by many research groups.

However, a large part of the scientific research done on the AMF symbiosis has focused on the interaction between one plant and one fungus. This situation is far from realistic: in natural ecosystems, many different fungal strains and species coexist and interact in a belowground network. The main goal of this PhD was to investigate, first, the interactions occurring among different coexisting AMF depending on their genetic relatedness and, second, the outcomes of those interactions and their effects on associated species.

We found that AMF genetic relatedness partly explains the interactions among AMF, in agreement with theories developed for completely different species. Briefly, we demonstrated that AMF isolates of the same species coexisted more easily when they were closely related, whereas AMF from different species competed more strongly when relatedness was high. We also demonstrated that coexistence and competition among AMF can mediate plant growth as well as herbivore behaviour, offering new insights into our understanding of AMF effects on ecosystem functioning.

Overall, the results of the different experiments of this PhD highlight the necessity of using multiple AMF to understand their interactions. Moreover, we demonstrated that species richness alone is not enough to understand these interactions: genetic relatedness among the coexisting AMF is a parameter that must be taken into account.

Relevance: 80.00%

Abstract:

Traditionally, Live High-Train High (LHTH) interventions, in which athletes live and train at altitude, have been adopted to try to maximise the benefits offered by hypoxic exposure and improve sea-level performance. Nevertheless, scientific research has suggested that the possible benefits of hypoxia may be offset by the inability to maintain high training intensity at altitude. However, elite athletes have rarely been recruited as an experimental sample, and training intensity has almost never been monitored during altitude research. This case study provides a practical example of successful LHTH interventions in two Olympic gold-medal athletes. Training diaries were collected, and total training volumes, volumes at different intensities, and sea-level performance were recorded before, during, and after a 3-week LHTH camp. Both athletes successfully completed the LHTH camp (2090 m), maintaining absolute training intensity and high-intensity training volume (>91% of race pace) similar to those at sea level. After the LHTH intervention, both athletes improved their performance and went on to win an Olympic gold medal. In our opinion, LHTH interventions can be used as a simple yet effective method to maintain absolute, and improve relative, training intensity in elite endurance athletes.

Key points:
- Elite endurance athletes with extensive altitude-training experience can maintain an absolute training intensity during LHTH similar to that at sea level.
- LHTH may be considered an effective method to increase relative training intensity while maintaining the same running/walking pace, with possible beneficial effects on sea-level performance.
- Training intensity could be the key factor for a successful high-level LHTH camp.
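To make the intensity bookkeeping concrete, here is a minimal Python sketch of how volumes at different intensities might be tallied from a training diary. The session format, the race speed, and the reading of ">91% of race pace" as a speed threshold are our assumptions for illustration, not the athletes' actual monitoring tools.

```python
from dataclasses import dataclass

RACE_SPEED_KMH = 15.0           # hypothetical race speed
HIGH_INTENSITY_FRACTION = 0.91  # threshold from the case study: >91% of race pace

@dataclass
class Session:
    distance_km: float
    duration_h: float

    @property
    def speed_kmh(self) -> float:
        return self.distance_km / self.duration_h

def weekly_summary(sessions):
    """Total volume, and the volume completed above the high-intensity threshold."""
    total = sum(s.distance_km for s in sessions)
    hard = sum(s.distance_km for s in sessions
               if s.speed_kmh > HIGH_INTENSITY_FRACTION * RACE_SPEED_KMH)
    return {"total_km": total, "high_intensity_km": hard,
            "high_intensity_share": hard / total if total else 0.0}

# Example with a hypothetical diary week of three sessions
week = [Session(12, 1.0), Session(20, 1.6), Session(8, 0.5)]
print(weekly_summary(week))
```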

Relevance: 80.00%

Abstract:

Analyzing the type and frequency of patient-specific mutations that give rise to Duchenne muscular dystrophy (DMD) is an invaluable tool for diagnostics, basic scientific research, trial planning, and improved clinical care. Locus-specific databases allow for the collection, organization, storage, and analysis of genetic variants of disease. Here, we describe the development and analysis of the TREAT-NMD DMD Global database (http://umd.be/TREAT_DMD/). We analyzed genetic data for 7,149 DMD mutations held within the database. A total of 5,682 large mutations were observed (80% of total mutations), of which 4,894 (86%) were deletions (1 exon or larger) and 784 (14%) were duplications (1 exon or larger). There were 1,445 small mutations (smaller than 1 exon; 20% of all mutations), of which 358 (25%) were small deletions, 132 (9%) were small insertions, and 199 (14%) affected splice sites. Point mutations totalled 756 (52% of small mutations), with 726 (50%) nonsense mutations and 30 (2%) missense mutations. Finally, 22 (0.3%) mid-intronic mutations were observed. In addition, mutations were identified within the database that would potentially be amenable to novel genetic therapies for DMD, including stop-codon read-through therapies (10% of total mutations) and exon-skipping therapy (80% of deletions and 55% of total mutations).
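The category shares quoted above follow directly from the reported counts. The short Python sketch below recomputes them; the data layout is our own, and only figures stated in the abstract are used.

```python
# Mutation counts as reported for the TREAT-NMD DMD Global database
reported = {
    "total": 7149,
    "large": {"total": 5682, "deletion": 4894, "duplication": 784},
    "small": {"total": 1445, "deletion": 358, "insertion": 132,
              "splice site": 199, "nonsense": 726, "missense": 30},
    "mid-intronic": 22,
}

def share(n, of):
    """Format a count together with its percentage of a reference total."""
    return f"{n} ({n / of:.1%})"

print("large mutations :", share(reported["large"]["total"], reported["total"]))
print("  deletions     :", share(reported["large"]["deletion"], reported["large"]["total"]))
print("  duplications  :", share(reported["large"]["duplication"], reported["large"]["total"]))
print("small mutations :", share(reported["small"]["total"], reported["total"]))
print("point mutations :", share(reported["small"]["nonsense"] + reported["small"]["missense"],
                                 reported["small"]["total"]))
# Nonsense mutations are the candidates for stop-codon read-through therapy:
print("read-through eligible:", share(reported["small"]["nonsense"], reported["total"]))
```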

Relevance: 80.00%

Abstract:

AIM: In the past few years, spectacular progress in neuroscience has led to the emergence of a new interdisciplinary field, the so-called "neurolaw", whose goal is to explore the effects of neuroscientific discoveries on legal proceedings and legal rules and standards. In the United States, a number of neuroscientific studies are designed specifically to explore legally relevant topics, and a body of case-law has already developed. In Europe, neuroscientific evidence is increasingly being used in criminal courtrooms as part of psychiatric testimony, nourishing the debate about the legal implications of brain research in psychiatric-legal settings. Though widely debated, the use of neuroscience in legal contexts had until recently not been specifically regulated by any legislation. In 2011, with its new bioethics law, France became the first country to admit by law the use of brain imaging in judicial expertise. According to the new law, brain imaging techniques can be used only for medical purposes, for scientific research, or in the context of judicial expertise. This study aims to give an overview of the current state of neurolaw in the US and Europe, and to investigate the ethical issues raised by this new law and its potential impact on the rights and civil liberties of offenders. METHOD: An overview of the emergence and development of "neurolaw" in the United States and Europe is given. The new French law is then examined in the light of the relevant debates in the French parliament. We then outline the current tendency in the neurolaw literature to focus on assessments of responsibility rather than dangerousness. This tendency is analysed notably in relation to the legal context of criminal policies in France, where recent changes in the legislation and practice of forensic psychiatry show that dangerousness assessments have become paramount in judicial decision-making. Finally, the potential interpretations by judges of neuroscientific data introduced into psychiatric testimony are explored. RESULTS: The examination of parliamentary debates showed that the new French law allowing neuroimaging techniques in judicial expertise was introduced with the aim of providing a legal framework that would protect the subject against potential misuses of neuroscience. The underlying fear, above all, was that this technology would be used as a lie detector or as a means of predicting the subject's behaviour. However, the possibility of such misuse remains open. Contrary to the legislator's wish, the defendant is not fully protected against uses of neuroimaging techniques in criminal courts that would go against their interests and rights. In fact, the examination of recently adopted legislation in France shows that assessments of dangerousness and of the risk of recidivism have become central elements of criminal policy, which makes it possible, if not likely, that neuroimaging techniques will be used to evaluate the dangerousness of defendants. This could entail risks for them, as judges could perceive neuroscientific data as hard evidence, more scientific and reliable than the soft data of traditional psychiatry. If such neuroscientific data are interpreted as signs of the potential dangerousness of a subject rather than as evidence bearing on criminal responsibility, defendants may be subjected to longer penalties or to measures aimed at ensuring public safety to the detriment of their freedom.
CONCLUSION: In the current context of a heightened societal demand for security, judges and expert psychiatrists are increasingly asked to evaluate the dangerousness of a subject, regardless of that subject's responsibility. Influenced by this policy model, judges might tend to use neuroscientific data introduced by an expert as signs of dangerousness. Such uses, especially when they subordinate an individual's interests to those of society, might pose serious threats to an individual's freedom and civil liberties.