32 results for Theoretical framework
at Université de Lausanne, Switzerland
Abstract:
Coexisting workloads from professional, household and family, and caregiving activities for frail parents expose middle-aged individuals, the so-called "Sandwich Generation" (SG), to potential health risks. Current trends suggest that this situation will continue or increase. Thus, SG health promotion has become a nursing concern. Most existing research considers coexisting workloads a priori pathogenic, and most studies have examined the association of one, versus two, of these three activities with health. Few studies have used a nursing perspective. This article presents the development of a framework based on a nursing model. We integrated Siegrist's Effort-Reward Imbalance middle-range theory into the Neuman Systems Model. The latter was chosen for its salutogenic orientation, its attention to preventive nursing interventions, and the opportunity it provides to simultaneously consider positive and negative perceptions of SG health and SG coexisting workloads. Finally, it facilitated a theoretical identification of health protective factors.
Abstract:
Research question: International and national sport federations as well as their member organisations are key actors within the sport system and have a wide range of relationships outside the sport system (e.g. with the state, sponsors, and the media). They are currently facing major challenges such as growing competition in top-level sports, democratisation of sports with 'sports for all' and sports as the answer to social problems. In this context, professionalising sport organisations seems to be an appropriate strategy to face these challenges and current problems. We define the professionalisation of sport organisations as an organisational process of transformation leading towards organisational rationalisation, efficiency and business-like management. This has led to a profound organisational change, particularly within sport federations, characterised by the strengthening of institutional management (managerialism) and the implementation of efficiency-based management instruments and paid staff. Research methods: The goal of this article is to review the current international literature and establish a global understanding of and theoretical framework for analysing why and how sport organisations professionalise and what consequences this may have. Results and findings: Our multi-level approach based on the social theory of action integrates the current concepts for analysing professionalisation in sport federations. We specify the framework for the following research perspectives: (1) forms, (2) causes and (3) consequences, and discuss the reciprocal relations between sport federations and their member organisations in this context. Implications: Finally, we work out a research agenda and derive general methodological consequences for the investigation of professionalisation processes in sport organisations.
Abstract:
The progressive development of Alzheimer's disease (AD)-related lesions such as neurofibrillary tangles, amyloid deposits and synaptic loss within the cerebral cortex is a main event of brain aging. Recent neuropathologic studies strongly suggest that the clinical diagnosis of dementia depends more on the severity and topography of pathologic changes than on the presence of a qualitative marker. However, several methodological problems such as selection biases, case-control design, density-based measures, and masking effects of concomitant pathologies should be taken into account when interpreting these data. In recent years, the use of stereologic counting has made it possible to define reliably the cognitive impact of AD lesions in the human brain. Unlike fibrillar amyloid deposits, which are poorly or not at all related to dementia severity, this method has documented that total neurofibrillary tangle and neuron numbers in the CA1 field are the best correlates of cognitive deterioration in brain aging. Loss of dendritic spines in neocortical but not hippocampal areas makes a modest but independent contribution to dementia. In contrast, the importance of early dendritic and axonal tau-related pathologic changes such as neuropil threads remains doubtful. Despite this progress, neuronal pathology and synaptic loss in cases with pure AD pathology cannot explain more than 50% of clinical severity. The present review discusses the complex structure/function relationships in brain aging and AD within the theoretical framework of the functional neuropathology of brain aging.
Abstract:
Abstract: The contribution of ink evidence to forensic science is described and supported by an abundant literature and by two standards from the American Society for Testing and Materials (ASTM). The vast majority of the available literature is concerned with the physical and chemical analysis of ink evidence. The relevant ASTM standards mention some principles regarding the comparison of pairs of ink samples and the evaluation of their evidential value. The review of this literature and, more specifically, of the ASTM standards in the light of recent developments in the interpretation of forensic evidence has shown some potential improvements, which would maximise the benefits of the use of ink evidence in forensic science. This thesis proposes to interpret ink evidence using the widely accepted and recommended Bayesian framework. This proposition has required the development of a new quality assurance process for the analysis and comparison of ink samples, as well as the definition of a theoretical framework for ink evidence. The proposed technology has been extensively tested using a large dataset of ink samples and state-of-the-art tools commonly used in biometry. Overall, this research successfully answers a concrete problem generally encountered in forensic science, where scientists tend to limit the usefulness of the information present in various types of evidence by trying to answer the wrong questions. The declaration of an explicit framework, which defines and formalises their goals and expected contributions to the criminal and civil justice system, enables the determination of their needs in terms of technology and data. The development of this technology and the collection of the data is then justified economically, structured scientifically, and can proceed efficiently.
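The Bayesian interpretation described above weighs how probable the observed comparison score is under the same-source and different-source propositions. A minimal sketch of that likelihood-ratio computation follows; the Gaussian score distributions and their parameters are illustrative assumptions of mine, not values published in the thesis.

```python
import math

def gaussian_pdf(x, mean, std):
    """Probability density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def likelihood_ratio(score, same_mean=0.9, same_std=0.05,
                     diff_mean=0.4, diff_std=0.15):
    """Bayesian likelihood ratio: how much more probable the observed
    comparison score is if the two inks share a source than if they do not.
    The four distribution parameters are hypothetical placeholders."""
    return (gaussian_pdf(score, same_mean, same_std)
            / gaussian_pdf(score, diff_mean, diff_std))

# A high similarity score supports the same-source proposition (LR >> 1),
# a low score supports the different-source proposition (LR << 1).
print(likelihood_ratio(0.85))
print(likelihood_ratio(0.30))
```

In practice the two score distributions would be fitted to genuine same-source and different-source ink comparisons rather than assumed.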
Abstract:
Executive Summary The first essay of this dissertation investigates whether greater exchange rate uncertainty (i.e., variation over time in the exchange rate) fosters or depresses the foreign investment of multinational firms. In addition to the direct capital financing it supplies, foreign investment can be a source of valuable technology and know-how, which can have substantial positive effects on a host country's economic growth. Thus, it is critically important for policy makers and central bankers, among others, to understand how multinationals base their investment decisions on the characteristics of foreign exchange markets. In this essay, I first develop a theoretical framework to improve our knowledge regarding how the aggregate level of foreign investment responds to exchange rate uncertainty when an economy consists of many firms, each of which is making its own decisions. The analysis predicts a U-shaped effect of exchange rate uncertainty on the total level of foreign investment of the economy. That is, the effect is negative for low levels of uncertainty and positive for higher levels of uncertainty. This pattern emerges because the relationship between exchange rate volatility and the probability of investment is negative for firms with low productivity at home (i.e., firms that find it profitable to invest abroad) and positive for firms with high productivity at home (i.e., firms that prefer exporting their product). This finding stands in sharp contrast to predictions in the existing literature that consider a single firm's decision to invest in a unique project. The main contribution of this research is to show that the aggregation over many firms produces a U-shaped pattern between exchange rate uncertainty and the probability of investment. Using data from industrialized countries for the period 1982-2002, this essay offers a comprehensive empirical analysis that provides evidence in support of the theoretical prediction.
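The aggregation argument above can be made concrete with a toy simulation: low-productivity firms invest abroad less as volatility rises, high-productivity firms switch from exporting to investing more, and averaging over a population of heterogeneous firms produces the U-shape. The thresholds and functional forms below are my own illustrative assumptions, not the essay's actual model.

```python
import random

random.seed(1)

def investment_probability(productivity, volatility):
    """Stylised firm-level rule (hypothetical numbers): below the 0.5
    productivity cutoff a firm's investment probability falls with
    uncertainty; above it, the probability rises."""
    if productivity < 0.5:
        return max(0.0, 0.4 - volatility)
    return min(1.0, 0.1 + 0.5 * volatility)

def aggregate_investment(volatility, n_firms=10_000):
    """Expected share of firms investing abroad at a given volatility,
    averaging over firms with uniformly distributed productivity."""
    firms = [random.random() for _ in range(n_firms)]
    return sum(investment_probability(p, volatility) for p in firms) / n_firms

# Aggregate investment first falls, then rises, as volatility grows:
for v in (0.1, 0.4, 0.8):
    print(v, round(aggregate_investment(v), 3))
```

The individual firm rules are monotone in volatility; only the aggregation over heterogeneous firms produces the non-monotone U-shaped response.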
In the second essay, I aim to explain the time variation in sovereign credit risk, which captures the risk that a government may be unable to repay its debt. The importance of correctly evaluating such a risk is illustrated by the central role of sovereign debt in previous international lending crises. In addition, sovereign debt is the largest asset class in emerging markets. In this essay, I provide a pricing formula for the evaluation of sovereign credit risk in which the decision to default on sovereign debt is made by the government. The pricing formula explains the variation across time in daily credit spreads - a widely used measure of credit risk - to a degree not offered by existing theoretical and empirical models. I use information on a country's stock market to compute the prevailing sovereign credit spread in that country. The pricing formula explains a substantial fraction of the time variation in daily credit spread changes for Brazil, Mexico, Peru, and Russia for the 1998-2008 period, particularly during the recent subprime crisis. I also show that when a government incentive to default is allowed to depend on current economic conditions, one can best explain the level of credit spreads, especially during the recent period of financial distress. In the third essay, I show that the risk of sovereign default abroad can produce adverse consequences for the U.S. equity market through a decrease in returns and an increase in volatility. The risk of sovereign default, which is no longer limited to emerging economies, has recently become a major concern for financial markets. While sovereign debt plays an increasing role in today's financial environment, the effects of sovereign credit risk on the U.S. financial markets have been largely ignored in the literature. In this essay, I develop a theoretical framework that explores how the risk of sovereign default abroad helps explain the level and the volatility of U.S. equity returns. 
The intuition for this effect is that negative economic shocks deteriorate the fiscal situation of foreign governments, thereby increasing the risk of a sovereign default that would trigger a local contraction in economic growth. The increased risk of an economic slowdown abroad amplifies the direct effect of these shocks on the level and the volatility of equity returns in the U.S. through two channels. The first channel involves a decrease in the future earnings of U.S. exporters resulting from unfavorable adjustments to the exchange rate. The second channel involves investors' incentives to rebalance their portfolios toward safer assets, which depresses U.S. equity prices. An empirical estimation of the model with monthly data for the 1994-2008 period provides evidence that the risk of sovereign default abroad generates a strong leverage effect during economic downturns, which helps to substantially explain the level and the volatility of U.S. equity returns.
Abstract:
Methods like Event History Analysis can show the existence of diffusion and part of its nature, but do not study the process itself. Nowadays, thanks to the increasing performance of computers, processes can be studied using computational modeling. This thesis presents an agent-based model of policy diffusion mainly inspired by the model developed by Braun and Gilardi (2006). I first develop a theoretical framework of policy diffusion that presents its main internal drivers - the preference for the policy, the effectiveness of the policy, the institutional constraints, and the ideology - and its main mechanisms, namely learning, competition, emulation, and coercion. Diffusion, expressed through these interdependencies, is thus a complex process that needs to be studied with computational agent-based modeling. In a second step, computational agent-based modeling is defined along with its most significant concepts: complexity and emergence. Using computational agent-based modeling implies developing an algorithm and programming it. Once the algorithm has been programmed, we let the different agents interact. Consequently, a phenomenon of diffusion, derived from learning, emerges, meaning that the choice made by an agent is conditional on the choices made by its neighbors. As a result, learning follows an inverted S-curve, which leads to partial convergence - global divergence and local convergence - that triggers the emergence of political clusters, i.e. the creation of regions with the same policy. Furthermore, the average effectiveness in this computational world tends to follow a J-shaped curve, meaning that not only is time needed for a policy to deploy its effects, but it also takes time for a country to find the best-suited policy.
To conclude, diffusion is an emergent phenomenon arising from complex interactions, and the outcomes produced by my model are in line with both the theoretical expectations and the empirical evidence.
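The learning mechanism described above can be sketched as a minimal agent-based simulation: agents on a grid adopt a policy with a probability that grows with the number of adopting neighbours, so adoption spreads in local clusters along an S-shaped curve. The grid size, adoption rates and update rule are illustrative assumptions of mine, loosely in the spirit of Braun and Gilardi (2006), not the thesis's actual algorithm.

```python
import random

random.seed(42)

SIZE = 20          # agents live on a SIZE x SIZE torus
ADOPT_BASE = 0.01  # small chance of independent adoption
LEARN_RATE = 0.15  # extra adoption chance per adopting neighbour

def neighbours(i, j):
    """Von Neumann neighbourhood on a torus."""
    return [((i - 1) % SIZE, j), ((i + 1) % SIZE, j),
            (i, (j - 1) % SIZE), (i, (j + 1) % SIZE)]

def step(grid):
    """One synchronous update: non-adopters learn from adopting neighbours."""
    new = [row[:] for row in grid]
    for i in range(SIZE):
        for j in range(SIZE):
            if not grid[i][j]:
                adopters = sum(grid[a][b] for a, b in neighbours(i, j))
                if random.random() < ADOPT_BASE + LEARN_RATE * adopters:
                    new[i][j] = 1
    return new

grid = [[0] * SIZE for _ in range(SIZE)]
grid[SIZE // 2][SIZE // 2] = 1          # seed a single early adopter
history = []                            # adopters per period: an S-curve
for _ in range(60):
    history.append(sum(map(sum, grid)))
    grid = step(grid)
```

Plotting `history` against time shows the characteristic slow-fast-slow adoption curve, and inspecting intermediate grids shows the contiguous policy clusters the abstract describes.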
Abstract:
This thesis concerns the role of scientific expertise in the decision-making process at the Swiss federal level of government. It aims to understand how institutional and issue-specific factors influence three things: the distribution of access to scientific expertise, its valuation by participants in policy formulation, and the consequence(s) its mobilization has on policy politics and design. The theoretical framework developed builds on the assumption that scientific expertise is a strategic resource. In order to effectively mobilize this resource, actors require financial and organizational resources, as well as the conviction that it can advance their instrumental interests within a particular action situation. Institutions of the political system allocate these financial and organizational resources, influence the supply of scientific expertise, and help shape the venue of its deployment. Issue structures, in turn, condition both interaction configurations and the way in which these are anticipated by actors. This affects the perceived utility of expertise mobilization, mediating its consequences. The findings of this study show that the ability to access and control scientific expertise is strongly concentrated in the hands of the federal administration. Civil society actors have weak capacities to mobilize it, and the autonomy of institutionalized advisory bodies is limited. Moreover, the production of scientific expertise is undergoing a process of professionalization which strengthens the position of the federal administration as the (main) mandating agent. Despite increased political polarization and less inclusive decision-making, scientific expertise remains anchored in the policy subsystem, rather than being used to legitimate policy through appeals to the wider population.
Finally, the structure of a policy problem matters both for expertise mobilization and for the latter's impact on the policy process, because it conditions conflict structures and their anticipation. Structured problems result in a greater overlap between the principal of expertise mobilization and its intended audience, thereby increasing the chance that expertise shapes policy design. Conversely, less structured problems, especially those that involve conflicts about values and goals, reduce the impact of expertise.
Abstract:
International standardisation refers to voluntary technical specifications pertaining to the production and exchange of goods and services across borders. The paper outlines a theoretical framework which spells out the contention of emerging hybrid forms of non-state authority in the global realm. It argues that international standardisation is confronted with a deep rift between promoters of further socialisation of international standards (i.e. a transfer of the universal scope of law into the official framework of standard-setting bodies) and multinational corporations in favour of globalisation of technical standards (i.e. universal recognition of minimal sectoral market-based standards). The problems related to the development of a possible ISO standard of system management in corporate social responsibility provide evidence for the argument.
Abstract:
We build a theoretical framework that allows for endogenous conflict behaviour (i.e., fighting efforts) and for endogenous natural resource exploitation (i.e., speed, ownership, and investments). While depletion is spread in a balanced Hotelling fashion during peace, the presence of conflict creates incentives for rapacious extraction, as this lowers the stakes of future contest. This voracious extraction depresses total oil revenue, especially if world oil demand is relatively elastic and the government's weapon advantage is weak. Some of these political distortions can be overcome by bribing rebels or by government investment in weapons. The shadow of conflict can also make less efficient nationalized oil extraction more attractive than private extraction, as insecure property rights create a holdup problem for the private firm and lead to a lower license fee. Furthermore, the government fights less intensely than the rebels under private exploitation, which leads to more government turnover. Without credible commitment to future fighting efforts, private oil depletion is only lucrative if the government's non-oil office rents are large and weaponry powerful, which guarantees the government a stronger grip on office and makes the holdup problem less severe.
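The rapacious-extraction incentive described above can be illustrated with a stylised two-period depletion problem: conflict risk acts like an extra discount on second-period revenue, tilting extraction toward the present. The linear demand, stock size and risk numbers below are my own toy assumptions, not the paper's model.

```python
def optimal_first_period_extraction(S=10.0, a=20.0, b=1.0,
                                    discount=0.95, risk=0.0):
    """Grid search for the first-period extraction q1 maximising
    q1*(a - b*q1) + discount*(1 - risk)*q2*(a - b*q2), with q2 = S - q1.
    `risk` is the (hypothetical) chance the owner loses the resource
    before period 2, e.g. through conflict."""
    keep = discount * (1 - risk)
    best_q1, best_value = 0.0, float("-inf")
    steps = 1000
    for k in range(steps + 1):
        q1 = S * k / steps
        q2 = S - q1
        value = q1 * (a - b * q1) + keep * q2 * (a - b * q2)
        if value > best_value:
            best_q1, best_value = q1, value
    return best_q1

# Peaceful Hotelling-style depletion is nearly balanced across periods;
# conflict risk shifts extraction toward today.
peace = optimal_first_period_extraction(risk=0.0)
war = optimal_first_period_extraction(risk=0.5)
print(peace, war)
```

Under these toy numbers the peaceful optimum extracts roughly half the stock in period one, while a 50% expropriation risk pushes noticeably more extraction into the first period, which is the voracity effect the abstract describes.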
Abstract:
Despite abundant research on work meaningfulness, the link between work meaningfulness and general ethical attitude at work has not been discussed so far. In this article, we propose a theoretical framework to explain how work meaningfulness contributes to enhanced ethical behavior. We argue that by providing a way for individuals to relate work to their personal core values and identity, work meaningfulness leads to affective commitment - the involvement of one's cognitive, emotional, and physical resources. This, in turn, leads to engagement, facilitates the integration of one's personal values into daily work routines, and thereby reduces the risk of unethical behavior. On the contrary, anomie, that is, the absence of meaning and consequently of personal involvement, will lead to rational rather than affective commitment, and consequently to disengagement and amorality. We conclude with implications for the management of ethical attitudes.
Abstract:
Sex-biased dispersal is an almost ubiquitous feature of mammalian life history, but the evolutionary causes behind these patterns still require much clarification. A quarter of a century after the publication of seminal papers describing general patterns of sex-biased dispersal in both mammals and birds, we review the advances in our theoretical understanding of the evolutionary causes of sex-biased dispersal, and those in statistical genetics that enable us to test hypotheses and measure dispersal in natural populations. We use mammalian examples to illustrate patterns and proximate causes of sex-biased dispersal, because by far the most data are available and because they exhibit an enormous diversity in terms of dispersal strategy, mating and social systems. Recent studies using molecular markers have helped to confirm that sex-biased dispersal is widespread among mammals and varies widely in direction and intensity, but there is a great need to bridge the gap between genetic information, observational data and theory. A review of mammalian data indicates that the relationship between the direction of sex bias and the mating system is not a simple one. The role of social systems emerges as a key factor in determining the intensity and direction of dispersal bias, but there is still need for a theoretical framework that can account for the complex interactions between inbreeding avoidance, kin competition and cooperation to explain the impressive diversity of patterns.
Abstract:
In the first part of this research, three stages were defined for a program to increase the information extracted from ink evidence and maximise its usefulness to the criminal and civil justice system. These stages are (a) developing a standard methodology for analysing ink samples by high-performance thin-layer chromatography (HPTLC) in a reproducible way, even when ink samples are analysed at different times, in different locations, and by different examiners; (b) comparing ink samples automatically and objectively; and (c) defining and evaluating a theoretical framework for the use of ink evidence in a forensic context. This report focuses on the second of the three stages. Using the calibration and acquisition process described in the previous report, mathematical algorithms are proposed to compare ink samples automatically and objectively. The performance of these algorithms is systematically studied under various chemical and forensic conditions using standard performance tests commonly used in biometric studies. The results show that different algorithms are best suited for different tasks. Finally, this report demonstrates how modern analytical and computer technology can be used in the field of ink examination, and how tools developed and successfully applied in other fields of forensic science can help maximise its impact within the field of questioned documents.
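One simple instance of such a comparison algorithm, together with the biometric-style error trade-off used to evaluate it, can be sketched as follows. The correlation-based score and the toy score lists are my own illustrative choices, not necessarily the algorithms studied in the report.

```python
import math

def correlation(profile_a, profile_b):
    """Pearson correlation between two HPTLC intensity profiles,
    used here as a similarity score between two ink samples."""
    n = len(profile_a)
    ma = sum(profile_a) / n
    mb = sum(profile_b) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(profile_a, profile_b))
    va = math.sqrt(sum((a - ma) ** 2 for a in profile_a))
    vb = math.sqrt(sum((b - mb) ** 2 for b in profile_b))
    return cov / (va * vb)

def error_rates(genuine, impostor, threshold):
    """False rejection and false acceptance rates at a decision threshold,
    computed from same-ink (genuine) and different-ink (impostor) scores."""
    frr = sum(s < threshold for s in genuine) / len(genuine)
    far = sum(s >= threshold for s in impostor) / len(impostor)
    return frr, far

# Two proportional profiles score a perfect correlation of 1.0.
print(correlation([1, 2, 3, 4], [2, 4, 6, 8]))
```

Sweeping the threshold over the two score distributions traces the trade-off curve from which biometric summary measures such as the equal error rate are read off.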
Abstract:
A character network represents relations between characters from a text; the relations are based on text proximity, shared scenes/events, quoted speech, etc. Our project sketches a theoretical framework for character network analysis, bringing together narratology, both close and distant reading approaches, and social network analysis. It is in line with recent attempts to automatise the extraction of literary social networks (Elson, 2012; Sack, 2013) and other studies stressing the importance of character-systems (Woloch, 2003; Moretti, 2011). The method we use to build the network is direct and simple. First, we extract co-occurrences from a book index, without the need for text analysis. We then describe the narrative roles of the characters, which we deduce from their respective positions in the network, i.e. the discourse. As a case study, we use the autobiographical novel Les Confessions by Jean-Jacques Rousseau. We start by identifying co-occurrences of characters in the book index of our edition (Slatkine, 2012). Subsequently, we compute four types of centrality: degree, closeness, betweenness, and eigenvector. We then use these measures to propose a typology of narrative roles for the characters. We show that the two parts of Les Confessions, written years apart, are structured around mirroring central figures that bear similar centrality scores. The first part revolves around Rousseau's mentor, a figure of openness. The second part centres on a group of schemers, depicting a period of deep paranoia. We also highlight characters with intermediary roles: they provide narrative links between the societies in the life of the author. The method we detail in this complete case study of character network analysis can be applied to any work documented by an index.
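The pipeline above - build an undirected network from index co-occurrences, then compute centralities - can be sketched in a few lines. The character pairs below are invented placeholders, not the actual index entries of Les Confessions, and only two of the study's four centralities (degree and closeness) are shown.

```python
from collections import deque

# Hypothetical co-occurrence pairs standing in for index entries.
edges = [("Rousseau", "Warens"), ("Rousseau", "Diderot"),
         ("Rousseau", "Grimm"), ("Diderot", "Grimm"),
         ("Warens", "Anet")]

graph = {}
for a, b in edges:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def degree_centrality(node):
    """Share of the other characters this one co-occurs with."""
    return len(graph[node]) / (len(graph) - 1)

def closeness_centrality(node):
    """Inverse of the average shortest-path distance to all other
    characters, computed by breadth-first search."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        current = queue.popleft()
        for nb in graph[current]:
            if nb not in dist:
                dist[nb] = dist[current] + 1
                queue.append(nb)
    return (len(graph) - 1) / sum(dist.values())

# The best-connected character scores highest on both measures.
print(degree_centrality("Rousseau"), closeness_centrality("Rousseau"))
```

Ranking all characters by such scores is what supports the typology of narrative roles: central figures, intermediaries bridging social circles, and peripheral characters.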
Abstract:
BACKGROUND: Pediatric rheumatic diseases have a significant impact on children's quality of life and family functioning. Disease control and management of the symptoms are important to minimize disability and pain. Specialist clinical nurses play a key role in supporting medical teams, recognizing poor disease control and the need for treatment changes, providing a resource to patients on treatment options and access to additional support and advice, and identifying best practices to achieve optimal outcomes for patients and their families. This highlights the importance of investigating follow-up telenursing (TN) consultations with experienced, specialist clinical nurses in rheumatology to provide this support to children and their families. METHODS/DESIGN: This randomized crossover, experimental longitudinal study will compare the effects of standard care against a novel telenursing consultation on children's and family outcomes. It will examine children below 16 years of age, recently diagnosed with inflammatory rheumatic diseases, who attend the pediatric rheumatology outpatient clinic of a tertiary referral hospital in western Switzerland, and one of their parents. The telenursing consultation, at least once a month, by a qualified, experienced, specialist nurse in pediatric rheumatology will consist of providing affective support, health information, and aid to decision-making. Cox's Interaction Model of Client Health Behavior serves as the theoretical framework for this study. The primary outcome measure is satisfaction, which will be assessed using mixed methods (quantitative and qualitative data). Secondary outcome measures include disease activity, quality of life, adherence to treatment, use of the telenursing service, and cost. We plan to enroll 56 children. DISCUSSION: The telenursing consultation is designed to support parents and children/adolescents during the course of the disease with regular follow-up.
This project is novel because it is based on a theory-based standardized intervention, yet it allows for individualized care. We expect this trial to confirm the importance of support from a clinical nurse specialist in improving outcomes for children and adolescents with inflammatory rheumatic diseases. TRIAL REGISTRATION: ClinicalTrials.gov identifier: NCT01511341 (December 1st, 2012).