903 results for "Theoretical and empirical synthesis"
Abstract:
Bibliography: p. 22-23.
Abstract:
Over the past forty years the corporate identity literature has developed to a point of maturity where it currently contains many definitions and models of the corporate identity construct at the organisational level. The literature has evolved by developing models of corporate identity and by considering corporate identity in relation to new and developing themes, e.g. corporate social responsibility. It has evolved into a multidisciplinary domain, recently incorporating constructs from other literatures to further its development. However, the literature has a number of limitations. An overarching and universally accepted definition of corporate identity remains elusive, potentially leaving the construct without a clear definition. Only a few corporate identity definitions and models, at the corporate level, have been empirically tested. The corporate identity construct is overwhelmingly defined and theoretically constructed at the corporate level, leaving the literature without a detailed understanding of its influence at the individual stakeholder level. Front-line service employees (FLEs) form a component in a number of corporate identity models developed at the organisational level. FLEs deliver the services of an organisation to its customers and represent the organisation by communicating and conveying its core defining characteristics to customers through continual contact and interaction. This person-to-person contact between an FLE and a customer is termed a service encounter; service encounters influence a customer’s perception of both the service delivered and the associated level of service quality. Therefore, this study for the first time defines, theoretically models and empirically tests corporate identity at the individual FLE level, termed FLE corporate identity. The study uses the services marketing literature to characterise an FLE’s operating environment, arriving at five potential dimensions of the FLE corporate identity construct. These are scrutinised against existing corporate identity definitions and models to arrive at a definition for the construct. Drawing on the corporate identity, services marketing, branding and organisational psychology literature, a theoretical model is developed for FLE corporate identity, which is empirically and quantitatively tested with FLEs in seven stores of a major national retailer. Following rigorous construct reliability and validity testing, the 601 usable responses are used to estimate a confirmatory factor analysis and structural equation model for the study. The results for the individual hypotheses and the structural model are very encouraging, as they fit the data well and support a definition of FLE corporate identity. This study makes contributions to the branding, services marketing and organisational psychology literature, but its principal contribution is to extend the corporate identity literature into a new area of discourse and research, that of FLE corporate identity.
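The abstract above reports rigorous construct reliability and validity testing prior to the CFA and structural model. As a minimal illustrative sketch (not the author's actual procedure), composite reliability and average variance extracted for one dimension can be computed from standardized factor loadings; the loadings below are hypothetical.

```python
import numpy as np

# Hypothetical standardized loadings for one FLE corporate identity dimension.
loadings = np.array([0.78, 0.81, 0.69, 0.74])
error_var = 1.0 - loadings**2          # item error variances (standardized solution)

# Composite reliability (Fornell & Larcker):
# (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
cr = loadings.sum()**2 / (loadings.sum()**2 + error_var.sum())

# Average variance extracted: mean of squared loadings.
ave = (loadings**2).mean()

print(f"composite reliability = {cr:.3f}, AVE = {ave:.3f}")
```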
Abstract:
Background - This review provides a worked example of ‘best fit’ framework synthesis using the Theoretical Domains Framework (TDF) of health psychology theories as an a priori framework in the synthesis of qualitative evidence. Framework synthesis works best with ‘policy urgent’ questions. Objective - The review question selected was: what are patients’ experiences of prevention programmes for cardiovascular disease (CVD) and diabetes? The significance of these conditions is clear: CVD claims more deaths worldwide than any other condition, and diabetes is both a risk factor for CVD and a leading cause of death in its own right. Method - A systematic review and framework synthesis were conducted. This novel method for synthesizing qualitative evidence aims to make health psychology theory accessible to implementation science and to advance the application of qualitative research findings in evidence-based healthcare. Results - Findings from 14 original studies were coded deductively into the TDF, and an inductive thematic analysis was subsequently conducted. Synthesized findings produced six themes relating to knowledge, beliefs, cues to (in)action, social influences, role and identity, and context. A conceptual model was generated illustrating combinations of factors that produce cues to (in)action. This model demonstrated interrelationships between individual (beliefs and knowledge) and societal (social influences, role and identity, context) factors. Conclusion - Several intervention points were highlighted where factors could be manipulated to produce favourable cues to action. However, the lack of transparency about the behavioural components of published interventions needs to be corrected, and further evaluations of acceptability in relation to patient experience are required. Further work is needed to test the comprehensiveness of the TDF as an a priori framework for ‘policy urgent’ questions using ‘best fit’ framework synthesis.
Abstract:
The purposes of the present multistudy were to develop and provide initial construct validity evidence for measures based on the model of parental involvement in sport (Study 1) and to examine structural relationships among the constructs of the model (Study 2). In Study 1 (n_parents = 342, n_athletes = 223), confirmatory factor analysis was used to verify the psychometric properties of the measures. Content and construct validity were evaluated, as well as individual and composite reliability. Multi-group analysis with two independent samples provided evidence of factorial invariance. In Study 2 (n_parents = 754, n_athletes = 438), structural equation modeling supported the hypothesised model, in which athletes’ perceptions of parents’ behaviours mediated the relationship between parents’ reported behaviours and the athletes’ psychological variables conducive to their achievement in sport. The findings provide support for the parental involvement in sport model and demonstrate the influence of perceptions of parents’ behaviours on young athletes’ cognitions in sport.
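Study 2 tests a mediation structure (athletes' perceptions mediate the link between parents' reported behaviours and athletes' psychological variables) with structural equation modeling. The sketch below only illustrates that mediation logic with two ordinary least-squares regressions on simulated data, a deliberate simplification of the SEM approach; all variable names and effect sizes are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
parent_report = rng.normal(size=n)
# Perception partly driven by reported behaviour (hypothetical effect sizes).
perception = 0.6 * parent_report + rng.normal(scale=0.8, size=n)
# Outcome driven mainly by perception (a mostly mediated scenario).
outcome = 0.5 * perception + 0.05 * parent_report + rng.normal(scale=0.8, size=n)
df = pd.DataFrame(dict(parent_report=parent_report,
                       perception=perception, outcome=outcome))

# Path a: reported behaviour -> perception.
path_a = smf.ols("perception ~ parent_report", df).fit()
# Paths b and c': perception -> outcome, controlling for reported behaviour.
path_bc = smf.ols("outcome ~ perception + parent_report", df).fit()

indirect = path_a.params["parent_report"] * path_bc.params["perception"]
print("indirect (mediated) effect:", round(indirect, 3))
print("direct effect c':", round(path_bc.params["parent_report"], 3))
```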
Abstract:
BACKGROUND: Regional differences in physician supply can be found in many health care systems, regardless of their organizational and financial structure. A theoretical model of physicians' decisions on where to locate their practices is developed, covering demand-side factors and a consumption time function. METHODS: To test the propositions following from the theoretical model, generalized linear models were estimated to explain differences in physician density across 412 German districts. Various factors found in the literature were included to control for physicians' regional preferences. RESULTS: Evidence in favor of the first three propositions of the theoretical model was found. Specialists show a stronger association with more highly populated districts than GPs do. Although indicators of regional preferences are significantly correlated with physician density, their coefficients are not as large as that of population density. CONCLUSIONS: If regional disparities are to be addressed by political action, the focus should be on counteracting those parameters that represent physicians' preferences in over- and undersupplied regions.
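As a hedged sketch of the kind of district-level model the abstract describes (not the authors' specification), a generalized linear model for physician counts could be estimated with statsmodels; the explanatory variables and simulated data below are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 412  # number of German districts in the study
df = pd.DataFrame({
    "pop_density":   rng.lognormal(5.0, 1.0, n),     # inhabitants per km^2
    "income":        rng.normal(22_000, 4_000, n),   # hypothetical preference proxy
    "leisure_index": rng.normal(0.0, 1.0, n),        # hypothetical preference proxy
})
# Simulated specialist counts that increase with population density (toy data).
mu = np.exp(1.0 + 0.4 * np.log(df["pop_density"]) + 0.02 * df["leisure_index"])
df["specialists"] = rng.poisson(mu)

# Poisson GLM with a log link, analogous in spirit to the paper's district models.
model = smf.glm("specialists ~ np.log(pop_density) + income + leisure_index",
                data=df, family=sm.families.Poisson()).fit()
print(model.summary())
```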
Abstract:
Understanding why market manipulation is conducted, under which conditions it is most profitable, and what the magnitude of these practices is are crucial questions for financial regulators. Closing-price manipulation induced by derivatives expiration is the primary subject of this thesis. The first chapter provides a mathematical framework in continuous time to study the incentive to manipulate a set of securities induced by a derivative position. An agent holding a European-type contingent claim, whose payoff depends on the price of a basket of underlying securities, is considered. The agent can affect the price of the underlying securities by trading in each of them before expiration. The elements of novelty are at least twofold: (1) a multi-asset market is considered; (2) the problem is solved by means of both classic optimisation and stochastic control techniques. Both linear and option payoffs are considered. In the second chapter an empirical investigation is conducted into the existence of expiration-day effects in the UK equity market. Intraday data on FTSE 350 stocks over the six-year period 2015-2020 are used. The results show that the expiration of index derivatives is associated with a rise in both trading activity and volatility, together with significant price distortions. The expiration of single-stock options appears to have little to no impact on the underlying securities. The last chapter examines the existence of patterns consistent with closing-price manipulation of UK stocks on option expiration days. The main contributions are threefold: (1) this is one of the few empirical studies of manipulation induced by the options market; (2) proprietary equity order-book and transaction data sets are used to define manipulation proxies, allowing a more detailed analysis; (3) the behaviour of proprietary trading firms is studied. Despite industry concerns, no evidence of this type of manipulative behaviour is found.
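To make the second chapter's comparison concrete, the sketch below contrasts trading activity and realised volatility on assumed index-derivative expiration days (third Friday of each month) against all other trading days, using simulated data; it illustrates the approach, not the thesis code.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
dates = pd.bdate_range("2015-01-01", "2020-12-31")
df = pd.DataFrame({
    "volume": rng.lognormal(12, 0.3, len(dates)),   # simulated daily volume
    "ret": rng.normal(0, 0.01, len(dates)),         # simulated daily return
}, index=dates)

# Assumed convention: index derivatives expire on the third Friday of each month.
is_friday = df.index.weekday == 4
third_week = (df.index.day >= 15) & (df.index.day <= 21)
df["expiry"] = is_friday & third_week

summary = df.groupby("expiry").agg(
    mean_volume=("volume", "mean"),
    realised_vol=("ret", lambda r: r.std() * np.sqrt(252)),
)
print(summary)  # compare expiration vs. non-expiration days
```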
Abstract:
Spite is defined as an act that causes loss of payoff to an opponent at a cost to the actor. As one of the four fundamental behaviours in sociobiology, it has received far less attention than its counterparts selfishness and cooperation.
It has, however, been established as a viable strategy in small populations when used against negatively related individuals. Because of this, spite can either i) disappear or ii) remain at equilibrium with cooperative strategies due to the willingness of spiteful individuals to pay a cost in order to punish. This thesis sets out to understand whether the propensity for spiteful behaviour is inherent or whether it develops with age. To that end, two game-theoretical experiments were performed with schoolboys and schoolgirls aged 6 to 22. The first, a 2 x 2 game, was tested in two variants: 1) a prize was awarded to both players, proportional to accumulated points; 2) a prize was given to the player with the most points. Each player faced the following dilemma: i) maximise pay-off and risk ending up with a lower pay-off than the opponent; or ii) forgo maximising pay-off in order to keep the opponent’s pay-off from exceeding their own. The second game was a dictator experiment with two choices: (A) a selfish/altruistic choice affording more payoff to the donor than B, but even more to the recipient than to the donor, and (B) a spiteful choice affording less payoff to the donor than A, but an even lower payoff to the recipient. The dilemma here was that if subjects behaved selfishly, they obtained more payoff for themselves while at the same time increasing their partner’s payoff; if they were spiteful, they preferred having more payoff than their partner at the cost of less for themselves. Experiments were run in schools in two different areas of Portugal (mainland and Azores) to understand whether spiteful preferences varied with age. Results of the first experiment suggested that (1) students understood the first variant as a coordination game and engaged in maximising behaviour by copying their opponent’s plays; (2) repeating students engaged in spiteful behaviour more often than in maximising behaviour, with particular emphasis on 14-year-olds; (3) most students engaged in reciprocal behaviour from ages 12 to 16, after which they began developing a higher tolerance for their opponents’ choices. Results of the second experiment suggested that (1) selfish strategies were prevalent up to the age of 6, (2) altruistic tendencies emerged thereafter, and (3) spiteful strategies began to be chosen more often from the age of 8. These results add to the relatively scarce body of literature on spite and suggest that this type of behaviour is closely tied to other-regarding preferences, parochialism and children’s stages of development.
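A tiny sketch of the dictator-game structure described above, with hypothetical payoff values consistent with the text: choice A gives the dictator more than choice B but gives the recipient even more than the dictator, while choice B gives the dictator less than A and keeps the recipient below the dictator.

```python
# Hypothetical payoffs consistent with the described structure:
# A is selfish/altruistic (dictator 5, recipient 7 > dictator),
# B is spiteful (dictator 4 < 5, recipient 2 < dictator).
PAYOFFS = {"A": (5, 7), "B": (4, 2)}

def classify(choice: str) -> str:
    """Label a dictator's choice given the payoff structure above."""
    own, other = PAYOFFS[choice]
    alt_own, alt_other = PAYOFFS["B" if choice == "A" else "A"]
    if own > alt_own and other > own:
        return "selfish/altruistic"   # maximises own payoff; recipient gets more
    if own < alt_own and other < own:
        return "spiteful"             # sacrifices payoff to stay above recipient
    return "other"

for c in ("A", "B"):
    print(c, "->", classify(c))
```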
Abstract:
The objective of the thesis is to structure and model the factors that contribute to and can be used in evaluating project success. The purpose of this thesis is to enhance the understanding of three research topics. The goal setting process, success evaluation and decision-making process are studied in the context of a project, business unit and its business environment. To achieve the objective three research questions are posed. These are 1) how to set measurable project goals, 2) how to evaluate project success and 3) how to affect project success with managerial decisions. The main theoretical contribution comes from deriving a synthesis of these research topics which have mostly been discussed apart from each other in prior research. The research strategy of the study has features from at least the constructive, nomothetical, and decision-oriented research approaches. This strategy guides the theoretical and empirical part of the study. Relevant concepts and a framework are composed on the basis of the prior research contributions within the problem area. A literature review is used to derive constructs of factors within the framework. They are related to project goal setting, success evaluation, and decision making. On the basis of this, the case study method is applied to complement the framework. The empirical data includes one product development program, three construction projects, as well as one organization development, hardware/software, and marketing project in their contexts. In two of the case studies the analytic hierarchy process is used to formulate a hierarchical model that returns a numerical evaluation of the degree of project success. It has its origin in the solution idea which in turn has its foundation in the notion of project success. The achieved results are condensed in the form of a process model that integrates project goal setting, success evaluation and decision making. The process of project goal setting is analysed as a part of an open system that includes a project, the business unit and its competitive environment. Four main constructs of factors are suggested. First, the project characteristics and requirements are clarified. The second and the third construct comprise the components of client/market segment attractiveness and sources of competitive advantage. Together they determine the competitive position of a business unit. Fourth, the relevant goals and the situation of a business unit are clarified to stress their contribution to the project goals. Empirical evidence is gained on the exploitation of increased knowledge and on the reaction to changes in the business environment during a project to ensure project success. The relevance of a successful project to a company or a business unit tends to increase the higher the reference level of project goals is set. However, normal performance or sometimes performance below this normal level is intentionally accepted. Success measures make project success quantifiable. There are result-oriented, process-oriented and resource-oriented success measures. The study also links result measurements to enablers that portray the key processes. The success measures can be classified into success domains determining the areas on which success is assessed. Empirical evidence is gained on six success domains: strategy, project implementation, product, stakeholder relationships, learning situation and company functions.
However, some project goals, like safety, can be assessed using success measures that belong to two success domains. For example, a safety index is used for assessing occupational safety during a project, which is related to project implementation. Product safety requirements, in turn, are connected to the product characteristics and thus to the product-related success domain. Strategic success measures can be used to weave the project phases together. Empirical evidence on their static nature is gained. In order-oriented projects the project phases are often contractually divided among different suppliers or contractors. A project from the supplier's perspective can represent only a part of the 'whole project' viewed from the client's perspective. Therefore static success measures are mostly used within the contractually agreed project scope and duration. Evidence is also acquired on the dynamic use of operational success measures. They help to focus on the key issues during each project phase. Furthermore, it is shown that the original success domains and success measures, their weights and target values can change dynamically. New success measures can replace the old ones to correspond better with the emphasis of the particular project phase. This adjustment concentrates on the key decision milestones. As a conclusion, the study suggests a combination of static and dynamic success measures. Their linkage to an incentive system can make project management proactive, enable fast feedback and enhance the motivation of the personnel. It is argued that the sequence of effective decisions is closely linked to the dynamic control of project success. According to the definition used, effective decisions aim at adequate decision quality and decision implementation. The findings support the view that project managers construct and use a chain of key decision milestones to evaluate and affect success during a project. These milestones can be seen as a part of the business processes. Different managers prioritise the key decision milestones to a varying degree. Divergent managerial perspectives, power, responsibilities and involvement during a project offer some explanation for this. Finally, the study introduces the use of Hard Gate and Soft Gate decision milestones. Managers may use the former milestones to provide decision support on result measurements and ad hoc critical conditions. At the latter milestones they may also make intermediate success evaluations on the basis of other types of success measures, such as process and resource measures.
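The abstract above notes that, in two case studies, the analytic hierarchy process is used to turn judgements into a numerical evaluation of project success. A minimal sketch of the standard AHP weighting step (priority weights from the principal eigenvector of a pairwise comparison matrix) follows; the comparison values are hypothetical.

```python
import numpy as np

# Hypothetical pairwise comparisons of three success domains
# (Saaty scale: entry [i, j] = how much more important domain i is than j).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Priority weights = normalised principal right eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()

# Consistency ratio (RI = 0.58 is Saaty's random index for a 3x3 matrix).
lam_max = eigvals.real[principal]
ci = (lam_max - len(A)) / (len(A) - 1)
cr = ci / 0.58
print("weights:", np.round(weights, 3), " consistency ratio:", round(cr, 3))
```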
Abstract:
In Part I, theoretical derivations for Variational Monte Carlo calculations are compared with results from a numerical calculation on He; both indicate that minimization of the ratio estimate of E_var, denoted E_MC, provides different optimal variational parameters than does minimization of the variance of E_MC. Similar derivations for Diffusion Monte Carlo calculations provide a theoretical justification for empirical observations made by other workers. In Part II, importance sampling in prolate spheroidal coordinates allows Monte Carlo calculations of E_var to be made for the vdW molecule He_2, using a simplifying partitioning of the Hamiltonian and both an HF-SCF and an explicitly correlated wavefunction. Improvements are suggested which would permit the extension of the computational precision to the point where an estimate of the interaction energy could be made.
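As a hedged illustration of the Variational Monte Carlo ratio estimate discussed above, the sketch below computes E_MC for the hydrogen atom rather than He (a deliberate simplification): the estimator averages the local energy over configurations sampled from |psi|^2 with a Metropolis walk.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha = 0.9   # variational parameter of the trial wavefunction psi = exp(-alpha*r)

def local_energy(r, alpha):
    # E_L = -alpha^2/2 + (alpha - 1)/r for psi(r) = exp(-alpha*r) (atomic units).
    return -0.5 * alpha**2 + (alpha - 1.0) / r

# Metropolis sampling of |psi|^2 = exp(-2*alpha*r) in 3D.
x = np.ones(3)
samples = []
for step in range(20000):
    trial = x + rng.normal(scale=0.5, size=3)
    r_old, r_new = np.linalg.norm(x), np.linalg.norm(trial)
    if rng.random() < np.exp(-2.0 * alpha * (r_new - r_old)):
        x = trial
    if step > 2000:                      # discard equilibration steps
        samples.append(local_energy(np.linalg.norm(x), alpha))

e_mc = np.mean(samples)
print(f"E_MC(alpha={alpha}) = {e_mc:.4f} hartree (exact ground state: -0.5)")
```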
Abstract:
Decision-making is a fundamental computational process in many aspects of animal behaviour. The model most often encountered in studies of decision-making is the diffusion model. It has long accounted for a wide variety of behavioural and neurophysiological data in this field. However, another model, the urgency model, explains the same data just as well, and does so more parsimoniously and with firmer theoretical grounding. In this work, we first address the origins and development of the diffusion model and show how it became established as the framework for interpreting most experimental data on decision-making. In doing so, we note its strengths in order to then compare it objectively and rigorously with alternative models. We re-examine a number of implicit and explicit assumptions made by this model and highlight some of its shortcomings. This analysis provides the framework for our introduction and discussion of the urgency model. Finally, we present an experiment whose methodology makes it possible to dissociate the two models and whose results illustrate the empirical and theoretical limitations of the diffusion model while clearly demonstrating the validity of the urgency model. We conclude by discussing the potential contribution of the urgency model to the study of certain brain pathologies, with an emphasis on new research perspectives.
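To make the contrast between the two models concrete (a sketch under simplified assumptions, not the experiment described above): the diffusion model accumulates noisy evidence to a fixed bound, whereas an urgency-gating model multiplies low-pass-filtered momentary evidence by a growing urgency signal.

```python
import numpy as np

rng = np.random.default_rng(4)
dt, bound, drift, noise = 0.001, 1.0, 0.8, 1.0

def diffusion_rt():
    """Drift-diffusion: integrate noisy evidence until a fixed bound is hit."""
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return t, x > 0

def urgency_rt(tau=0.2, slope=2.0):
    """Urgency gating: low-pass filtered momentary evidence times growing urgency."""
    e, t = 0.0, 0.0
    while abs(e * slope * t) < bound:
        momentary = drift + noise * rng.normal()   # momentary evidence sample
        e += (momentary - e) * dt / tau            # leaky (low-pass) filter
        t += dt
    return t, e > 0

ddm = [diffusion_rt()[0] for _ in range(500)]
urg = [urgency_rt()[0] for _ in range(500)]
print(f"mean RT  diffusion: {np.mean(ddm):.3f} s   urgency: {np.mean(urg):.3f} s")
```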
Abstract:
The preceding two editions of CoDaWork included talks on the possible consideration of densities as infinite compositions: Egozcue and Díaz-Barrero (2003) extended the Euclidean structure of the simplex to a Hilbert space structure of the set of densities within a bounded interval, and van den Boogaart (2005) generalized this to the set of densities bounded by an arbitrary reference density. From the many variations of the Hilbert structures available, we work with three cases. For bounded variables, a basis derived from Legendre polynomials is used. For variables with a lower bound, we standardize them with respect to an exponential distribution and express their densities as coordinates in a basis derived from Laguerre polynomials. Finally, for unbounded variables, a normal distribution is used as reference, and coordinates are obtained with respect to a Hermite-polynomial-based basis. To get the coordinates, several approaches can be considered. A numerical accuracy problem occurs if one estimates the coordinates directly by using discretized scalar products. We therefore propose a weighted linear regression approach, where all k-order polynomials are used as predictand variables and weights are proportional to the reference density. Finally, for the case of 2nd-order Hermite polynomials (normal reference) and 1st-order Laguerre polynomials (exponential reference), one can also derive the coordinates from their relationships to the classical mean and variance. Apart from these theoretical issues, this contribution focuses on the application of this theory to two main problems in sedimentary geology: the comparison of several grain size distributions, and the comparison among different rocks of the empirical distribution of a property measured on a batch of individual grains from the same rock or sediment, such as their composition.
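A minimal numerical sketch of the weighted-regression route to the coordinates described above, for the normal-reference (Hermite) case. It assumes the response variable is the log-ratio of the target density to the normal reference and that the weights are the reference density itself, evaluated on a grid; the target density is made up for illustration.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander
from scipy.stats import norm

# Target density: a slightly skewed mixture; reference: standard normal.
def target_pdf(x):
    return 0.7 * norm.pdf(x, 0.0, 1.0) + 0.3 * norm.pdf(x, 1.0, 0.7)

x = np.linspace(-4, 4, 400)
f0 = norm.pdf(x)                      # reference (normal) density
y = np.log(target_pdf(x) / f0)        # assumed response: log-ratio to the reference

deg = 2                               # 2nd-order Hermite coordinates, as in the text
X = hermevander(x, deg)               # probabilists' Hermite polynomials He_0..He_2
w = f0                                # weights proportional to the reference density

# Weighted least squares: solve (X' W X) beta = X' W y.
sw = np.sqrt(w)
beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
print("estimated Hermite coordinates:", np.round(beta, 4))
```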
Abstract:
In June 2000 Colombia's national statistics office (Departamento Nacional de Estadística) adopted a new definition for measuring unemployment, following the standards suggested by the International Labour Organization (ILO). The change in definition implied a reduction of the unemployment rate of about two percentage points. In this paper we contrast the Colombian experience with other international experiences, and we analyse the empirical and theoretical implications of this change in definition using two types of quantitative estimation: the first contrasts the main characteristics of the categories classified under the new and the old definitions of unemployment (employed, unemployed and out of the labour force) using the EM algorithm; the second tests the implications for structural unemployment and its relation to the educational profile of the unemployed, as well as the theoretical issues the ILO standards face in defining employment.
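As a hedged sketch of the first estimation strategy (not the paper's data or variables), the EM algorithm for a two-component Gaussian mixture can be used to contrast groups of individuals on a few characteristics; scikit-learn's GaussianMixture runs EM internally.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
# Hypothetical individual-level features (e.g., years of schooling, months of job
# search); the paper's actual survey variables are not reproduced here.
group_a = rng.normal([11.0, 2.0], [2.0, 1.0], size=(300, 2))
group_b = rng.normal([14.0, 6.0], [2.0, 2.0], size=(200, 2))
X = np.vstack([group_a, group_b])

# EM estimation of a two-component Gaussian mixture, in the spirit of the
# paper's use of the EM algorithm to contrast labour-force categories.
gm = GaussianMixture(n_components=2, random_state=0).fit(X)
posterior = gm.predict_proba(X)      # soft category membership for each person
print("component means:\n", np.round(gm.means_, 2))
print("first five posterior probabilities:\n", np.round(posterior[:5], 3))
```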
Abstract:
This paper reports CFD and experimental results on the characteristics of wall confluent jets in a room. The results presented show the behaviour of wall confluent jets in the form of velocity profiles, the spreading rate of the jets on the surface, jet decay, etc. The empirical equations derived are compared with those for other types of air jets. In addition, the flow in wall confluent jets is compared with the flow from a displacement ventilation supply with regard to vertical and horizontal spreading on the floor. It is concluded that the jet momentum of wall confluent jets can be better conserved than that of other jets. Thus, wall confluent jets have a greater spread over the floor than displacement flow.
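A small sketch of how an empirical jet-decay relation of the kind mentioned above could be fitted; the power-law form and the data points are assumptions, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed decay law: U_m / U_0 = C * (x / d)**(-n), a common form for wall jets.
def decay_law(x_over_d, c, n):
    return c * x_over_d**(-n)

# Hypothetical centreline-velocity measurements (x/d, U_m/U_0).
x_over_d = np.array([5, 10, 15, 20, 30, 40, 60], dtype=float)
u_ratio  = np.array([1.05, 0.74, 0.60, 0.52, 0.42, 0.36, 0.29])

params, _ = curve_fit(decay_law, x_over_d, u_ratio, p0=(2.0, 0.5))
c, n = params
print(f"fitted decay constants: C = {c:.2f}, n = {n:.2f}")
```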