870 results for [JEL:C70] Mathematical and Quantitative Methods - Game Theory and Bargaining Theory - General
Abstract:
This dissertation presents the main aspects of Game Theory and shows its application as an analytical instrument in People Management with respect to the salary variable. It treats the organization and the worker as general concepts, without identifying the organization's sector, branch of activity, legal classification by revenue, total number of employees or market share. Likewise, the concept of worker carries no identification of the sector in which the person works, their position, salary or professional training. The organization is any structure that generates goods and services for society, and the worker is anyone who employs their labour in the production of goods and services. The objectives established for this study are: to identify possible applications of Game Theory in People Management, taking the salary variable as the element of conflict between the organization and the worker; and to show whether the extensive-form representation is appropriate for analysing the clash over whether to hire the worker and whether to pay a higher or lower salary, as well as the existence of a Nash Equilibrium. In terms of research methodology, this is a qualitative study supported by bibliographic and documentary research. Qualitative methods help interpret everyday phenomena and may draw on symbolic data situated in a particular context. Documentary research makes an important contribution to the proposed topic, since qualitative research is not rigidly structured, which allows the researcher to use imagination and creativity to reach the objective. The results indicate that Game Theory can be applied in People Management by considering the clash between the players (the worker and the organization) over salary, as shown in chapter 4 in the payoff-matrix representations of a strategic game and in figures 9, 10, 11 and 16. The extensive-form representation, another objective, indicates the payoffs of two central decisions, represented by X = flexibilization with workers waiving their rights and Y = flexibilization/adaptation/negotiation, as in figure 16. By analysing the figure, the people manager can see the strategies available to the organization and the worker for decision making, while also assessing the situation being experienced and running simulations in search of new proposals. Finally, the Nash Equilibrium as applied to People Management is discussed in section 4.1.3, where it is shown that both the worker and the organization can reach a decision favourable to both and keep their initially intended objectives. In figure 17, this equilibrium is presented after the worker decides in favour of the proposal made by the organization in sequence O2, the worker staying on the branch of sequence T2 with a value of 20 coins. The potential of Game Theory in People Management arises from the fact that whoever works in an organization shares the good or bad results produced by others' choices, by individual choices and by collectively constructed choices. When the worker decides to produce less, the company suffers the loss of profit caused by the slower pace of work. To change this situation, the company decides to raise the salary; the worker, in turn, performs the task faster and in greater quantity, and the company can recover its profit. In these games there are performance demands, targets to be met, pressure, and conflicts with clients and management. Game Theory can therefore be applied as an instrument for the people manager to assess the situation at hand and take a decision that resolves the conflict.
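As a minimal illustration of the strategic-game idea summarized above (not taken from the dissertation's figures), the sketch below writes the salary clash as a hypothetical 2x2 game and checks each profile for a pure-strategy Nash equilibrium; the payoff numbers and strategy labels are assumptions.

    # Hypothetical 2x2 salary game; payoffs are (organization, worker).
    payoffs = {
        ("low salary",  "low effort"):  (2, 2),
        ("low salary",  "high effort"): (4, 1),
        ("high salary", "low effort"):  (1, 3),
        ("high salary", "high effort"): (5, 5),
    }
    org_moves = ["low salary", "high salary"]
    worker_moves = ["low effort", "high effort"]

    def is_nash(o, w):
        """A profile is a pure-strategy Nash equilibrium if neither player gains by deviating alone."""
        org_pay, worker_pay = payoffs[(o, w)]
        org_ok = all(payoffs[(other, w)][0] <= org_pay for other in org_moves)
        worker_ok = all(payoffs[(o, other)][1] <= worker_pay for other in worker_moves)
        return org_ok and worker_ok

    for o in org_moves:
        for w in worker_moves:
            if is_nash(o, w):
                print("Nash equilibrium:", o, "+", w, "-> payoffs", payoffs[(o, w)])

With these assumed payoffs, the mutually favourable profile (high salary, high effort) is one of the equilibria, which echoes the abstract's point that both players can reach a decision favourable to both.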
Abstract:
UK schools and universities are trying to remedy a nationally recognized skills shortage in quantitative methods among graduates. This article describes and analyses a research project in German Studies funded by the Economic and Social Research Council (ESRC) aimed at addressing the issue. The interdisciplinary pilot project introduced quantitative methods into undergraduate curricula not only in Linguistics, but also in German Studies.
Abstract:
We review recent theoretical progress on the statistical mechanics of error correcting codes, focusing on low-density parity-check (LDPC) codes in general, and on Gallager and MacKay-Neal codes in particular. By exploiting the relation between LDPC codes and Ising spin systems with multispin interactions, one can carry out a statistical mechanics based analysis that determines the practical and theoretical limitations of various code constructions, corresponding to dynamical and thermodynamical transitions, respectively, as well as the behaviour of error-exponents averaged over the corresponding code ensemble as a function of channel noise. We also contrast the results obtained using methods of statistical mechanics with those derived in the information theory literature, and show how these methods can be generalized to include other channel types and related communication problems.
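As a concrete, self-contained illustration of the LDPC setting reviewed here (not code from the review itself), the sketch below builds a tiny parity-check matrix and decodes a word received over a binary symmetric channel with simple Gallager-style bit flipping; the matrix and noise level are assumptions.

    import numpy as np

    # Tiny illustrative parity-check matrix H (3 checks, 6 bits); not a real Gallager ensemble.
    H = np.array([
        [1, 1, 0, 1, 0, 0],
        [0, 1, 1, 0, 1, 0],
        [1, 0, 1, 0, 0, 1],
    ], dtype=int)

    rng = np.random.default_rng(0)
    codeword = np.zeros(6, dtype=int)          # all-zero codeword satisfies H x = 0 (mod 2)
    flip_prob = 0.1                            # assumed BSC crossover probability
    received = (codeword + (rng.random(6) < flip_prob)) % 2

    def bit_flip_decode(H, y, max_iters=20):
        """Bit flipping: repeatedly flip the bit involved in the most unsatisfied checks."""
        y = y.copy()
        for _ in range(max_iters):
            syndrome = H @ y % 2
            if not syndrome.any():             # all parity checks satisfied
                return y
            unsat_counts = H.T @ syndrome      # unsatisfied checks touching each bit
            y[np.argmax(unsat_counts)] ^= 1
        return y

    decoded = bit_flip_decode(H, received)
    print("received:", received, "decoded:", decoded, "errors:", int((decoded != codeword).sum()))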
Abstract:
We present a theoretical method for a direct evaluation of the average and reliability error exponents in low-density parity-check error-correcting codes using methods of statistical physics. Results for the binary symmetric channel are presented for codes of both finite and infinite connectivity.
Abstract:
We propose a new mathematical model for efficiency analysis, which combines DEA methodology with an old idea: Ratio Analysis. Our model, called DEA-R, treats all possible ratios "output/input" as outputs within the standard DEA model. Although DEA and DEA-R generate different summary measures for efficiency, the two measures are comparable. Our mathematical and empirical comparisons establish the validity of the DEA-R model in its own right. The key advantage of DEA-R over DEA is that it allows effective integration of the model with experts' opinions via flexible restrictive conditions on individual "output/input" pairs. © 2007 Springer Science+Business Media, LLC.
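A minimal sketch of the DEA-R idea as described in the abstract, under the assumption that treating each "output/input" ratio as an output of a unit-input DEA model reduces efficiency scoring to a small linear program; the data and the use of scipy are illustrative, not the authors' implementation.

    import numpy as np
    from scipy.optimize import linprog

    # Illustrative data: 2 "output/input" ratios for 4 DMUs (rows = DMUs).
    ratios = np.array([
        [2.0, 1.0],
        [1.5, 2.5],
        [1.0, 1.0],
        [2.5, 0.5],
    ])

    def dea_r_efficiency(ratios, o):
        """Efficiency of DMU o: max sum_r w_r * ratios[o, r]
           s.t. sum_r w_r * ratios[j, r] <= 1 for every DMU j, w >= 0."""
        n_dmus, n_ratios = ratios.shape
        c = -ratios[o]                      # linprog minimizes, so negate to maximize
        res = linprog(c, A_ub=ratios, b_ub=np.ones(n_dmus),
                      bounds=[(0, None)] * n_ratios)
        return -res.fun

    for o in range(len(ratios)):
        print(f"DMU {o}: DEA-R efficiency = {dea_r_efficiency(ratios, o):.3f}")

Flexible restrictive conditions of the kind mentioned in the abstract could then be added as extra bounds or constraints on the individual weights w_r.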
Abstract:
In the present work, the most important parameters of the heat pump system and of solar assisted heat pump systems were analysed in a quantitative way. Ideal and real Rankine cycles applied to the heat pump, with and without subcooling and superheating, were studied using practical recommended values for their thermodynamic parameters. Comparative characteristics of refrigerants were analysed, looking at their applicability in heat pumps for domestic heating and their effect on the performance of the system. Curves for the variation of the coefficient of performance as a function of condensing and evaporating temperatures were prepared for R12. Air, water and earth as low-grade heat sources were studied, along with basic heat pump design factors for integrated heat pumps with thermal stores and for solar assisted heat pumps (series, parallel and dual systems). The analysis of the relative performance of these systems demonstrated that the dual system presents advantages in domestic applications. An account of energy requirements for space and water heating in the domestic sector in the U.K. is presented. The expected primary energy savings from using heat pumps to provide for the heating demand of the domestic sector was found to be of the order of 7%. The availability of solar energy under U.K. climatic conditions and the characteristics of the solar radiation were studied. Tables and graphical representations for calculating the incident solar radiation on a tilted roof were prepared and are given in section IV. In order to analyse and calculate the heating load for the system, new mathematical and graphical relations were developed in section V. A domestic space and water heating system is described and studied. It comprises three main components: a solar radiation absorber (the normal roof of a house), a split heat pump and a thermal store. A mathematical study of the heat exchange characteristics in the roof structure was carried out. This permits the evaluation of the energy collected by the roof acting as a radiation absorber, and of its efficiency. An indication of the relative contributions from the three low-grade sources (ambient air, solar boost and heat loss from the house to the roof space) during operation is given in section VI, together with the average seasonal performance and the energy saving for a prototype system tested at the University of Aston. The seasonal performance was found to be 2.6 and the energy savings from using the system studied were 61%. A new store configuration to reduce wasted heat losses is also discussed in section VI.
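As a small worked example of the kind of relation plotted in the study (coefficient of performance versus condensing and evaporating temperatures), the sketch below uses a Carnot-based estimate scaled by an assumed practical efficiency factor; the numbers are assumptions, not values from the thesis.

    # Idealized heating COP from condensing/evaporating temperatures: Carnot bound
    # scaled by an assumed practical efficiency factor (illustrative only).
    def heating_cop(t_evap_c, t_cond_c, efficiency_factor=0.5):
        t_evap = t_evap_c + 273.15   # convert Celsius to kelvin
        t_cond = t_cond_c + 273.15
        carnot_cop = t_cond / (t_cond - t_evap)
        return efficiency_factor * carnot_cop

    for t_evap in (-5, 0, 5, 10):
        print(f"evaporating {t_evap:>3} C, condensing 50 C -> COP ~ {heating_cop(t_evap, 50):.2f}")

The COP rises as the evaporating temperature approaches the condensing temperature, which is the qualitative behaviour such curves display.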
Abstract:
The rapid global loss of biodiversity has led to a proliferation of systematic conservation planning methods. In spite of their utility and mathematical sophistication, these methods only provide approximate solutions to real-world problems where there is uncertainty and temporal change. The consequences of errors in these solutions are seldom characterized or addressed. We propose a conceptual structure for exploring the consequences of input uncertainty and oversimplified approximations to real-world processes for any conservation planning tool or strategy. We then present a computational framework based on this structure to quantitatively model species representation and persistence outcomes across a range of uncertainties. These include factors such as land costs, landscape structure, species composition and distribution, and temporal changes in habitat. We demonstrate the utility of the framework using several reserve selection methods including simple rules of thumb and more sophisticated tools such as Marxan and Zonation. We present new results showing how outcomes can be strongly affected by variation in problem characteristics that are seldom compared across multiple studies. These characteristics include number of species prioritized, distribution of species richness and rarity, and uncertainties in the amount and quality of habitat patches. We also demonstrate how the framework allows comparisons between conservation planning strategies and their response to error under a range of conditions. Using the approach presented here will improve conservation outcomes and resource allocation by making it easier to predict and quantify the consequences of many different uncertainties and assumptions simultaneously. Our results show that without more rigorously generalizable results, it is very difficult to predict the amount of error in any conservation plan. These results imply the need for standard practice to include evaluating the effects of multiple real-world complications on the behavior of any conservation planning method.
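A minimal sketch of one "simple rule of thumb" of the kind the framework compares: greedy complementarity, which repeatedly picks the site adding the most unrepresented species. The species-by-site data are invented, and this is not the Marxan or Zonation algorithm.

    # Greedy complementarity rule of thumb: pick the site covering the most
    # not-yet-represented species until every species is represented.
    sites = {
        "A": {"sp1", "sp2", "sp3"},
        "B": {"sp3", "sp4"},
        "C": {"sp1", "sp5"},
        "D": {"sp4", "sp5", "sp6"},
    }

    def greedy_reserve(sites):
        remaining = set().union(*sites.values())
        chosen = []
        while remaining:
            best = max(sites, key=lambda s: len(sites[s] & remaining))
            if not sites[best] & remaining:
                break                      # no site adds anything new
            chosen.append(best)
            remaining -= sites[best]
        return chosen

    print("selected sites:", greedy_reserve(sites))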
Abstract:
We present a mean field theory of code-division multiple access (CDMA) systems with error-control coding. On the basis of the relation between the free energy and mutual information, we obtain an analytical expression of the maximum spectral efficiency of the coded CDMA system, from which a mean field description of the coded CDMA system is provided in terms of a bank of scalar Gaussian channels whose variances in general vary at different code symbol positions. Regular low-density parity-check (LDPC)-coded CDMA systems are also discussed as an example of the coded CDMA systems.
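A minimal sketch of the mean-field description quoted above, under the assumption that the coded system is summarized by a bank of scalar Gaussian channels with position-dependent variances whose per-symbol mutual informations are then averaged; the variances and signal power are invented for illustration.

    import math

    # Bank of scalar Gaussian channels with position-dependent noise variances
    # (illustrative values); per-symbol Gaussian mutual information, then the average.
    signal_power = 1.0
    noise_variances = [0.2, 0.5, 0.5, 1.0, 2.0]

    per_symbol_bits = [0.5 * math.log2(1 + signal_power / v) for v in noise_variances]
    print("per-symbol mutual information (bits):", [round(b, 3) for b in per_symbol_bits])
    print("average over code symbol positions:",
          round(sum(per_symbol_bits) / len(per_symbol_bits), 3))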
Abstract:
It is proposed that, for rural secondary schoolgirls, school is a site of contestation. Rural girls attempt to 'use' school as a means of resisting traditional patriarchal definitions of a 'woman's place'. In their efforts, the girls are thwarted by aspects of the school itself, the behaviour and attitudes of the boys in school, and also the 'careers advice' which they receive. It is argued that the girls perceive school as being of greater importance to them than is the case for the boys, and that these gender differentiated perceptions are related to the 'social' lives of the girls and boys, and also to their future employment prospects. Unlike the boys, the girls experience considerable restrictions concerning these two areas. This theory was grounded in an ethnographic study which was conducted in and around a village in a rural county in England. As well as developing the theory through ethnography, the thesis contains tests of certain hypotheses generated by the theory. These hypotheses relate to the gender differentiated perspectives of secondary school pupils with regard to the areas of school itself, life outside school, and expectations for the future. The quantitative methods used to test these hypotheses confirm that there is a tendency for girls to be more positively orientated to school than the boys; to feel less able to engage in preferred activities outside school time than the boys, and also to be more willing to move away from the area than the boys. For comparative purposes these hypotheses were tested in two other rural locations and the results indicate the need for further research of a quantitative kind into the context of girls' schooling in such locations. A critical review of literature is presented, as is a detailed discussion of the research process itself.
Abstract:
Aims: Previous data suggest heterogeneity in laminar distribution of the pathology in the molecular disorder frontotemporal lobar degeneration (FTLD) with transactive response (TAR) DNA-binding protein of 43 kDa (TDP-43) proteinopathy (FTLD-TDP). To study this heterogeneity, we quantified the changes in density across the cortical laminae of neuronal cytoplasmic inclusions, glial inclusions, neuronal intranuclear inclusions, dystrophic neurites, surviving neurones, abnormally enlarged neurones, and vacuoles in regions of the frontal and temporal lobe. Methods: Changes in density of histological features across cortical gyri were studied in 10 sporadic cases of FTLD-TDP using quantitative methods and polynomial curve fitting. Results: Our data suggest that laminar neuropathology in sporadic FTLD-TDP is highly variable. Most commonly, neuronal cytoplasmic inclusions, dystrophic neurites and vacuolation were abundant in the upper laminae and glial inclusions, neuronal intranuclear inclusions, abnormally enlarged neurones, and glial cell nuclei in the lower laminae. TDP-43-immunoreactive inclusions affected more of the cortical profile in longer duration cases; their distribution varied with disease subtype, but was unrelated to Braak tangle score. Different TDP-43-immunoreactive inclusions were not spatially correlated. Conclusions: Laminar distribution of pathological features in 10 sporadic cases of FTLD-TDP is heterogeneous and may be accounted for, in part, by disease subtype and disease duration. In addition, the feedforward and feedback cortico-cortical connections may be compromised in FTLD-TDP. © 2012 The Authors. Neuropathology and Applied Neurobiology © 2012 British Neuropathological Society.
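A small sketch of the quantitative step named in the abstract (feature density measured across the cortical depth, then summarized by polynomial curve fitting); the depth bins and counts are invented for illustration.

    import numpy as np

    # Illustrative density profile of one histological feature across cortical depth
    # (0 = pia, 1 = white matter boundary); counts per bin are invented.
    depth = np.linspace(0.05, 0.95, 10)
    density = np.array([14, 18, 21, 17, 12, 9, 7, 6, 8, 10], dtype=float)

    # Fit a low-order polynomial to summarize the laminar distribution.
    coeffs = np.polyfit(depth, density, deg=3)
    fitted = np.polyval(coeffs, depth)

    peak_depth = depth[np.argmax(fitted)]
    print("cubic coefficients:", np.round(coeffs, 2))
    print(f"fitted peak density lies at relative depth ~{peak_depth:.2f} (upper laminae)")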
Abstract:
Although crisp data are fundamentally indispensable for determining the profit Malmquist productivity index (MPI), the observed values in real-world problems are often imprecise or vague. These imprecise or vague data can be suitably characterized with fuzzy and interval methods. In this paper, we reformulate the conventional profit MPI problem as an imprecise data envelopment analysis (DEA) problem, and propose two novel methods for measuring the overall profit MPI when the inputs, outputs, and price vectors are fuzzy or vary in intervals. We develop a fuzzy version of the conventional MPI model by using a ranking method, and solve the model with a commercial off-the-shelf DEA software package. In addition, we define an interval for the overall profit MPI of each decision-making unit (DMU) and divide the DMUs into six groups according to the intervals obtained for their overall profit efficiency and MPIs. We also present two numerical examples to demonstrate the applicability of the two proposed models and exhibit the efficacy of the procedures and algorithms. © 2011 Elsevier Ltd.
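A minimal illustration of the interval idea in the abstract: when a DMU's efficiency scores in two periods are only known to lie in intervals, a ratio-type productivity index also lies in an interval obtained from the extreme combinations. This is a generic sketch, not the authors' overall profit MPI model, and the bounds are invented.

    # Interval-valued efficiency scores of one DMU in two periods (invented bounds).
    eff_period_0 = (0.70, 0.85)   # (lower, upper)
    eff_period_1 = (0.80, 0.95)

    def interval_ratio(num, den):
        """Interval of num/den when num and den are positive intervals."""
        return (num[0] / den[1], num[1] / den[0])

    mpi_interval = interval_ratio(eff_period_1, eff_period_0)
    print(f"productivity index lies in [{mpi_interval[0]:.3f}, {mpi_interval[1]:.3f}]")
    print("interval straddles 1 (progress or regress both possible):",
          mpi_interval[0] < 1.0 < mpi_interval[1])

Grouping DMUs by whether such intervals lie entirely above, entirely below, or across 1 mirrors the kind of classification into groups described in the abstract.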
Abstract:
The typical performance of low-density parity-check (LDPC) codes over a general binary-input output-symmetric memoryless channel is investigated using methods of statistical mechanics. The relationship between the free energy in the statistical-mechanics approach and the mutual information used in the information-theory literature is established within a general framework; Gallager and MacKay-Neal codes are studied as specific examples of LDPC codes. It is shown that basic properties of these codes known for particular channels, including their potential to saturate Shannon's bound, hold for general symmetric channels. The binary-input additive-white-Gaussian-noise channel and the binary-input Laplace channel are considered as specific channel models.
Abstract:
The postgenomic era, as manifest, inter alia, by proteomics, offers unparalleled opportunities for the efficient discovery of safe, efficacious, and novel subunit vaccines targeting a tranche of modern major diseases. A negative corollary of this opportunity is the risk of becoming overwhelmed by this embarrassment of riches. Informatics techniques, working to address issues of both data management and, through prediction, to shortcut the experimental process, can be of enormous benefit in leveraging the proteomic revolution. In this disquisition, we evaluate proteomic approaches to the discovery of subunit vaccines, focussing on viral, bacterial, fungal, and parasite systems. We also adumbrate the impact that proteomic analysis of host-pathogen interactions can have. Finally, we review methods relevant to the prediction of the immunome, with special emphasis on quantitative methods, and to the subcellular localization of proteins within bacteria.
Abstract:
This thesis describes advances in the characterisation, calibration and data processing of optical coherence tomography (OCT) systems. Femtosecond (fs) laser inscription was used for producing OCT-phantoms. Transparent materials are generally inert to infra-red radiation, but with fs lasers material modification occurs via non-linear processes when the highly focused light source interacts with the material. This modification is confined to the focal volume and is highly reproducible. In order to select the best inscription parameters, combinations of different inscription parameters were tested, using three fs laser systems with different operating properties, on a variety of materials. This facilitated the understanding of the key characteristics of the produced structures with the aim of producing viable OCT-phantoms. Finally, OCT-phantoms were successfully designed and fabricated in fused silica. The use of these phantoms to characterise many properties (resolution, distortion, sensitivity decay, scan linearity) of an OCT system was demonstrated. Quantitative methods were developed to support the characterisation of an OCT system collecting images from phantoms and also to improve the quality of the OCT images. Characterisation methods include the measurement of the spatially variant resolution (point spread function (PSF) and modulation transfer function (MTF)), sensitivity and distortion. Processing of OCT data is a computationally intensive process. Standard central processing unit (CPU) based processing might take several minutes to a few hours to process acquired data, so data processing is a significant bottleneck. An alternative is to use expensive hardware-based processing such as field programmable gate arrays (FPGAs). However, graphics processing unit (GPU) based data processing methods have recently been developed to minimise this data processing and rendering time. These techniques include standard-processing methods, which comprise a set of algorithms to process the raw data (interference) obtained by the detector and generate A-scans. The work presented here describes accelerated data processing and post-processing techniques for OCT systems. The GPU-based processing developed during the PhD was later implemented into a custom-built Fourier domain optical coherence tomography (FD-OCT) system. This system currently processes and renders data in real time. Processing throughput of this system is currently limited by the camera capture rate. OCT-phantoms have been heavily used for the qualitative characterisation and adjustment/fine-tuning of the operating conditions of the OCT system. Currently, investigations are under way to characterise OCT systems using our phantoms. The work presented in this thesis demonstrates several novel techniques for fabricating OCT-phantoms and accelerating OCT data processing using GPUs. In the process of developing phantoms and quantitative methods, a thorough understanding and practical knowledge of OCT and fs laser processing systems was developed. This understanding led to several novel pieces of research that are not only relevant to OCT but have broader importance. For example, extensive understanding of the properties of fs-inscribed structures will be useful in other photonic applications such as the making of phase masks, waveguides and microfluidic channels. Acceleration of data processing with GPUs is also useful in other fields.
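A minimal sketch of the standard-processing step the abstract refers to (turning a raw spectral interferogram into an A-scan via an FFT), written with CPU/numpy rather than the GPU pipeline described in the thesis; the synthetic interferogram parameters are assumptions.

    import numpy as np

    # Synthetic FD-OCT spectral interferogram: a single reflector at an assumed depth
    # produces a cosine modulation across the k (wavenumber) samples.
    n_samples = 1024
    k = np.linspace(0, 1, n_samples)             # normalized, linear-in-k sampling assumed
    depth_fringes = 80                           # fringe frequency for the assumed reflector depth
    interferogram = 1.0 + 0.3 * np.cos(2 * np.pi * depth_fringes * k)

    # Standard processing: remove the DC background, apply a window, FFT to get the A-scan.
    signal = (interferogram - interferogram.mean()) * np.hanning(n_samples)
    a_scan = np.abs(np.fft.rfft(signal))

    print("peak A-scan bin:", int(np.argmax(a_scan)))   # sits near depth_fringes

On a GPU pipeline the same steps (background subtraction, windowing, FFT, magnitude) are applied per spectrum in parallel, which is where the acceleration described in the thesis comes from.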