948 results for [JEL:C70] Mathematical and Quantitative Methods - Game Theory and Bargaining Theory - General
Abstract:
We propose a new mathematical model for efficiency analysis, which combines DEA methodology with an old idea: ratio analysis. Our model, called DEA-R, treats all possible "output/input" ratios as outputs within the standard DEA model. Although DEA and DEA-R generate different summary measures of efficiency, the two measures are comparable. Our mathematical and empirical comparisons establish the validity of the DEA-R model in its own right. The key advantage of DEA-R over DEA is that it allows effective integration of the model with experts' opinions via flexible restrictive conditions on individual "output/input" pairs. © 2007 Springer Science+Business Media, LLC.
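For context, here is a minimal sketch of the standard multiplier-form CCR DEA model that DEA-R modifies; the input/output data and DMU count are illustrative, and the DEA-R ratio treatment itself is not reproduced here.

```python
# A minimal sketch (illustrative data, not the paper's) of the multiplier-form
# CCR DEA model: maximize the weighted output of DMU `o` subject to weighted
# output <= weighted input for every DMU.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0]])  # inputs: one row per DMU
Y = np.array([[1.0], [2.0], [1.5]])                  # outputs: one row per DMU

def ccr_efficiency(o, X, Y):
    """Multiplier-form CCR efficiency of DMU `o` (1.0 means efficient)."""
    n, m = X.shape                                    # number of DMUs, inputs
    s = Y.shape[1]                                    # number of outputs
    c = np.concatenate([-Y[o], np.zeros(m)])          # maximize u . y_o
    A_ub = np.hstack([Y, -X])                         # u . y_j - v . x_j <= 0
    A_eq = np.concatenate([np.zeros(s), X[o]])[None]  # normalization v . x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun

for o in range(len(X)):
    print(f"DMU {o}: CCR efficiency = {ccr_efficiency(o, X, Y):.3f}")
```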
Abstract:
In the present work, the most important parameters of the heat pump system and of solar assisted heat pump systems were analysed in a quantitative way. Ideal and real Rankine cycles applied to the heat pump, with and without subcooling and superheating, were studied using practical recommended values for their thermodynamic parameters. Comparative characteristics of refrigerants were analysed with regard to their applicability in heat pumps for domestic heating and their effect on the performance of the system. Curves for the variation of the coefficient of performance as a function of condensing and evaporating temperatures were prepared for R12. Air, water and earth as low-grade heat sources, and basic heat pump design factors for integrated heat pumps with thermal stores and for solar assisted heat pump systems (series, parallel and dual), were studied. The analysis of the relative performance of these systems demonstrated that the dual system presents advantages in domestic applications. An account of energy requirements for space and water heating in the domestic sector in the U.K. is presented. The expected primary energy savings from using heat pumps to meet the heating demand of the domestic sector were found to be of the order of 7%. The availability of solar energy under U.K. climatic conditions and the characteristics of the solar radiation were studied. Tables and graphical representations for calculating the incident solar radiation on a tilted roof were prepared and are given in section IV. In order to analyse and calculate the heating load for the system, new mathematical and graphical relations were developed in section V. A domestic space and water heating system is described and studied. It comprises three main components: a solar radiation absorber (the normal roof of a house), a split heat pump, and a thermal store. A mathematical study of the heat exchange characteristics of the roof structure was carried out. This permits evaluation of the energy collected by the roof acting as a radiation absorber, and of its efficiency. An indication of the relative contributions of the three low-grade sources (ambient air, solar boost and heat loss from the house to the roof space) during operation is given in section VI, together with the average seasonal performance and the energy saving for a prototype system tested at the University of Aston. The seasonal performance was found to be 2.6, and the energy saving from using the system studied was 61%. A new store configuration to reduce wasted heat losses is also discussed in section VI.
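COP curves of the kind mentioned above follow relations like the one sketched below: the ideal (Carnot) heating COP as a function of evaporating and condensing temperature, with an assumed cycle-efficiency factor standing in for a real R12 Rankine cycle. The 0.5 factor is an illustrative assumption, not a figure from the thesis.

```python
# Ideal (Carnot) heating COP from condensing/evaporating temperatures;
# the CYCLE_EFFICIENCY factor is an assumed, illustrative fraction of Carnot.
def carnot_heating_cop(t_evap_c, t_cond_c):
    """Ideal heating COP from evaporating/condensing temperatures in Celsius."""
    t_evap, t_cond = t_evap_c + 273.15, t_cond_c + 273.15
    return t_cond / (t_cond - t_evap)

CYCLE_EFFICIENCY = 0.5  # assumed fraction of Carnot achieved by a real cycle

for t_evap in (-10, 0, 10):
    cop = CYCLE_EFFICIENCY * carnot_heating_cop(t_evap, 50)
    print(f"evap {t_evap:>3} C, cond 50 C -> COP ~ {cop:.2f}")
```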
Abstract:
The rapid global loss of biodiversity has led to a proliferation of systematic conservation planning methods. In spite of their utility and mathematical sophistication, these methods only provide approximate solutions to real-world problems where there is uncertainty and temporal change. The consequences of errors in these solutions are seldom characterized or addressed. We propose a conceptual structure for exploring the consequences of input uncertainty and oversimplified approximations to real-world processes for any conservation planning tool or strategy. We then present a computational framework based on this structure to quantitatively model species representation and persistence outcomes across a range of uncertainties. These include factors such as land costs, landscape structure, species composition and distribution, and temporal changes in habitat. We demonstrate the utility of the framework using several reserve selection methods including simple rules of thumb and more sophisticated tools such as Marxan and Zonation. We present new results showing how outcomes can be strongly affected by variation in problem characteristics that are seldom compared across multiple studies. These characteristics include number of species prioritized, distribution of species richness and rarity, and uncertainties in the amount and quality of habitat patches. We also demonstrate how the framework allows comparisons between conservation planning strategies and their response to error under a range of conditions. Using the approach presented here will improve conservation outcomes and resource allocation by making it easier to predict and quantify the consequences of many different uncertainties and assumptions simultaneously. Our results show that without more rigorously generalizable results, it is very difficult to predict the amount of error in any conservation plan. These results imply the need for standard practice to include evaluating the effects of multiple real-world complications on the behavior of any conservation planning method.
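As a hedged illustration of the simple rules of thumb the framework benchmarks against tools such as Marxan and Zonation, here is a toy greedy reserve-selection pass over a made-up species-by-patch matrix; it is not the paper's computational framework.

```python
# A toy greedy "rule of thumb": repeatedly pick the patch covering the most
# new species per unit cost. Occurrence matrix and costs are illustrative.
import numpy as np

rng = np.random.default_rng(0)
occ = rng.random((50, 8)) < 0.3           # occ[s, p]: species s occurs in patch p
cost = rng.uniform(1.0, 3.0, 8)            # land cost of each candidate patch

covered = np.zeros(50, dtype=bool)
selected = []
while not covered.all():
    # Score each patch by newly covered species per unit cost.
    gain = (occ & ~covered[:, None]).sum(axis=0) / cost
    gain[selected] = -1.0                  # never re-pick a chosen patch
    best = int(gain.argmax())
    if gain[best] <= 0:                    # remaining species occur nowhere
        break
    selected.append(best)
    covered |= occ[:, best]

print(f"selected patches {selected}: {covered.sum()}/50 species represented")
```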
Abstract:
We present a mean field theory of code-division multiple access (CDMA) systems with error-control coding. On the basis of the relation between the free energy and mutual information, we obtain an analytical expression of the maximum spectral efficiency of the coded CDMA system, from which a mean field description of the coded CDMA system is provided in terms of a bank of scalar Gaussian channels whose variances in general vary at different code symbol positions. Regular low-density parity-check (LDPC)-coded CDMA systems are also discussed as an example of the coded CDMA systems.
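The mean-field picture described above can be illustrated schematically (the notation below is mine, not the paper's): each code symbol position sees an equivalent scalar Gaussian channel with its own variance, and the per-position mutual informations add up.

```latex
% Schematic only; notation is mine, not the paper's. Position k of the
% codeword sees an equivalent scalar Gaussian channel:
\[
  y_k = x_k + n_k, \qquad n_k \sim \mathcal{N}(0,\sigma_k^2),
\]
% and the per-symbol mutual information of the bank is bounded by the
% Gaussian-input value, attained with Gaussian signalling:
\[
  \frac{1}{K}\sum_{k=1}^{K} I(x_k;\,y_k)
  \;\le\; \frac{1}{K}\sum_{k=1}^{K}\tfrac{1}{2}\log_2\!\Bigl(1+\frac{P_k}{\sigma_k^2}\Bigr).
\]
```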
Abstract:
It is proposed that, for rural secondary schoolgirls, school is a site of contestation. Rural girls attempt to 'use' school as a means of resisting traditional patriarchal definitions of a 'woman's place'. In their efforts, the girls are thwarted by aspects of the school itself, by the behaviour and attitudes of the boys in school, and also by the 'careers advice' which they receive. It is argued that the girls perceive school as being of greater importance to them than is the case for the boys, and that these gender-differentiated perceptions are related to the 'social' lives of the girls and boys, and also to their future employment prospects. Unlike the boys, the girls experience considerable restrictions in these two areas. This theory was grounded in an ethnographic study conducted in and around a village in a rural county in England. As well as developing the theory through ethnography, the thesis contains tests of certain hypotheses generated by the theory. These hypotheses relate to the gender-differentiated perspectives of secondary school pupils with regard to school itself, life outside school, and expectations for the future. The quantitative methods used to test these hypotheses confirm that girls tend to be more positively orientated to school than the boys, to feel less able to engage in preferred activities outside school time, and to be more willing to move away from the area. For comparative purposes these hypotheses were also tested in two other rural locations, and the results indicate the need for further quantitative research into the context of girls' schooling in such locations. A critical review of the literature is presented, as is a detailed discussion of the research process itself.
Abstract:
Aims: Previous data suggest heterogeneity in the laminar distribution of the pathology in the molecular disorder frontotemporal lobar degeneration (FTLD) with transactive response (TAR) DNA-binding protein of 43 kDa (TDP-43) proteinopathy (FTLD-TDP). To study this heterogeneity, we quantified, across the cortical laminae, the changes in density of neuronal cytoplasmic inclusions, glial inclusions, neuronal intranuclear inclusions, dystrophic neurites, surviving neurones, abnormally enlarged neurones, and vacuoles in regions of the frontal and temporal lobe. Methods: Changes in density of histological features across cortical gyri were studied in 10 sporadic cases of FTLD-TDP using quantitative methods and polynomial curve fitting. Results: Our data suggest that laminar neuropathology in sporadic FTLD-TDP is highly variable. Most commonly, neuronal cytoplasmic inclusions, dystrophic neurites and vacuolation were abundant in the upper laminae, and glial inclusions, neuronal intranuclear inclusions, abnormally enlarged neurones, and glial cell nuclei in the lower laminae. TDP-43-immunoreactive inclusions affected more of the cortical profile in longer-duration cases; their distribution varied with disease subtype, but was unrelated to Braak tangle score. Different TDP-43-immunoreactive inclusions were not spatially correlated. Conclusions: The laminar distribution of pathological features in 10 sporadic cases of FTLD-TDP is heterogeneous and may be accounted for, in part, by disease subtype and disease duration. In addition, the feedforward and feedback cortico-cortical connections may be compromised in FTLD-TDP. © 2012 The Authors. Neuropathology and Applied Neurobiology © 2012 British Neuropathological Society.
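A minimal sketch of the curve-fitting step described in the Methods, on synthetic data: a low-order polynomial is fitted to inclusion density as a function of normalized cortical depth, and the fitted peak indicates whether a feature concentrates in the upper or lower laminae. The counts and polynomial degree here are illustrative, not the study's data.

```python
# Fit a low-order polynomial to (synthetic) inclusion density vs. depth,
# in the spirit of the paper's laminar curve-fitting analysis.
import numpy as np

depth = np.linspace(0.0, 1.0, 20)              # normalized cortical depth
density = 8 - 10 * depth + 4 * depth**2        # synthetic inclusion counts...
density += np.random.default_rng(1).normal(0, 0.5, depth.size)  # ...plus noise

coeffs = np.polyfit(depth, density, deg=2)     # low-order polynomial fit
fitted = np.polyval(coeffs, depth)

peak_depth = depth[fitted.argmax()]
print(f"fitted density peaks at depth {peak_depth:.2f} (upper laminae if < 0.5)")
```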
Abstract:
Although crisp data are fundamentally indispensable for determining the profit Malmquist productivity index (MPI), the observed values in real-world problems are often imprecise or vague. These imprecise or vague data can be suitably characterized with fuzzy and interval methods. In this paper, we reformulate the conventional profit MPI problem as an imprecise data envelopment analysis (DEA) problem, and propose two novel methods for measuring the overall profit MPI when the inputs, outputs, and price vectors are fuzzy or vary in intervals. We develop a fuzzy version of the conventional MPI model by using a ranking method, and solve the model with a commercial off-the-shelf DEA software package. In addition, we define an interval for the overall profit MPI of each decision-making unit (DMU) and divide the DMUs into six groups according to the intervals obtained for their overall profit efficiency and MPIs. We also present two numerical examples to demonstrate the applicability of the two proposed models and exhibit the efficacy of the procedures and algorithms. © 2011 Elsevier Ltd.
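A heavily simplified, hedged sketch of the interval idea above: if each DMU's overall profit efficiency in two periods is only known to lie in an interval, an index taken simply as the ratio of period efficiencies (an illustrative simplification of the full model in the paper) is bracketed by combining the extreme cases.

```python
# Bracket a productivity index when period efficiencies are interval-valued;
# the intervals below are made-up illustrative numbers.
def mpi_interval(eff_t, eff_t1):
    """Interval for the productivity index given two efficiency intervals."""
    (lo_t, hi_t), (lo_t1, hi_t1) = eff_t, eff_t1
    return (lo_t1 / hi_t, hi_t1 / lo_t)

lo, hi = mpi_interval((0.70, 0.85), (0.80, 0.95))
print(f"overall profit MPI lies in [{lo:.3f}, {hi:.3f}]")
# An interval entirely above 1 would indicate unambiguous progress; an
# interval straddling 1 leaves the direction of change undetermined, which
# is what motivates grouping DMUs by their MPI intervals.
```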
Abstract:
Typical performance of low-density parity-check (LDPC) codes over a general binary-input output-symmetric memoryless channel is investigated using methods of statistical mechanics. The relationship between the free energy in the statistical-mechanics approach and the mutual information used in the information-theory literature is established within a general framework; Gallager and MacKay-Neal codes are studied as specific examples of LDPC codes. It is shown that basic properties of these codes known for particular channels, including their potential to saturate Shannon's bound, hold for general symmetric channels. The binary-input additive-white-Gaussian-noise channel and the binary-input Laplace channel are considered as specific channel models.
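For concreteness, here is a minimal sketch of a regular Gallager-type LDPC parity-check matrix of the kind analysed in such studies, built by stacking random column permutations of a band of consecutive-ones blocks; the block length and degrees are illustrative.

```python
# Build a random (j, k)-regular Gallager-style parity-check matrix H:
# j bands, each band a random column permutation of a matrix whose rows
# check k consecutive symbols. Parameters are illustrative.
import numpy as np

def gallager_ldpc(n, j, k, seed=0):
    """Random (j, k)-regular parity-check matrix H with n columns."""
    assert n % k == 0
    rows_per_band = n // k
    base = np.zeros((rows_per_band, n), dtype=int)
    for r in range(rows_per_band):
        base[r, r * k:(r + 1) * k] = 1     # each row checks k consecutive bits
    rng = np.random.default_rng(seed)
    bands = [base[:, rng.permutation(n)] for _ in range(j)]
    return np.vstack(bands)

H = gallager_ldpc(n=12, j=3, k=4)
print(H.shape, H.sum(axis=0), H.sum(axis=1))  # (9, 12), column weights 3, row weights 4
```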
Abstract:
The postgenomic era, as manifest, inter alia, by proteomics, offers unparalleled opportunities for the efficient discovery of safe, efficacious, and novel subunit vaccines targeting a tranche of modern major diseases. A negative corollary of this opportunity is the risk of becoming overwhelmed by this embarrassment of riches. Informatics techniques, working to address issues of data management and, through prediction, to shortcut the experimental process, can be of enormous benefit in leveraging the proteomic revolution. In this disquisition, we evaluate proteomic approaches to the discovery of subunit vaccines, focussing on viral, bacterial, fungal, and parasite systems. We also adumbrate the impact that proteomic analysis of host-pathogen interactions can have. Finally, we review methods relevant to the prediction of the immunome, with special emphasis on quantitative methods, and to the subcellular localization of proteins within bacteria.
Abstract:
This thesis describes advances in the characterisation, calibration and data processing of optical coherence tomography (OCT) systems. Femtosecond (fs) laser inscription was used for producing OCT-phantoms. Transparent materials are generally inert to infra-red radiation, but with fs lasers material modification occurs via non-linear processes when the highly focused light source interacts with the material. This modification is confined to the focal volume and is highly reproducible. In order to select the best inscription parameters, combinations of different inscription parameters were tested, using three fs laser systems with different operating properties, on a variety of materials. This facilitated the understanding of the key characteristics of the produced structures, with the aim of producing viable OCT-phantoms. Finally, OCT-phantoms were successfully designed and fabricated in fused silica. The use of these phantoms to characterise many properties (resolution, distortion, sensitivity decay, scan linearity) of an OCT system was demonstrated. Quantitative methods were developed to support the characterisation of an OCT system collecting images from phantoms, and also to improve the quality of the OCT images. Characterisation methods include the measurement of the spatially variant resolution (point spread function (PSF) and modulation transfer function (MTF)), sensitivity and distortion. Processing of OCT data is a computationally intensive process. Standard central processing unit (CPU) based processing might take several minutes to a few hours to process acquired data; data processing is thus a significant bottleneck. An alternative is expensive hardware-based processing such as field programmable gate arrays (FPGAs). More recently, however, graphics processing unit (GPU) based data processing methods have been developed to minimize this processing and rendering time. These include standard processing methods, comprising a set of algorithms that process the raw interference data obtained by the detector and generate A-scans. The work presented here describes accelerated data processing and post-processing techniques for OCT systems. The GPU-based processing developed during the PhD was later implemented in a custom-built Fourier domain optical coherence tomography (FD-OCT) system. This system currently processes and renders data in real time; its processing throughput is currently limited by the camera capture rate. OCT-phantoms have been heavily used for the qualitative characterisation and adjustment/fine-tuning of the operating conditions of OCT systems, and investigations are under way to characterise OCT systems using our phantoms. The work presented in this thesis demonstrates several novel techniques for fabricating OCT-phantoms and for accelerating OCT data processing using GPUs. In the process of developing the phantoms and quantitative methods, a thorough understanding and practical knowledge of OCT and fs laser processing systems was developed. This understanding led to several novel pieces of research that are not only relevant to OCT but have broader importance. For example, extensive understanding of the properties of fs-inscribed structures will be useful in other photonic applications such as the fabrication of phase masks, waveguides and microfluidic channels. Acceleration of data processing with GPUs is also useful in other fields.
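A minimal numpy sketch of the standard FD-OCT processing chain the thesis accelerates (background subtraction, windowing, inverse FFT to form an A-scan); the synthetic spectrum stands in for camera data, and the thesis implements this chain on the GPU rather than in numpy.

```python
# Standard FD-OCT A-scan generation from a (synthetic) interference spectrum.
import numpy as np

n = 2048                                          # camera pixels
k = np.linspace(-np.pi, np.pi, n)                 # resampled wavenumber axis
background = 1.0 + 0.1 * np.cos(0.5 * k)          # reference-arm spectrum
fringe = 0.05 * np.cos(300 * k)                   # reflector at "depth" 300
spectrum = background + fringe

corrected = spectrum - background                 # remove DC / reference term
windowed = corrected * np.hanning(n)              # suppress FFT sidelobes
a_scan = np.abs(np.fft.ifft(windowed))[: n // 2]  # one-sided depth profile

print(f"peak at depth bin {int(a_scan.argmax())}")  # ~300
```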
Abstract:
This study draws upon effectuation and causation as examples of planning-based and flexible decision-making logics, and investigates dynamics in the use of both logics. The study applies a longitudinal process research approach to investigate strategic decision-making in new venture creation over time. Combining qualitative and quantitative methods, we analyze 385 decision events across nine technology-based ventures. Our observations suggest a hybrid perspective on strategic decision-making, demonstrating how effectuation and causation logics are combined, and how entrepreneurs’ emphasis on these logics shifts and re-shifts over time. We induce a dynamic model which extends the literature on strategic decision-making in venture creation.
Abstract:
In this paper we apply cooperative game theory concepts to analyse a supply chain. The bullwhip effect in a two-stage (supplier-manufacturer) supply chain is captured in an Arrow-Karlin type model with linear inventory holding and convex production costs. It is assumed that both firms minimize their relevant costs. Two modes of operation are compared: a hierarchical decision-making system, in which first the manufacturer and then the supplier optimizes its position, and a centralized (cooperative) model, in which the firms minimize their joint cost. The question of how the participants should share the savings from the reduced bullwhip effect is answered with the tools of transferable utility cooperative game theory.
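A hedged two-player illustration of the allocation question: given characteristic-function costs for the supplier, the manufacturer and the coalition, the Shapley value splits the cooperative saving equally on top of the stand-alone costs. The cost figures below are made up, not outputs of the Arrow-Karlin model in the paper.

```python
# Two-player transferable-utility cost game with illustrative numbers:
# the Shapley value charges each firm its stand-alone cost minus half
# of the cooperative saving.
c = {frozenset(): 0.0,
     frozenset({"S"}): 60.0,            # supplier's cost alone
     frozenset({"M"}): 80.0,            # manufacturer's cost alone
     frozenset({"S", "M"}): 120.0}      # centralized (cooperative) cost

saving = c[frozenset({"S"})] + c[frozenset({"M"})] - c[frozenset({"S", "M"})]
shapley = {i: c[frozenset({i})] - saving / 2 for i in ("S", "M")}
print(f"total saving {saving}, cost shares {shapley}")  # {'S': 50.0, 'M': 70.0}
```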
Abstract:
The aim of the paper is to analyse a case drawn from business practice. A publishing house is considered. The publisher has contacts with a number of wholesaler/retailer enterprises, as well as direct contact with a group of customers. Book publishers operate on a project basis. The publisher faces the problem of how to allocate the print run of a newly published book among the wholesalers and retailers, and how many copies to hold back to satisfy customers directly. The publisher is assumed to have a buyback contract with the resellers. The demand for the book is unknown, but it can be estimated. The wholesalers/retailers maximize their profits. The problem can be modelled as a one-warehouse, N-retailer supply chain with non-identical demand distributions, and can be transformed into a game-theoretic problem. Demand is assumed to follow a Poisson distribution.
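The allocation described above rests on newsvendor logic; a hedged sketch under Poisson demand and a buyback contract follows, where each stocking point would order up to the critical fractile. Prices, costs and the demand rate are illustrative, not the publisher's data.

```python
# Newsvendor stocking level under Poisson demand with a buyback contract;
# all numbers are illustrative.
from scipy.stats import poisson

price, cost, buyback = 20.0, 12.0, 6.0   # retail price, wholesale cost, buyback
underage = price - cost                   # margin lost per unit of unmet demand
overage = cost - buyback                  # loss per unsold copy returned
critical_fractile = underage / (underage + overage)

demand_rate = 40                          # estimated mean demand (Poisson)
q = int(poisson.ppf(critical_fractile, demand_rate))
print(f"critical fractile {critical_fractile:.2f} -> stock {q} copies")
```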
Abstract:
A new correlation scheme (leading to a special equilibrium called “soft” correlated equilibrium) is introduced for finite games. After randomization over the outcome space, players have the choice either to follow the recommendation of an umpire blindly or to choose freely some other action, except the one suggested. This scheme can lead to Pareto-better outcomes than the simple extension introduced by [Moulin, H., Vial, J.-P., 1978. Strategically zero-sum games: the class of games whose completely mixed equilibria cannot be improved upon. International Journal of Game Theory 7, 201–221]. The informational and interpretational aspects of soft correlated equilibria are also discussed in detail. The power of the generalization is illustrated in the prisoner's dilemma and a congestion game.
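For reference, the standard prisoner's dilemma bimatrix (row player's payoff first); the payoff values are the usual textbook ones, not necessarily those used in the paper. Under the soft scheme, a player who declines the umpire's recommendation must choose an action other than the recommended one.

```latex
% Standard prisoner's dilemma bimatrix; values are conventional textbook
% payoffs satisfying T > R > P > S, not necessarily the paper's.
\[
\begin{array}{c|cc}
     & C     & D     \\ \hline
  C  & (3,3) & (0,4) \\
  D  & (4,0) & (1,1)
\end{array}
\]
```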
Abstract:
The study focuses on the relationship between foreign direct investment (FDI) and corruption. The authors assume that foreign direct investors prefer countries where the level of corruption is lower, as corruption is an additional risk factor that can increase the cost of investment. In their view this is best examined with quantitative methods, so in their analysis 79 countries are tested on ten-year averages, using the Gretl program and the OLS estimator. After running several models, their finding is that corruption is a significant factor in the decisions of foreign direct investors, with a negative correlation observed between corruption and FDI.
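A hedged sketch of the cross-country regression described above; the paper uses Gretl, while this illustration uses Python's statsmodels, and the synthetic variables stand in for the 79-country, ten-year-average dataset.

```python
# Illustrative OLS regression of FDI on a corruption score and one control;
# data are synthetic stand-ins, not the authors' dataset.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 79
corruption = rng.uniform(0, 10, n)       # e.g. a corruption-index score
gdp = rng.normal(0, 1, n)                # stand-in control variable
fdi = 5.0 - 0.4 * corruption + 0.8 * gdp + rng.normal(0, 1, n)

X = sm.add_constant(np.column_stack([corruption, gdp]))
model = sm.OLS(fdi, X).fit()
print(model.summary())                   # look for a negative, significant
                                         # coefficient on corruption
```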