13 results for Input-Output Model
in Helda - Digital Repository of the University of Helsinki
Abstract:
The aim of the study is to produce new information on the structure and short-term development of the Finnish economy in the 1920s and 1930s. The study was carried out by compiling an input-output table describing the economy for the year 1928, together with its extension, an input-output model. The data are used to describe the structural interdependencies of the economy, the key industries of production, and their impact on the economy. In addition, the study examines the import dependency of the economy and the effect of import tariffs on prices during the depression of the 1930s. Based on the study, the key industries of the Finnish economy in 1928 could be identified: agriculture, forestry, the food industry, the wood industry, the paper industry, and construction. The strong role of the food industry in particular was perhaps surprising, especially considering how little attention the industry has received in economic history research. The study showed that Finnish exports were more capital-intensive than imports. Although the interpretation of this result calls for caution, the study was able to demonstrate and quantify in detail the labour and capital input shares of output by industry. The input-output model was used to assess the impact on the economy of the changes in final demand that occurred in the wood industry, the paper industry, and construction during 1928-32. A notable finding is that the change in final demand for construction had a very large growth-reducing effect on the whole economy. The collapse of building investment caused a drop in national output of nearly 13 per cent. The impact was even greater than that of the collapse of wood industry exports. The results also show that private consumption was of great importance to the economy. For example, the collapse of wood industry exports caused a reduction in output of more than 4 per cent, but when the accompanying decline in private consumption was also included in the model, the total effect was more than 10 per cent.
Including private consumption in the model thus more than doubled the industries' effects on the economy. The results confirmed conclusions on tariff policy presented in earlier studies and showed that the food industry, closely linked to agriculture, was the most protected industry in the economy. Other home-market industries, however, did not benefit from tariff policy during the depression. An input-output price model showed that tariff policy was not as successful as contemporary studies claimed; at best, tariffs were able to slow the decline in prices. The appendix to the study presents all the key statistical tables describing the Finnish economy in 1928, including the use and supply tables, the input-output tables, the input coefficients, the Leontief inverse matrix, and the labour and capital input coefficients.
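The final-demand impact figures reported above come from the standard open Leontief model, in which gross output x satisfies x = (I - A)^-1 f for a final-demand vector f. A minimal sketch of the calculation (the two-industry coefficient matrix below is invented for illustration, not the 1928 Finnish table):

```python
import numpy as np

# Hypothetical two-industry input coefficient matrix: A[i, j] is the input
# from industry i needed per unit of output of industry j.
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])

# Leontief inverse: total (direct + indirect) output required per unit of
# final demand.
L = np.linalg.inv(np.eye(2) - A)

# Gross outputs consistent with a final-demand vector f.
f = np.array([100.0, 50.0])
x = L @ f

# Impact of a final-demand shock, e.g. a collapse in investment demand for
# industry 2: the output change is L @ df.
df = np.array([0.0, -20.0])
dx = L @ df
```

With the 1928 table the same calculation extends directly to n industries; a finding such as "the construction collapse cut output by nearly 13 per cent" corresponds to comparing such a dx against total gross output.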
Abstract:
Agriculture is an economic activity that relies heavily on the availability of natural resources. Through its role in food production, agriculture is a major factor affecting public welfare and health, and its indirect contribution to gross domestic product and employment is significant. Agriculture also contributes to numerous ecosystem services through management of rural areas. However, the environmental impact of agriculture is considerable and reaches far beyond the agroecosystems. The questions related to farming for food production are, thus, manifold and of great public concern. Improving the environmental performance of agriculture and the sustainability of food production ('sustainabilizing' food production) calls for the application of a wide range of expert knowledge. This study falls within the field of agro-ecology, with interfaces to food systems and sustainability research, and exploits the methods typical of industrial ecology. The research in these fields extends from multidisciplinary to interdisciplinary and transdisciplinary, a holistic approach being the key tenet. The methods of industrial ecology have been applied extensively to explore the interaction between human economic activity and resource use. Specifically, the material flow approach (MFA) has established its position through application of systematic environmental and economic accounting statistics. However, very few studies have applied MFA specifically to agriculture. In this thesis, the MFA approach was applied in such a context in Finland. The focus of this study is the ecological sustainability of primary production. The aim was to explore the possibilities of assessing the ecological sustainability of agriculture by using two different approaches. In the first approach, the MFA methods from industrial ecology were applied to agriculture, whereas the second is based on food consumption scenarios.
The two approaches were used in order to capture some of the impacts of dietary changes and of changes in production mode on the environment. The methods were applied at levels ranging from national to sector and local levels. Through the supply-demand approach, the viewpoint changed between that of food production and that of food consumption. The main data sources were official statistics complemented with published research results and expert appraisals. The MFA approach was used to define the system boundaries, to quantify the material flows and to construct eco-efficiency indicators for agriculture. The results were further elaborated for an input-output model that was used to analyse the food flux in Finland and to determine its relationship to the economy-wide physical and monetary flows. The methods based on food consumption scenarios were applied at regional and local level for assessing the feasibility and environmental impacts of relocalising food production. The approach was also used for quantification and source allocation of greenhouse gas (GHG) emissions of primary production. The GHG assessment thus provided a means of cross-checking the results obtained by using the two different approaches. MFA data as such, or expressed as eco-efficiency indicators, are useful in describing the overall development. However, the data are not sufficiently detailed for identifying the hot spots of environmental sustainability. Eco-efficiency indicators should not be bluntly used in environmental assessment: the carrying capacity of nature, the potential exhaustion of non-renewable natural resources and the possible rebound effect also need to be accounted for when striving towards improved eco-efficiency. The input-output model is suitable for nationwide economy analyses, and it shows the distribution of monetary and material flows among the various sectors.
Environmental impact can be captured only at a very general level in terms of total material requirement, gaseous emissions, energy consumption and agricultural land use. Improving the environmental performance of food production requires more detailed and more local information. The approach based on food consumption scenarios can be applied at regional or local scales. Based on various diet options, the method accounts for the feasibility of re-localising food production and the environmental impacts of such re-localisation in terms of nutrient balances, gaseous emissions, agricultural energy consumption, agricultural land use and diversity of crop cultivation. The approach is applicable anywhere, but the calculation parameters need to be adjusted so as to comply with the specific circumstances. The food consumption scenario approach thus pays attention to the variability of production circumstances, and may provide environmental information that is locally relevant. The approaches based on the input-output model and on food consumption scenarios represent small steps towards more holistic systemic thinking. However, neither one alone, nor the two together, provides sufficient information for sustainabilizing food production. The environmental performance of food production should be assessed together with the other criteria of sustainable food provisioning. This requires evaluation and integration of research results from many different disciplines in the context of a specified geographic area. A foodshed area that comprises both the rural hinterlands of food production and the population centres of food consumption is suggested as a suitable areal extent for such research. Finding a balance between the various aspects of sustainability is a matter of optimal trade-offs. The balance cannot be universally determined; the assessment methods and the actual measures depend on what the bottlenecks of sustainability are in the area concerned.
These have to be agreed upon among the actors of the area.
Abstract:
The paradigm of computational vision hypothesizes that any visual function -- such as the recognition of your grandparent -- can be replicated by computational processing of the visual input. What are these computations that the brain performs? What should or could they be? Working on the latter question, this dissertation takes the statistical approach, in which one attempts to learn the suitable computations from natural visual data itself. In particular, we empirically study the computational processing that emerges from the statistical properties of the visual world and the constraints and objectives specified for the learning process. This thesis consists of an introduction and 7 peer-reviewed publications, where the purpose of the introduction is to illustrate the area of study to a reader who is not familiar with computational vision research. In the scope of the introduction, we briefly overview the primary challenges to visual processing, as well as recall some of the current opinions on visual processing in the early visual systems of animals. Next, we describe the methodology we have used in our research, and discuss the presented results. We have included in this discussion some additional remarks, speculations and conclusions that were not featured in the original publications. We present the following results in the publications of this thesis. First, we empirically demonstrate that luminance and contrast are strongly dependent in natural images, contradicting previous theories suggesting that luminance and contrast were processed separately in natural systems due to their independence in the visual data. Second, we show that simple-cell-like receptive fields of the primary visual cortex can be learned in the nonlinear contrast domain by maximization of independence.
Further, we provide first-time reports of the emergence of conjunctive (corner-detecting) and subtractive (opponent orientation) processing due to nonlinear projection pursuit with simple objective functions related to sparseness and response energy optimization. Then, we show that attempting to extract independent components of nonlinear histogram statistics of a biologically plausible representation leads to projection directions that appear to differentiate between visual contexts. Such processing might be applicable for priming, i.e., the selection and tuning of later visual processing. We continue by showing that a different kind of thresholded low-frequency priming can be learned and used to make object detection faster with little loss in accuracy. Finally, we show that in a computational object detection setting, nonlinearly gain-controlled visual features of medium complexity can be acquired sequentially as images are encountered and discarded. We present two online algorithms to perform this feature selection, and propose the idea that for artificial systems, some processing mechanisms could be selectable from the environment without optimizing the mechanisms themselves. In summary, this thesis explores learning visual processing on several levels. The learning can be understood as an interplay of input data, model structures, learning objectives, and estimation algorithms. The presented work adds to the growing body of evidence showing that statistical methods can be used to acquire intuitively meaningful visual processing mechanisms. The work also presents some predictions and ideas regarding biological visual processing.
Abstract:
Research on corporate responsibility has traditionally focused on the responsibilities of companies within their corporate boundaries only. Yet this view is challenged today, as more and more companies face situations in which the environmental and social performance of their suppliers, distributors, industry or other associated partners impacts on their sales performance and brand equity. Simultaneously, policy-makers have taken up the discussion on corporate responsibility from the perspective of globalisation, in particular of global supply chains. The category of selecting and evaluating suppliers has also entered the field of environmental reporting. Companies thus need to tackle their responsibility in collaboration with different partners. The aim of the thesis is to further the understanding of collaboration and corporate environmental responsibility beyond corporate boundaries. Drawing on the fields of supply chain management and industrial ecology, the thesis sets out to investigate inter-firm collaboration on three different levels: between the company and its stakeholders, in the supply chain, and in the demand network of a company. The thesis comprises four papers: Paper A discusses the use of different research approaches in logistics and supply chain management. Paper B introduces the study on collaboration and corporate environmental responsibility from a focal company perspective, looking at the collaboration of companies with their stakeholders, and the salience of these stakeholders. Paper C widens this perspective to an analysis on the supply chain level. The focus here is not only beyond corporate boundaries, but also beyond direct supplier and customer interfaces in the supply chain. Paper D then extends the analysis to the demand network level, taking into account the input-output, competitive and regulatory environments in which a company operates. The results of the study broaden the view of corporate responsibility.
By applying this broader view, different types of inter-firm collaboration can be highlighted. Results also show how environmental demand is extended in the supply chain regardless of the industry background of the company.
Abstract:
Purpose: This study investigates boards of directors in small firms and explores the link between board effectiveness and the composition, roles and working styles of the boards. Design/methodology/approach: The study analyses data from a telephone survey of boards in 45 small firms. The survey included both the CEO and the chairperson of the board. Findings: The study identifies three groups of small firms: ‘paperboards’, ‘professional boards’, and ‘management lead’ boards. Results show that board composition, board roles and board working style influence board effectiveness in small firms. Research limitations/implications: Although the present study has found a link between board effectiveness and the role, composition and working style of boards of small firms, other potentially influential factors are also worthy of investigation; for example, the personal characteristics of the individuals involved, generational factors in family firms, and the situational circumstances of various firms. Practical implications: The study reveals that, in practice, the management team and the board are substantially intertwined in small firms. Originality/value: The main contributions are that the study explores how boards in small firms actually function and gives a detailed account of their composition and roles. More insight into this issue is important given the overemphasis within the governance literature on input-output studies using samples of large publicly listed firms.
Abstract:
Human activities extract and displace different substances and materials from the earth's crust, thus causing various environmental problems, such as climate change, acidification and eutrophication. As problems have become more complicated, more holistic measures that consider the origins and sources of pollutants have been called for. Industrial ecology is a field of science that forms a comprehensive framework for studying the interactions between the modern technological society and the environment. Industrial ecology considers humans and their technologies to be part of the natural environment, not separate from it. Industrial operations form natural systems that must also function as such within the constraints set by the biosphere. Industrial symbiosis (IS) is a central concept of industrial ecology. Industrial symbiosis studies look at the physical flows of materials and energy in local industrial systems. In an ideal IS, waste material and energy are exchanged by the actors of the system, thereby reducing the consumption of virgin material and energy inputs and the generation of waste and emissions. Companies are seen as part of the chains of suppliers and consumers that resemble those of natural ecosystems. The aim of this study was to analyse the environmental performance of an industrial symbiosis based on pulp and paper production, taking into account life cycle impacts as well. Life Cycle Assessment (LCA) is a tool for quantitatively and systematically evaluating the environmental aspects of a product, technology or service throughout its whole life cycle. Moreover, the Natural Step Sustainability Principles formed a conceptual framework for assessing the environmental performance of the case study symbiosis (Paper I). The environmental performance of the case study symbiosis was compared to four counterfactual reference scenarios in which the actors of the symbiosis operated on their own.
The research methods used were process-based life cycle assessment (LCA) (Papers II and III) and hybrid LCA, which combines both process and input-output LCA (Paper IV). The results showed that the environmental impacts caused by the extraction and processing of the materials and the energy used by the symbiosis were considerable. If only the direct emissions and resource use of the symbiosis had been considered, less than half of the total environmental impacts of the system would have been taken into account. When the results were compared with the counterfactual reference scenarios, the net environmental impacts of the symbiosis were smaller than those of the reference scenarios. The reduction in environmental impacts was mainly due to changes in the way energy was produced. However, the results are sensitive to the way the reference scenarios are defined. LCA is a useful tool for assessing the overall environmental performance of industrial symbioses. It is recommended that in addition to the direct effects, the upstream impacts should be taken into account as well when assessing the environmental performance of industrial symbioses. Industrial symbiosis should be seen as part of the process of improving the environmental performance of a system. In some cases, it may be more efficient, from an environmental point of view, to focus on supply chain management instead.
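The hybrid LCA idea — measured on-site emissions for the foreground system, plus upstream impacts estimated by pushing its purchases through an input-output model — can be sketched in a few lines (all coefficients below are invented for illustration, not the study's data):

```python
import numpy as np

# Illustrative two-sector background economy (invented coefficients).
A = np.array([[0.1, 0.2],   # inter-industry input coefficients
              [0.3, 0.1]])
e = np.array([0.5, 1.2])    # direct emissions per euro of sector output

L = np.linalg.inv(np.eye(2) - A)  # Leontief inverse

# Hybrid LCA: direct emissions of the foreground process are measured on
# site; upstream emissions are estimated from its purchases via the
# input-output model.
purchases = np.array([10.0, 4.0])   # euros bought from each sector
direct_process = 6.0                # measured on-site emissions
upstream = e @ L @ purchases
total = direct_process + upstream
```

In this toy example the upstream share dominates the direct emissions, mirroring the study's observation that counting only direct flows would miss more than half of the total impacts.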
Abstract:
Costs of purchasing new piglets and of feeding them until slaughter are the main variable expenditures in pig fattening. They both depend on slaughter intensity, the nature of feeding patterns and the technological constraints of pig fattening, such as genotype. Therefore, it is of interest to examine the effect of production technology and changes in input and output prices on feeding and slaughter decisions. This study examines the problem by using a dynamic programming model that links genetic characteristics of a pig to feeding decisions and the timing of slaughter and takes into account how these jointly affect the quality-adjusted value of a carcass. The model simulates the growth mechanism of a pig under optional feeding and slaughter patterns and then solves the optimal feeding and slaughter decisions recursively. The state of nature and the genotype of a pig are known in the analysis. The main contribution of this study is the dynamic approach that explicitly takes into account carcass quality while simultaneously optimising feeding and slaughter decisions. The method maximises the internal rate of return to the capacity unit. Hence, the results can have a vital impact on the competitiveness of pig production, which is known to be quite capital-intensive. The results suggest that a producer can benefit significantly from improvements in the pig's genotype, because they improve the efficiency of pig production. The annual benefits from obtaining pigs of improved genotype can be more than €20 per capacity unit. The annual net benefits of animal breeding to pig farms can also be considerable. Animals of improved genotype can reach optimal slaughter maturity quicker and produce leaner meat than animals of poor genotype. In order to fully utilise the benefits of animal breeding, the producer must adjust feeding and slaughter patterns on the basis of genotype. The results suggest that the producer can benefit from flexible feeding technology.
The flexible feeding technology segregates pigs into groups according to their weight, carcass leanness, genotype and sex and thereafter optimises feeding and slaughter decisions separately for these groups. Typically, such a technology provides incentives to feed piglets with protein-rich feed such that the genetic potential to produce leaner meat is fully utilised. When the pig approaches slaughter maturity, the share of protein-rich feed in the diet gradually decreases and the amount of energy-rich feed increases. Generally, the optimal slaughter weight is within the weight range that pays the highest price per kilogram of pig meat. The optimal feeding pattern and the optimal timing of slaughter depend on price ratios. Particularly, an increase in the price of pig meat provides incentives to increase the growth rates up to the pig's biological maximum by increasing the amount of energy in the feed. Price changes and changes in slaughter premium can also have large income effects. Key words: barley, carcass composition, dynamic programming, feeding, genotypes, lean, pig fattening, precision agriculture, productivity, slaughter weight, soybeans
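The backward-recursive structure of such a feeding-and-slaughter model can be illustrated with a deliberately simplified sketch (all growth rates, feed costs and prices below are invented; the thesis model additionally tracks carcass quality and genotype):

```python
def price_per_kg(w):
    # Quality-adjusted price: a premium within the target weight window,
    # echoing "the weight range that pays the highest price per kilogram".
    return 1.6 if 80 <= w <= 100 else 1.3

FEED = {"low": (4, 2.0), "high": (7, 3.5)}  # weekly gain (kg), weekly cost

T = 30                      # planning horizon in weeks
weights = range(30, 131)    # live-weight states (kg)
V = {w: price_per_kg(w) * w for w in weights}  # at the horizon: slaughter
policy = {}

# Backward recursion: at each week, either slaughter now or feed and grow.
for t in reversed(range(T)):
    V_new = {}
    for w in weights:
        best, act = price_per_kg(w) * w, "slaughter"
        for name, (gain, cost) in FEED.items():
            value = V[min(w + gain, 130)] - cost
            if value > best:
                best, act = value, name
        V_new[w] = best
        policy[(t, w)] = act
    V = V_new

# Follow the optimal policy forward from a 30 kg piglet.
t, w_final = 0, 30
while t < T and policy[(t, w_final)] != "slaughter":
    w_final = min(w_final + FEED[policy[(t, w_final)]][0], 130)
    t += 1
```

Here the optimal policy feeds the pig up into the premium weight window before slaughtering, consistent with the finding that the optimal slaughter weight lies within the best-paying weight range.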
Abstract:
Various reasons, such as ethical issues in maintaining blood resources, growing costs, and strict requirements for safe blood, have increased the pressure for efficient use of resources in blood banking. The competence of blood establishments can be characterized by their ability to predict the volume of blood collection to be able to provide cellular blood components in a timely manner as dictated by hospital demand. The stochastically varying clinical need for platelets (PLTs) sets a specific challenge for balancing supply with requests. Labour has been proven a primary cost-driver and should be managed efficiently. International comparisons of blood banking could recognize inefficiencies and allow reallocation of resources. Seventeen blood centres from 10 countries in continental Europe, Great Britain, and Scandinavia participated in this study. The centres were national institutes (5), parts of the local Red Cross organisation (5), or integrated into university hospitals (7). This study focused on the departments of blood component preparation of the centres. The data were obtained retrospectively by computerized questionnaires completed via Internet for the years 2000-2002. The data were used in four original articles (numbered I through IV) that form the basis of this thesis. Non-parametric data envelopment analysis (DEA, II-IV) was applied to evaluate and compare the relative efficiency of blood component preparation. Several models were created using different input and output combinations. The focus of comparisons was on the technical efficiency (II-III) and the labour efficiency (I, IV). An empirical cost model was tested to evaluate the cost efficiency (IV). Purchasing power parities (PPP, IV) were used to adjust the costs of the working hours and to make the costs comparable among countries. The total annual number of whole blood (WB) collections varied from 8,880 to 290,352 in the centres (I). 
Significant variation was also observed in the annual volume of produced red blood cells (RBCs) and PLTs. The annual number of PLTs produced by any method varied from 2,788 to 104,622 units. In 2002, 73% of all PLTs were produced by the buffy coat (BC) method, 23% by aphaeresis and 4% by the platelet-rich plasma (PRP) method. The annual discard rate of PLTs varied from 3.9% to 31%. The mean discard rate (13%) remained in the same range throughout the study period and demonstrated similar levels and variation in 2003-2004 according to a specific follow-up question (14%, range 3.8%-24%). The annual PLT discard rates were, to some extent, associated with production volumes. The mean RBC discard rate was 4.5% (range 0.2%-7.7%). Technical efficiency showed marked variation (median 60%, range 41%-100%) among the centres (II). Compared to the efficient departments, the inefficient departments used excess labour resources (and probably) production equipment to produce RBCs and PLTs. Technical efficiency tended to be higher when the (theoretical) proportion of lost WB collections (total RBC+PLT loss) from all collections was low (III). The labour efficiency varied remarkably, from 25% to 100% (median 47%) when working hours were the only input (IV). Using the estimated total costs as the input (cost efficiency) revealed an even greater variation (13%-100%) and overall lower efficiency level compared to labour only as the input. In cost efficiency only, the savings potential (observed inefficiency) was more than 50% in 10 departments, whereas labour and cost savings potentials were both more than 50% in six departments. The association between department size and efficiency (scale efficiency) could not be verified statistically in the small sample. In conclusion, international evaluation of the technical efficiency in component preparation departments revealed remarkable variation. 
A suboptimal combination of manpower and production output levels was the major cause of inefficiency, and the efficiency did not directly relate to production volume. Evaluation of the reasons for discarding components may offer a novel approach to study efficiency. DEA was proven applicable in analyses including various factors as inputs and outputs. This study suggests that analytical models can be developed to serve as indicators of technical efficiency and promote improvements in the management of limited resources. The work also demonstrates the importance of integrating efficiency analysis into international comparisons of blood banking.
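The input-oriented, constant-returns-to-scale DEA model used in such efficiency comparisons can be sketched as a small linear program in its envelopment form (the three-department data below are hypothetical, not the study's):

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, j0):
    """Input-oriented CRS (CCR) efficiency of unit j0, envelopment form.
    X: inputs (m x n), Y: outputs (s x n); columns are the n units."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]            # variables [theta, lambdas]
    # X @ lam <= theta * x0   ->   -x0 * theta + X @ lam <= 0
    A_in = np.hstack([-X[:, [j0]], X])
    # Y @ lam >= y0           ->   -Y @ lam <= -y0
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, j0]],
                  bounds=[(0, None)] * (1 + n))
    return res.fun  # efficiency score in (0, 1]

# Hypothetical data: working hours (input) and components produced
# (output) for three preparation departments.
X = np.array([[2.0, 4.0, 3.0]])
Y = np.array([[2.0, 2.0, 3.0]])
scores = [dea_efficiency(X, Y, j) for j in range(3)]
```

The middle department's score of 0.5 means it could radially shrink its labour input to half the observed level and still produce its output on the best-practice frontier; the study's multi-input, multi-output models simply widen the X and Y matrices.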
Abstract:
Phosphorus is a nutrient needed in crop production. While boosting crop yields, it may also accelerate eutrophication in the surface waters receiving the phosphorus runoff. The privately optimal level of phosphorus use is determined by the input and output prices, and the crop response to phosphorus. Socially optimal use also takes into account the impact of phosphorus runoff on water quality. Increased eutrophication decreases the economic value of surface waters by deteriorating fish stocks, curtailing the potential for recreational activities and increasing the probabilities of mass algae blooms. In this dissertation, the optimal use of phosphorus is modelled as a dynamic optimization problem. The potentially plant-available phosphorus accumulated in soil is treated as a dynamic state variable, the control variable being the annual phosphorus fertilization. For crop response to phosphorus, the state variable is more important than the annual fertilization. The level of this state variable is also a key determinant of the runoff of dissolved, reactive phosphorus. The loss of particulate phosphorus due to erosion is also considered in the thesis, as well as its mitigation by constructing vegetative buffers. The dynamic model is applied to crop production on clay soils. At the steady state, the analysis focuses on the effects of prices, damage parameterization, discount rate and soil phosphorus carryover capacity on optimal steady-state phosphorus use. The economic instruments needed to sustain the social optimum are also analyzed. According to the results, the economic incentives should be conditioned on soil phosphorus values directly, rather than on annual phosphorus applications. The results also emphasize the substantial effects that differences in the discount rates of the farmer and the social planner have on optimal instruments. The thesis analyzes the optimal soil phosphorus paths from alternative initial levels.
It also examines how the erosion susceptibility of a parcel affects these optimal paths. The results underline the significance of the prevailing soil phosphorus status on optimal fertilization levels. With very high initial soil phosphorus levels, both the privately and socially optimal phosphorus application levels are close to zero as the state variable is driven towards its steady state. The soil phosphorus processes are slow. Therefore, depleting high-phosphorus soils may take decades. The thesis also presents a methodologically interesting phenomenon in problems of maximizing the flow of discounted payoffs. When both the benefits and damages are related to the same state variable, the steady state solution may have an interesting property, under very general conditions: the tail of the payoffs of the privately optimal path, as well as the steady state, may provide a higher social welfare than the respective tail of the socially optimal path. The result is formalized and applied to the framework of optimal phosphorus use created in the thesis.
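The structure of the dynamic problem — annual fertilization as the control, slowly adjusting soil phosphorus as the state — can be sketched as a discounted value iteration on a coarse grid (all functional forms and parameters below are invented for illustration):

```python
import math

beta, delta, b = 0.95, 0.1, 0.02        # discount factor, P decay, P gain per kg
P_grid = [0.5 * i for i in range(41)]   # soil phosphorus state, 0..20
F_grid = [5.0 * i for i in range(21)]   # annual fertilization, 0..100 kg/ha

def payoff(P, f, damage_weight):
    crop = 120 * (1 - math.exp(-0.25 * P))  # Mitscherlich-type yield value
    damage = damage_weight * 4.0 * P        # dissolved-P runoff ~ soil P
    return crop - 1.0 * f - damage

def snap(P):  # index of the nearest grid point for next period's state
    return min(max(round(P / 0.5), 0), 40)

def solve(damage_weight, iters=300):
    V = [0.0] * len(P_grid)
    pol = [0.0] * len(P_grid)
    for _ in range(iters):
        V_new = []
        for i, P in enumerate(P_grid):
            best, best_f = -1e18, 0.0
            for f in F_grid:
                P_next = (1 - delta) * P + b * f   # slow carryover dynamics
                v = payoff(P, f, damage_weight) + beta * V[snap(P_next)]
                if v > best:
                    best, best_f = v, f
            V_new.append(best)
            pol[i] = best_f
        V = V_new
    return V, pol

# Private optimum ignores runoff damage; the social optimum prices it.
_, private_pol = solve(damage_weight=0.0)
_, social_pol = solve(damage_weight=1.0)
```

In this toy parameterisation the social planner fertilizes weakly less across soil-phosphorus states and settles at a lower steady-state soil phosphorus level, and from a high initial state both policies apply little phosphorus while the state decays.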
Abstract:
This thesis studies empirically whether measurement errors in aggregate production statistics affect sentiment and future output. Initial announcements of aggregate production are subject to measurement error, because many of the data required to compile the statistics are produced with a lag. This measurement error can be gauged as the difference between the latest revised statistic and its initial announcement. Assuming aggregate production statistics help forecast future aggregate production, these measurement errors are expected to affect macroeconomic forecasts. Assuming agents’ macroeconomic forecasts affect their production choices, these measurement errors should affect future output through sentiment. This thesis is primarily empirical, so the theoretical basis, strategic complementarity, is discussed quite briefly. In short, it is a model in which higher aggregate production increases each agent’s incentive to produce. In this circumstance, a statistical announcement which suggests aggregate production is high would increase each agent’s incentive to produce, thus resulting in higher aggregate production. In this way, the existence of strategic complementarity provides the theoretical basis for output fluctuations caused by measurement mistakes in aggregate production statistics. Previous empirical studies suggest that measurement errors in gross national product affect future aggregate production in the United States. Additionally, it has been demonstrated that measurement errors in the Index of Leading Indicators affect forecasts by professional economists as well as future industrial production in the United States. This thesis aims to verify the applicability of these findings to other countries, as well as to study the link between measurement errors in gross domestic product and sentiment. This thesis explores the relationship between measurement errors in gross domestic product, sentiment, and future output.
Professional forecasts and consumer sentiment in the United States and Finland, as well as producer sentiment in Finland, are used as the measures of sentiment. Using statistical techniques, it is found that measurement errors in gross domestic product affect forecasts and producer sentiment. The effect on consumer sentiment is ambiguous. The relationship between measurement errors and future output is explored using data from Finland, the United States, the United Kingdom, New Zealand and Sweden. It is found that measurement errors have affected aggregate production or investment in Finland, the United States, the United Kingdom and Sweden. Specifically, it was found that overly optimistic statistics announcements are associated with higher output, and vice versa.
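The identifying idea — regressing future output on the revised statistic and, separately, on the revision error — can be illustrated on synthetic data (the coefficient 0.3 and all distributions are invented; with real data the error series comes from comparing initial announcements to later revisions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

true_growth = rng.normal(0.5, 1.0, n)   # latest revised GDP growth
meas_error = rng.normal(0.0, 0.5, n)    # error in the initial announcement
announced = true_growth + meas_error    # initial statistical announcement

# Suppose, via the sentiment channel, next-period output responds to the
# announcement, and hence also to its error component (true effect 0.3).
future_output = 0.3 * announced + rng.normal(0.0, 0.2, n)

# The revision error is the initial announcement minus the revised figure.
revision_error = announced - true_growth

# OLS of future output on revised growth and the revision error.
X = np.column_stack([np.ones(n), true_growth, revision_error])
beta_hat, *_ = np.linalg.lstsq(X, future_output, rcond=None)
```

A significantly positive coefficient on the revision error is the signature looked for here: overly optimistic announcements predicting higher subsequent output.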
Abstract:
We present a search for standard model Higgs boson production in association with a W boson in proton-antiproton collisions at a center-of-mass energy of 1.96 TeV. The search employs data collected with the CDF II detector that correspond to an integrated luminosity of approximately 1.9 inverse fb. We select events consistent with a signature of a single charged lepton, missing transverse energy, and two jets. Jets corresponding to bottom quarks are identified with a secondary vertex tagging method, a jet probability tagging method, and a neural network filter. We use kinematic information in an artificial neural network to improve discrimination between signal and background compared to previous analyses. The observed number of events and the neural network output distributions are consistent with the standard model background expectations, and we set 95% confidence level upper limits on the production cross section times branching fraction ranging from 1.2 to 1.1 pb, or 7.5 to 102 times the standard model expectation, for Higgs boson masses from 110 to 150 GeV/c^2, respectively.
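The notion of a 95% confidence level upper limit on a production rate can be illustrated with a generic Poisson counting experiment; this is a flat-prior Bayesian sketch with made-up event counts, not the CDF analysis, which sets limits from full neural network output distributions:

```python
import numpy as np
from math import exp, factorial

def poisson_like(n, mu):
    """Poisson likelihood of observing n events given mean mu."""
    return mu ** n * exp(-mu) / factorial(n)

def upper_limit(n_obs, b, cl=0.95, s_max=50.0, steps=20000):
    """95% CL Bayesian upper limit on signal s with known background b,
    flat prior on s >= 0, via numerical integration of the posterior."""
    s = np.linspace(0.0, s_max, steps)
    post = np.array([poisson_like(n_obs, si + b) for si in s])
    cdf = np.cumsum(post)
    cdf /= cdf[-1]
    return s[np.searchsorted(cdf, cl)]

# Hypothetical counting experiment: 5 events observed, 4.2 expected background.
print(f"95% CL upper limit on signal: {upper_limit(5, 4.2):.2f} events")
```

Dividing such a limit on the signal event count by luminosity and efficiency would give the limit on cross section times branching fraction.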
Abstract:
This research investigates the impacts of agricultural market liberalization on food security in developing countries, evaluating the supply-side perspective of food security. The theme is applied to the agricultural sectors of Kenya and Zambia by studying the role policies played in the maize sub-sector. Selected policies introduced at the beginning of the 1980s are evaluated, along with an assessment of whether those policies influenced maize output. A theoretical model of agricultural production is then formulated to reflect cereal production in a developing-country setting. The study begins with a review of the general framework and aims of the structural adjustment programs and proceeds to their application to the maize sector in Kenya and Zambia. A literature review of the supply and demand synthesis of food security is presented, with examples from various developing countries. Contrary to previous studies on food security, this study assesses two countries with divergent economic orientations, and evaluates the agricultural sector's response to economic and institutional policies in these different settings. Finally, a dynamic time series econometric model is applied to assess the effects of policy on maize output. The empirical findings suggest a weak policy influence on maize output; instead, the precipitation and acreage variables stand out as the core determinants. The policy dimension of acreage, and how markets influence it, is not discussed at length in this study: owing to weak land rights and tenure structures in these countries, the direct impact of policy change on land markets cannot be precisely measured. Recurring government intervention during the structural policy implementation period impeded the efficient functioning of input and output markets, particularly in Zambia.
Input and output prices of maize and fertilizer responded more strongly in Kenya than in Zambia, where the state often ceded to public pressure by revoking pertinent policy measures. These interpretations rest on the policy variables, which were more responsive in Kenya than in Zambia. According to the regression results obtained, agricultural markets in general, and the maize sub-sector in particular, responded more positively to the implemented policies in Kenya than in Zambia, which maintained a more socialist economic system. The results indicate that for policies to be effective, sector and regional dimensions need to be considered; these dimensions were not taken into account in the formulation and implementation of the structural adjustment policies of the 1980s. Countries with vibrant economic structures and institutions fared better than those with firm, socially founded systems.
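The kind of dynamic time series specification described above can be sketched as follows; the data-generating process, variable names, and coefficient values are hypothetical, chosen only to show output depending on its own lag, precipitation, acreage, and a liberalization policy dummy:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 30
rain = rng.normal(100, 15, T)                # precipitation index
acreage = rng.normal(50, 5, T)               # planted area
policy = (np.arange(T) >= 15).astype(float)  # reform dummy from mid-sample

# Simulate maize output from a dynamic supply equation with noise.
output = np.empty(T)
output[0] = 100.0
for t in range(1, T):
    output[t] = (20 + 0.4 * output[t - 1] + 0.3 * rain[t]
                 + 0.5 * acreage[t] + 1.0 * policy[t] + rng.normal(0, 2))

# OLS on y_t = a + b*y_{t-1} + c*rain_t + d*acreage_t + e*policy_t.
X = np.column_stack([np.ones(T - 1), output[:-1],
                     rain[1:], acreage[1:], policy[1:]])
beta, *_ = np.linalg.lstsq(X, output[1:], rcond=None)
print(dict(zip(["const", "lag", "rain", "acreage", "policy"], beta.round(2))))
```

A small and imprecisely estimated coefficient on the policy dummy, alongside sizeable rain and acreage coefficients, would mirror the pattern of weak policy influence reported above.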
Abstract:
An extensive electricity transmission network facilitates electricity trading between Finland, Sweden, Norway and Denmark. Currently most of the area's power generation is traded at NordPool, where trading volumes have steadily increased since the exchange was founded in the early 1990s. The Nordic electricity market is expected to follow the current trend and integrate further with the other European electricity markets. Hydro power supplies roughly half of the Nordic electricity market, and most of it is generated in Norway. This dominant role of hydro power distinguishes the Nordic market from most other marketplaces. Hydro power production varies mainly with hydro reservoir levels and the demand for electricity, and the reservoirs are in turn driven by water inflows that differ from year to year. Reservoir levels explain much of the behaviour of the Nordic electricity market. Kauppi and Liski (2008), among others, have therefore developed a model that analyzes market behaviour using hydro reservoirs as explanatory factors. Their model produces, for example, the welfare loss due to socially suboptimal hydro reservoir usage, the socially optimal electricity price, hydro reservoir storage, and thermal reservoir storage; these are referred to as outcomes. The model does not describe the actual market but rather an ideal situation in which a single agent controls all power generation reserves; this is referred to as the socially optimal strategy. The article by Kauppi and Liski (2008) also considers an individual agent holding a certain fraction of market power, e.g. 20% or 30%, but to maintain the focus of this thesis that part of their paper is omitted. The goal of this thesis is twofold. Firstly, the results of the socially optimal strategy are extended to the years 2006-08, as the earlier study ends in 2005.
The second objective is to improve on the methods of the previous study. The thesis produces several outcomes (spot price, welfare loss, etc.) under socially optimal actions. The welfare loss is of particular interest because it describes the inefficiency of the market, while the spot price is an important output for market participants as it often affects end users' electricity bills. The model is also modified in an attempt to improve it through more accurate input data, e.g. by considering the effect of emissions trading (pollution rights) on the inputs. After the modifications, new welfare losses are calculated and compared with the corresponding results before the modifications. The hydro reservoir has the highest explanatory significance in the model, followed by thermal power. In the Nordic markets, thermal power reserves consist mostly of nuclear power and other thermal sources (coal, natural gas, oil, peat). It can be argued that hydro and thermal reservoirs together determine electricity supply. Roughly speaking, the model takes into account electricity demand and supply, and several related parameters (water inflow, oil price, etc.), finally yielding the socially optimal outcomes. The author is not aware of any similar model having been tested before. Some other studies come close to the Kauppi and Liski (2008) model but have a somewhat different focus: a specific feature of the model is its emphasis on long-run capacity usage, which differs from previous studies of short-run market power. The closest study, on California's wholesale electricity markets, uses a different methodology. The thesis is organized as follows.
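The planner's problem behind the socially optimal strategy can be illustrated with a stylized two-period hydro-thermal allocation; the demand figures, reservoir size, and cost function are assumptions for illustration, not parameters from Kauppi and Liski (2008):

```python
import numpy as np

demand = np.array([120.0, 80.0])  # electricity demand in each period
reservoir = 100.0                 # total hydro energy available

def thermal_cost(q):
    # Convex thermal cost, i.e. rising marginal cost of thermal generation.
    return 0.05 * q ** 2

# Grid search over hydro released in period 1; the rest is saved for period 2.
h1 = np.linspace(0.0, reservoir, 1001)
h2 = reservoir - h1
cost = (thermal_cost(np.maximum(demand[0] - h1, 0.0))
        + thermal_cost(np.maximum(demand[1] - h2, 0.0)))
best = h1[np.argmin(cost)]
# With convex costs the optimum equalizes residual thermal generation across
# periods: 120 - h1 = 80 - (100 - h1) gives h1 = 70.
print(f"optimal period-1 hydro release: {best:.1f}")
```

Comparing the actual release path with this cost-minimizing one is, in spirit, how a welfare loss from socially suboptimal reservoir usage can be quantified.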