994 results for Production forces
Abstract:
There has been increasing demand for higher beam intensity and sufficiently high beam energy for heavy-ion accelerators and other applications, which has driven electron cyclotron resonance (ECR) ion sources to produce higher-charge-state ions with higher beam intensity. One of the development trends for highly charged ECR ion sources is to build new-generation ECR sources using superconducting magnet technology. SECRAL (superconducting ECR ion source with advanced design in Lanzhou) was successfully built to produce intense beams of highly charged ions for the Heavy Ion Research Facility in Lanzhou (HIRFL). The ion source has been optimized for operation at 28 GHz for maximum performance. The superconducting magnet confinement configuration of the ion source consists of three axial solenoid coils and six sextupole coils with a cold-iron structure acting as field booster and clamping. An innovative feature of SECRAL is that the three axial solenoid coils are located inside the sextupole bore in order to reduce the interaction forces between the sextupole coils and the solenoid coils. For 28 GHz operation, the magnet assembly can produce peak mirror fields on axis of 3.6 T at injection and 2.2 T at extraction, and a radial sextupole field of 2.0 T at the plasma chamber wall. During the commissioning phase at 18 GHz with a stainless steel chamber, tests with various gases and some metals have been conducted with microwave power below 3.5 kW supplied by two 18 GHz rf generators, and the results demonstrate very promising performance. Some record ion beam intensities have been produced, for instance 810 eμA of O7+, 505 eμA of Xe20+, 306 eμA of Xe27+, and so on. The effect of the magnetic field configuration on the ion source performance has been studied experimentally. SECRAL has been in operation providing highly charged ion beams for the HIRFL facility since May 2007.
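The 28 GHz heating frequency fixes the electron cyclotron resonance field through the standard relation 2*pi*f = eB/m_e; the short sketch below evaluates that textbook formula (only the 28 GHz and 18 GHz frequencies are taken from the abstract, the rest is generic physical constants).

    # ECR resonance condition: 2*pi*f = e*B/m_e  =>  B_ecr = 2*pi*f*m_e/e
    import math

    e_charge = 1.602176634e-19   # elementary charge [C]
    m_e      = 9.1093837015e-31  # electron mass [kg]

    def ecr_field(freq_hz):
        """Magnetic field [T] at which electrons are resonant with a wave of frequency freq_hz."""
        return 2.0 * math.pi * freq_hz * m_e / e_charge

    print(ecr_field(28e9))  # ~1.0 T, well below the 3.6 T / 2.2 T mirror peaks quoted above
    print(ecr_field(18e9))  # ~0.64 T for the 18 GHz commissioning runs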
Abstract:
An electrolytic cell for aluminum production contains molten metal subject to high currents and magnetic flux density. The interaction between these two fields creates electromagnetic forces within the liquid metal and can generate oscillations of the fluid similar to waves at the free surface of oceans and rivers. The study of this phenomenon requires the simulation of the current density field and of the magnetic flux density field, and the solution of the equations of motion of the liquid mass. An attempt to analyze the dynamical behavior of this problem is made by coupling different codes, based on different numerical techniques, into a single tool. The simulations are presented and discussed.
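The driving term in such simulations is the Lorentz force density f = J x B acting on the liquid metal; a minimal sketch of that cross product is given below (all field values are invented placeholders, not data from the study).

    import numpy as np

    # Lorentz force density in the melt: f = J x B  [N/m^3]
    J = np.array([0.0, 0.0, 8.0e3])          # hypothetical current density [A/m^2], mostly vertical
    B = np.array([5.0e-3, 1.0e-2, 2.0e-2])   # hypothetical magnetic flux density [T]

    f = np.cross(J, B)   # horizontal force density that can tilt and rock the metal surface
    print(f)             # [N/m^3]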
Abstract:
The appearance of the open code paradigm and the demands of social movements have permeated the ways in which today’s cultural institutions are organized. This article analyzes the birth of a new critical and cooperative spatiality and how it is transforming current modes of cultural research and production. It centers on the potential for establishing the new means of cooperation that are being tested in what are defined as collaborative artistic laboratories. These are hybrid spaces of research and creation based on networked and cooperative structures producing a new societal-technical body that forces us to reconsider the traditional organic conditions of the productive scenarios of knowledge and artistic practice.
Abstract:
Master's dissertation, Marine Biology, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2015
Abstract:
This paper examines the changing production ecology of British pre-school television in light of developments since the mid-1990s and the specific role played by the BBC. Underpinning the research is the perception that pre-school television is characterised by a complex set of industry relationships and dependencies that demands content able to satisfy a wide range of international circumstances and commercial prerogatives. For the BBC this has created tension between its public service goals and commercial priorities. Pre-school programming began in Britain in 1950, but it was not until the mid-1990s that Britain emerged as a leading producer of pre-school programming worldwide, with government/industry reports regularly identifying the children’s production sector as an important contributor to exports. The rise of pre-school niche channels (CBeebies, Nick Junior, Playhouse Disney), audience fragmentation, and the internationalisation and commercialisation of markets have radically altered the funding base of children’s television and the relationships that the BBC enjoys with key players. The international success of much of its pre-school programming is based on the relationships it enjoys with independent producers who generate significant revenues from programme-related consumer products. This paper focuses on the complex and changing relationships between the BBC, independent producers and financiers that constitute the production ecology of pre-school television and shape its output. Within the broader setting of cultural production and global trends the paper investigates the following questions: 1) In the light of changes to the sector since the mid-1990s, what makes pre-school television significant both generally and as an ideal public service project? 2) What is the nature of the current funding crisis in British children’s television and what implications does this crisis have for the BBC’s involvement in pre-school television? 3) How is the Corporation reacting to and managing the wider commercial, cultural, regulatory and technological forces that are likely to affect its strategies for the commissioning, production and acquisition of pre-school content?
Abstract:
This paper considers the following question—where do computers, laptops and mobile phones come from and who produced them? Specific cases of digital labour are examined—the extraction of minerals in African mines under slave-like conditions; ICT manufacturing and assemblage in China (Foxconn); software engineering in India; call centre service work; software engineering at Google within Silicon Valley; and the digital labour of internet prosumers/users. Empirical data and empirical studies concerning these cases are systematically analysed and theoretically interpreted. The theoretical interpretations are grounded in Marxist political economy. The term ‘global value chain’ is criticised in favour of a complex and multidimensional understanding of Marx’s ‘mode of production’ for the purposes of conceptualizing digital labour. This kind of labour is transnational and involves various modes of production, relations of production and organisational forms (in the context of the productive forces). There is a complex global division of digital labour that connects and articulates various forms of productive forces, exploitation, modes of production, and variations within the dominant capitalist mode of production.
Abstract:
Agriculture plays a central role in the Earth system. Through its emissions of CO2, CH4 and N2O it contributes to the greenhouse effect; it can cause soil degradation and eutrophication and alter regional water cycles; and it will itself be strongly affected by climate change. Since all these processes are closely linked through the underlying nutrient and water fluxes, they should be treated within a consistent modelling approach. Until recently, however, a lack of data and insufficient process understanding prevented this at the global scale. This thesis presents the first version of such a consistent global modelling approach, with the focus on the simulation of agricultural yields and the resulting N2O emissions. The reason for this focus is that a correct representation of plant growth is an essential prerequisite for simulating all other processes. Furthermore, current and potential agricultural yields are important driving forces of land-use change and will be strongly affected by climate change. The second focus is the estimation of agricultural N2O emissions, since no process-based N2O model has so far been applied at the global scale. The existing agro-ecosystem model Daycent was chosen as the basis for the global modelling. Besides setting up the simulation environment, the required global data sets for soil parameters, climate and agricultural management were first compiled. Since no global database of planting dates is available so far, and since planting dates will shift with climate change, a routine for computing planting dates was developed. The results show good agreement with the FAO crop calendars that are available for some crops and countries. The Daycent model was then parameterised and calibrated for the yield calculation of wheat, rice, maize, soybean, millet, pulses, potato, cassava and cotton. The simulation results show that Daycent correctly reproduces the most important climate, soil and management effects on yield formation. Computed country averages agree well with FAO data (R2 = 0.66 for wheat, rice and maize; R2 = 0.32 for soybean), and spatial yield patterns largely correspond to the observed distribution of crops and to sub-national statistics. The modelling of agricultural N2O emissions with the Daycent model was preceded by a statistical analysis of N2O and NO emission measurements from natural and agricultural ecosystems. The parameters identified as significant for N2O (fertilizer amount, soil carbon content, soil pH, texture, crop type, fertilizer type) and for NO (fertilizer amount, soil nitrogen content, climate) largely agree with the results of an earlier analysis. For emissions from soils under natural vegetation, for which no such statistical analysis existed before, soil carbon content, soil pH, bulk density, drainage and vegetation type have a significant influence on N2O emissions, while NO emissions depend significantly on soil carbon content and vegetation type. Based on the statistical models developed from this analysis, global emissions from arable soils amount to 3.3 Tg N/y for N2O and 1.4 Tg N/y for NO.
Such statistical models are useful for deriving estimates and uncertainty ranges of N2O and NO emissions from a large number of measurements. However, the dynamics of soil nitrogen, influenced in particular by plant growth, climate change and land-use change, can only be accounted for by applying process-oriented models. For the modelling of N2O emissions with the Daycent model, its trace gas module was first extended with a more detailed calculation of nitrification and denitrification and with the consideration of freeze-thaw emissions. This revised model version was then tested against N2O emission measurements under different climates and crops. Both the dynamics and the totals of the N2O emissions are reproduced satisfactorily, with model efficiencies for monthly means between 0.1 and 0.66 for most sites. Based on the revised model version, N2O emissions were computed for the crops parameterised earlier. Emission rates and crop-specific differences largely agree with values reported in the literature. Fertilizer-induced emissions, currently estimated by the IPCC at 1.25 +/- 1% of the applied fertilizer amount, range from 0.77% (rice) to 2.76% (maize). The sum of the computed emissions from agricultural soils amounts to 2.1 Tg N2O-N/y for the mid-1990s, which is consistent with estimates from other studies.
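The fertilizer-induced emission factor quoted above (IPCC default 1.25%, Daycent range 0.77-2.76%) is simply the fertilized-minus-unfertilized N2O flux divided by the nitrogen applied; a minimal sketch of that bookkeeping, with invented flux numbers purely for illustration, is shown below.

    def fertilizer_induced_ef(n2o_fertilized, n2o_control, n_applied):
        """Fertilizer-induced N2O emission factor (fraction of applied N lost as N2O-N).

        n2o_fertilized, n2o_control : seasonal N2O-N emissions [kg N/ha]
        n_applied                   : fertilizer input [kg N/ha]
        """
        return (n2o_fertilized - n2o_control) / n_applied

    # hypothetical example: 2.1 kg N2O-N/ha with fertilizer, 0.7 without, 100 kg N/ha applied
    print(fertilizer_induced_ef(2.1, 0.7, 100.0))   # 0.014, i.e. 1.4%, the same order as the values above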
Abstract:
This paper seeks to illustrate the point that physical inconsistencies between thermodynamics and dynamics usually introduce nonconservative production/destruction terms in the local total energy balance equation in numerical ocean general circulation models (OGCMs). Such terms potentially give rise to undesirable forces and/or diabatic terms in the momentum and thermodynamic equations, respectively, which could explain some of the observed errors in simulated ocean currents and water masses. In this paper, a theoretical framework is developed to provide a practical method for determining such nonconservative terms, illustrated in the context of a relatively simple form of the hydrostatic Boussinesq primitive equations used in early versions of OGCMs, for which at least four main potential sources of energy nonconservation are identified. They arise from: (1) the “hanging” kinetic energy dissipation term; (2) assuming potential or conservative temperature to be a conservative quantity; (3) the interaction of the Boussinesq approximation with the parameterizations of turbulent mixing of temperature and salinity; (4) some adiabatic compressibility effects due to the Boussinesq approximation. In practice, OGCMs also possess spurious numerical energy sources and sinks, but these are not explicitly addressed here. Apart from (1), the identified nonconservative energy sources/sinks are not sign definite, allowing for possible widespread cancellation when integrated globally. Locally, however, these terms may be of the same order of magnitude as actual energy conversion terms thought to occur in the oceans. Although the actual impact of these nonconservative energy terms on the overall accuracy and physical realism of the simulated oceans is difficult to ascertain, an important issue is whether they could affect transient simulations and the transition toward different circulation regimes associated with a significant reorganization of the different energy reservoirs. Some possible solutions for improvement are examined. It is found that term (2) can be reduced by at least one order of magnitude by using conservative temperature instead of potential temperature. Using the anelastic approximation, however, which was initially thought to be a possible way to greatly improve the accuracy of the energy budget, would only marginally reduce term (4), with no impact on terms (1), (2) and (3).
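As a schematic illustration of the bookkeeping involved (the notation below is generic rather than the paper's own), the diagnosis amounts to writing the volume-integrated total energy budget of the discretised Boussinesq model with an explicit residual and attributing that residual to the sources (1)-(4):

    \frac{d}{dt}\int_V \Big(\tfrac{1}{2}\rho_0|\mathbf{u}|^2 + \rho g z + \rho_0 c_p \theta\Big)\,dV
      = \oint_{\partial V}\mathbf{F}_E\cdot d\mathbf{S} \;+\; \int_V S_{\mathrm{nc}}\,dV,
    \qquad
    S_{\mathrm{nc}} = \underbrace{-\rho_0\,\varepsilon}_{(1)} + S_{(2)} + S_{(3)} + S_{(4)},

where \varepsilon is the kinetic energy dissipation rate; only term (1) is sign definite, consistent with the statement above that the remaining sources/sinks may largely cancel in the global integral while remaining significant locally.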
Abstract:
The article looks at the most recent TV adaptations of the Grimms’ fairy tales by public broadcasting. Realized and marketed as a season which started in 2008, the thirty-four currently existing individual films constitute a significant national project that presents highly appealing notions of the German past to an audience divided over national conflict and the demands of globalization. With children and adolescents at the centre, the films present the young as a generation of moral superiority that facilitates social harmony and moral consensus. This post-unification utopia is beautifully realized on screen but rests on very conservative assumptions about gender, social driving forces, and political order.
Abstract:
We show that the tail of the chiral two-pion exchange nucleon-nucleon potential is proportional to the pion-nucleon (πN) scalar form factor and discuss how it can be translated into effective scalar meson interactions. We then construct a kernel for the process NN → πNN, due to the exchange of two pions, which may be used in either three-body forces or pion production in NN scattering. Our final expression involves a partial cancellation among three terms, due to chiral symmetry, but the net result is still important. We also find that, at large internucleon distances, the kernel has the same spatial dependence as the central NN potential and we produce expressions relating these processes directly.
Abstract:
We present a stochastic approach to nonequilibrium thermodynamics based on the expression of the entropy production rate advanced by Schnakenberg for systems described by a master equation. From the microscopic Schnakenberg expression we get the macroscopic bilinear form for the entropy production rate in terms of fluxes and forces. This is performed by placing the system in contact with two reservoirs with distinct sets of thermodynamic fields and by assuming an appropriate form for the transition rate. The approach is applied to an interacting lattice gas model in contact with two heat and particle reservoirs. On a square lattice, a continuous symmetry breaking phase transition takes place such that at the nonequilibrium ordered phase a heat flow sets in even when the temperatures of the reservoirs are the same. The entropy production rate is found to have a singularity at the critical point of the linear-logarithm type.
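For reference, Schnakenberg's microscopic expression for the entropy production rate of a system obeying a master equation, and the macroscopic bilinear form derived from it in the text, read (standard notation, not necessarily the paper's):

    \Pi = \frac{1}{2}\sum_{i,j}\bigl(W_{ij}P_j - W_{ji}P_i\bigr)\,\ln\frac{W_{ij}P_j}{W_{ji}P_i} \;\ge\; 0,
    \qquad
    \Pi = \sum_k J_k X_k,

where W_{ij} is the transition rate from state j to state i, P_j the occupation probability of state j, and J_k, X_k the thermodynamic fluxes and their conjugate forces (here, the heat and particle flows driven by the temperature and chemical-potential differences between the two reservoirs).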
Abstract:
The study presented here analyses changes in the rural landscape. In particular, it focuses on understanding the driving forces acting on the rural built environment, using a statistical spatial model implemented with GIS techniques. It is well known that the study of landscape change is essential for informed decision making in land planning. A review of the literature reveals a general lack of studies dealing with the modelling of the rural built environment, hence a theoretical modelling approach for this purpose is needed. Advances in technology and modernity in building construction and agriculture have gradually changed the rural built environment. In addition, the phenomenon of urbanization has determined the construction of new volumes beside abandoned or derelict rural buildings. Consequently, two main types of transformation dynamics affecting the rural built environment can be observed: the conversion of rural buildings and the increase in the number of buildings. The specific aim of this study is to propose a methodology for the development of a spatial model that allows the identification of the driving forces that have acted on building allocation. Indeed, one of the most concerning dynamics nowadays is the irrational expansion of building sprawl across the landscape. The proposed methodology consists of several conceptual steps covering the different aspects involved in developing a spatial model: the selection of a response variable that best describes the phenomenon under study, the identification of possible driving forces, the sampling methodology for data collection, the most suitable algorithm to adopt in relation to the statistical theory and method used, and the calibration and evaluation of the model. Different combinations of factors in different parts of the territory generated conditions that were more or less favourable for building allocation, and the existence of buildings represents the evidence of such an optimum; conversely, the absence of buildings expresses a combination of agents that is not suitable for building allocation. The presence or absence of buildings can therefore be adopted as an indicator of these driving conditions, since it represents the expression of the action of driving forces in the land-suitability sorting process. The existence of a correlation between site selection and hypothetical driving forces, evaluated by means of modelling techniques, provides evidence of which driving forces are involved in the allocation dynamic and an insight into their level of influence on the process. GIS software, by means of spatial analysis tools, makes it possible to associate the concept of presence and absence with point features, generating a point process. Presence or absence of buildings at given site locations represents the expression of the interaction of these driving factors. In the case of presences, points represent the locations of real existing buildings; conversely, absences represent locations where buildings do not exist, and are therefore generated by a stochastic mechanism. Possible driving forces are selected and the existence of a causal relationship with building allocation is assessed through a spatial model. The adoption of empirical statistical models provides a mechanism for the analysis of explanatory variables and for the identification of the key driving variables behind the site selection process for new building allocation.
The model developed according to this methodology is applied to a case study to test the validity of the methodology. The study area chosen for this test is the New District of Imola, characterized by a prevailing agricultural production vocation and where transformation dynamics have occurred intensively. The development of the model involved the identification of predictive variables (related to the geomorphological, socio-economic, structural and infrastructural systems of the landscape) capable of representing the driving forces responsible for landscape change. The calibration of the model is carried out on spatial data for the periurban and rural parts of the study area over the 1975-2005 period by means of a generalized linear model. The resulting output of the model fit is a continuous grid surface whose cells take probability values between 0 and 1 for building occurrence across the rural and periurban parts of the study area. The response variable thus assesses the changes in the rural built environment that occurred in this time interval and is related to the selected explanatory variables by means of a generalized linear model using logistic regression. By comparing the probability map obtained from the model with the actual distribution of rural buildings in 2005, the interpretive capability of the model can be evaluated. The proposed model can also be applied to the interpretation of trends in other study areas and over different time intervals, depending on the availability of data. The use of suitable data in terms of time, information and spatial resolution, and the costs related to data acquisition, pre-processing and survey, are among the most critical aspects of model implementation. Future in-depth studies can focus on using the proposed model to predict short- to medium-range future scenarios for the distribution of the rural built environment in the study area. In order to predict future scenarios it is necessary to assume that the driving forces do not change and that their levels of influence within the model are not far from those assessed for the calibration time interval.
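As a concrete illustration of the calibration step described above, a presence/absence response can be fitted against candidate driving-force covariates with a binomial GLM (logistic link). The sketch below uses statsmodels and entirely hypothetical covariates (slope, distance to roads, distance to towns) and synthetic data, not the variables or data actually used for the Imola district.

    import numpy as np
    import statsmodels.api as sm

    # toy design matrix: one row per sampled point (building present = 1, absent = 0)
    rng = np.random.default_rng(0)
    n = 500
    X = np.column_stack([
        rng.uniform(0, 30, n),       # slope [%]             (hypothetical driving force)
        rng.uniform(0, 5000, n),     # distance to road [m]  (hypothetical driving force)
        rng.uniform(0, 10000, n),    # distance to town [m]  (hypothetical driving force)
    ])
    # synthetic presence/absence labels so the example runs end to end
    logit_p = 0.5 - 0.03 * X[:, 0] - 0.0005 * X[:, 1] - 0.0002 * X[:, 2]
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

    # binomial GLM with a logistic link, i.e. logistic regression
    model = sm.GLM(y, sm.add_constant(X), family=sm.families.Binomial()).fit()
    print(model.summary())

    # evaluating the fitted model over a covariate grid would give the 0-1
    # occurrence-probability surface described in the abstract
    p_hat = model.predict(sm.add_constant(X))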
Abstract:
Spectroscopy of the 1S-2S transition of antihydrogen confined in a neutral atom trap, and comparison with the equivalent spectral line in hydrogen, will provide an accurate test of CPT symmetry and the first one in a mixed baryon-lepton system. Also, with neutral antihydrogen atoms, the gravitational interaction between matter and antimatter can be tested unperturbed by the much stronger Coulomb forces.
Antihydrogen is regularly produced at CERN's Antiproton Decelerator by three-body recombination (TBR) of one antiproton and two positrons. The method requires injecting antiprotons into a cloud of positrons, which raises the average temperature of the antihydrogen atoms produced well above the typical 0.5 K trap depths of neutral atom traps. Therefore only very few antihydrogen atoms can be confined at a time. Precision measurements, like laser spectroscopy, will greatly benefit from larger numbers of simultaneously trapped antihydrogen atoms.
Therefore, the ATRAP collaboration developed a different production method that has the potential to create much larger numbers of cold, trappable antihydrogen atoms. Positrons and antiprotons are stored and cooled in a Penning trap in close proximity. Laser-excited cesium atoms collide with the positrons, forming Rydberg positronium, a bound state of an electron and a positron. The positronium atoms are no longer confined by the electric potentials of the Penning trap and some drift into the neighboring cloud of antiprotons where, in a second charge-exchange collision, they form antihydrogen. The antiprotons remain at rest during the entire process, so much larger numbers of trappable antihydrogen atoms can be produced. Laser excitation is necessary to increase the efficiency of the process since the cross sections for charge-exchange collisions scale with the fourth power of the principal quantum number n.
This method, named double charge-exchange, was demonstrated by ATRAP in 2004. Since then, ATRAP has constructed a new combined Penning-Ioffe trap and a new laser system. The goal of this thesis was to implement the double charge-exchange method in this new apparatus and increase the number of antihydrogen atoms produced.
Compared to our previous experiment, we could raise the numbers of positronium and antihydrogen atoms produced by two orders of magnitude. Most of this gain is due to the larger positron and antiproton plasmas now available, but we also achieved significant improvements in the efficiencies of the individual steps. We therefore showed that double charge-exchange can produce numbers of antihydrogen atoms comparable to the TBR method, while the fraction of cold, trappable atoms is expected to be much higher. This work is therefore an important step towards precision measurements with trapped antihydrogen atoms.
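The n^4 scaling quoted above translates into very large gains for even modest Rydberg excitation; purely as an illustration (the quantum numbers chosen here are arbitrary, not ATRAP's actual operating point):

    \sigma_{\mathrm{ce}}(n) \propto n^4
    \quad\Longrightarrow\quad
    \frac{\sigma_{\mathrm{ce}}(n=30)}{\sigma_{\mathrm{ce}}(n=10)} = \left(\frac{30}{10}\right)^{4} = 81.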
Abstract:
Heroin prices are a reflection of supply and demand, and as in any other market, profits motivate participation. The intent of this research is to examine how changes in Afghan opium production due to political conflict affect Europe’s heroin market and government policies. Whether the Taliban remain in power or a new Afghan government is formed, the changes will affect the heroin market in Europe to a certain degree. In the heroin market, the degree of change depends on many socioeconomic forces such as law enforcement, corruption, and proximity to Afghanistan. An econometric model that examines the degree of these socioeconomic effects has not been applied to the heroin trade in Afghanistan before. This research uses a two-stage least squares econometric model to estimate the supply and demand of heroin in 36 countries from the Middle East to Western Europe in 2008. By applying the two-stage least squares model to the heroin market in Europe, the study attempts to predict the socioeconomic consequences of Afghan opium production.
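Two-stage least squares itself is mechanically simple: regress the endogenous regressor on the instruments, then regress the outcome on the fitted values. A minimal numpy sketch of that textbook procedure follows; the variable names in the closing comment are placeholders, not the actual specification or data of the 36-country study.

    import numpy as np

    def two_stage_least_squares(y, x_endog, Z_instruments, X_exog):
        """Textbook 2SLS: the first stage projects the endogenous regressor onto the
        instruments (plus exogenous controls); the second stage uses the fitted values."""
        n = len(y)
        ones = np.ones((n, 1))
        # first stage: endogenous variable ~ instruments + exogenous controls
        Z = np.hstack([ones, Z_instruments, X_exog])
        beta_fs, *_ = np.linalg.lstsq(Z, x_endog, rcond=None)
        x_endog_hat = Z @ beta_fs
        # second stage: outcome ~ fitted endogenous variable + exogenous controls
        X2 = np.hstack([ones, x_endog_hat.reshape(-1, 1), X_exog])
        beta_ss, *_ = np.linalg.lstsq(X2, y, rcond=None)
        return beta_ss

    # hypothetical use: y = heroin consumption, x_endog = retail price,
    # Z_instruments = e.g. distance to Afghanistan, X_exog = income, enforcement intensity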
Abstract:
Diminishing crude oil and natural gas supplies, along with concern about greenhouse gases, are major driving forces in the search for efficient renewable energy sources. The conversion of lignocellulosic biomass to energy and useful chemicals is one component of the solution. Ethanol is most commonly produced by enzymatic hydrolysis of complex carbohydrates to simple sugars followed by fermentation using yeast: C6H10O5 + H2O -(enzymes)-> C6H12O6 -(yeast)-> 2 CH3CH2OH + 2 CO2. In the U.S., corn is the primary starting raw material for commercial ethanol production. However, there is insufficient corn available to meet the future demand for ethanol as a gasoline additive. Consequently a variety of processes are being developed for producing ethanol from biomass, among which is the NREL process for the production of ethanol from white hardwood. The objective of the thesis reported here was to perform a technical and economic analysis of the hardwood-to-ethanol process. In this analysis a greenfield plant was compared with co-locating the ethanol plant adjacent to a Kraft pulp mill; the advantage of the latter case is that facilities can be shared between ethanol production and pulp production. Preliminary process designs were performed for three cases: a base case of 2205 dry tons/day of hardwood (52 million gallons of ethanol per year) as well as half and double this size. The thermal efficiency of the NREL process was estimated to be approximately 36%; that is, about 36% of the thermal energy in the wood is retained in the product ethanol and by-product electrical energy. The discounted cash flow rate of return on investment and the net present value methods of evaluating process alternatives were used to evaluate the economic feasibility of the NREL process. The minimum acceptable discounted cash flow rate of return after taxes was assumed to be 10%. In all of the process alternatives investigated, the dominant cost factors are the capital recovery charges and the cost of wood. The greenfield NREL process is not economically viable, with the cost of producing ethanol varying from $2.58 to $2.08/gallon for the half-capacity and double-capacity cases respectively. The co-location cases appear more promising due to reductions in capital costs: in the most profitable co-location case, the discounted cash flow rate of return improved from 8.5% for the half-capacity case to 20.3% for the double-capacity case. Due to economy of scale, the investment becomes more profitable as the size of the plant increases. This is limited by the amount of wood that can be delivered to the plant on a sustainable basis as well as by the demand for ethanol within a reasonable distance of the plant.
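The two evaluation criteria named above are standard; a minimal sketch of how they are computed follows (the cash-flow numbers are invented for illustration only, not the thesis's estimates).

    def npv(rate, cashflows):
        """Net present value of end-of-year cash flows, year 0 first."""
        return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

    def dcf_rate_of_return(cashflows, lo=0.0, hi=1.0, tol=1e-6):
        """Discount rate at which NPV = 0 (the discounted cash flow rate of return), by bisection."""
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if npv(mid, cashflows) > 0:
                lo = mid
            else:
                hi = mid
            if hi - lo < tol:
                break
        return 0.5 * (lo + hi)

    # hypothetical project: 100 M$ investment, then 18 M$/year net cash flow for 15 years
    flows = [-100.0] + [18.0] * 15
    print(npv(0.10, flows))            # NPV at the 10% minimum acceptable rate used in the thesis
    print(dcf_rate_of_return(flows))   # discounted cash flow rate of return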