996 results for CONVENTIONAL THEORY


Relevance: 60.00%

Abstract:

The last decade has witnessed very fast development in microfabrication technologies. The increasing industrial applications of microfluidic systems call for more intensive and systematic knowledge of this newly emerging field. Especially for gaseous flow and heat transfer at the microscale, the applicability of conventional theories developed at the macroscale is not yet completely validated, mainly because experimental data for gas flows are scarce in the literature. The objective of this thesis is to investigate these unclear elements by analyzing forced convection for gaseous flows through microtubes and micro heat exchangers. Experimental tests have been performed with microtubes of various inner diameters, namely 750 µm, 510 µm and 170 µm, over a wide range of Reynolds numbers covering the laminar region, the transitional zone and the onset of the turbulent regime. The results show that conventional theory is able to predict the flow friction factor when flow compressibility does not appear and the effect of temperature-dependent fluid properties is insignificant. A double-layered microchannel heat exchanger has been designed in order to study experimentally the efficiency of a gas-to-gas micro heat exchanger. This microdevice contains 133 parallel microchannels machined into polished PEEK plates on both the hot side and the cold side. The microchannels are 200 µm high, 200 µm wide and 39.8 mm long. The microdevice was designed so that partition foils of different materials and thicknesses can be tested. Experimental tests have been carried out with five different partition foils, at various mass flow rates and flow configurations. The experimental results indicate that the thermal performance of counter-current and cross-flow micro heat exchangers can be strongly influenced by axial conduction in the partition foil separating the hot and cold gas flows.
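The friction-factor comparison described above can be sketched as follows; the correlation choices (Hagen-Poiseuille for laminar flow, Blasius for the early turbulent regime) and the transition threshold are the standard macro-scale conventions, not values taken from the thesis:

```python
import math

def darcy_friction_factor(re: float) -> float:
    """Conventional (macro-scale) prediction of the Darcy friction factor:
    Hagen-Poiseuille in the laminar regime, Blasius above transition."""
    if re <= 0:
        raise ValueError("Reynolds number must be positive")
    if re < 2300:                     # laminar regime
        return 64.0 / re
    return 0.3164 * re ** -0.25       # Blasius, valid up to roughly Re ~ 1e5

def friction_factor_from_pressure_drop(dp, d, length, rho, u):
    """Recover the experimental Darcy friction factor from a measured
    pressure drop dp (Pa) over a microtube of diameter d and length
    `length` (m), for fluid density rho (kg/m^3) and mean velocity u (m/s);
    incompressible form only."""
    return dp * d / (length * 0.5 * rho * u ** 2)
```

Comparing the two functions on the same data point is then a direct check of how far a measured microtube departs from conventional theory.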

Relevance: 60.00%

Abstract:

The most significant consequence of air pollution is the increased concentration of ozone (O3) in the troposphere over the last 150 years. Ozone is a photochemical oxidant and a greenhouse gas which, as the most important precursor of the hydroxyl radical OH, strongly influences the oxidising capacity of the atmosphere. Understanding that oxidising capacity and its influence on climate requires detailed knowledge of the photochemistry of ozone and of its precursors, the nitrogen oxides (NOx), in the troposphere, and hence of the formation and destruction mechanisms of both. The marine boundary layer (MBL), largely untouched by human activity, is an important region for chemical ozone destruction. So far, however, hardly any trace-gas measurements have been carried out there, and the photochemical processes in this region are poorly studied. Since about 70% of the Earth's surface is covered by oceans, the processes occurring in the MBL can be regarded as significant for the atmosphere as a whole, which makes a close investigation of this region worthwhile. Estimating photochemical ozone production and destruction, and quantifying the influence of anthropogenic emissions on tropospheric ozone, requires up-to-date NOx measurements at the pptv level for this region. The necessary measurements of NO, NO2, O3, J(NO2), J(O1D), HO2, OH, ROx and several meteorological parameters were performed during the cruise of the French research vessel Marion-Dufresne across the southern Atlantic (28°S-57°S, 46°W-34°E) in March 2007. The NO and NO2 values recorded are the lowest measured to date.
The data obtained during the campaign were tested for consistency with the conditions of photochemical steady state (PSS). A deviation from PSS was found which, under low-NOx conditions (5 to 25 pptv), produces an unexpected trend in the Leighton ratio that depends on the NOx mixing ratio and the J(NO2) intensity; significant deviations from the ratio occur as J(NO2) increases. These results show that the deviation from PSS does not coincide with the minimum of the NOx concentrations and J(NO2) values, as previous theoretical studies suggested, and can be read as evidence of additional photochemical processes at higher J(NO2) in a low-NOx system. The central result of this study is the verification of the Leighton ratio, which characterises the PSS, at very low NOx concentrations in the MBL. The findings of this thesis demonstrate that purely photochemical ozone destruction takes place under MBL conditions, with photolysis as the main daytime cause. From the measured parameters, the critical NO level was estimated at 5 to 9 pptv, comparatively low relative to previous studies. This may mean that the ozone production/destruction potential of the southern Atlantic responds considerably more sensitively to the availability of NO than other regions do. In addition, a direct comparison of the measured species with the output of a three-dimensional atmospheric-chemistry general circulation model (EMAC) along the exact ship track was carried out.
To check the consistency of the measurements with the current understanding of atmospheric radical chemistry, a steady-state point model was developed that uses the data obtained during the cruise for its calculations. A comparison between measured and modelled ROx concentrations in a low-NOx environment shows that conventional theory is insufficient to reproduce the observations. The possible reasons for this, and the consequences, are discussed in this thesis.
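The PSS check described above rests on the Leighton ratio; a minimal sketch, assuming the commonly recommended JPL-style Arrhenius parameters for the NO + O3 rate constant and number densities in molecules cm⁻³ (neither taken from the thesis):

```python
import math

def k_no_o3(temp_k: float) -> float:
    """Rate constant for NO + O3 -> NO2 + O2 in cm^3 molecule^-1 s^-1,
    Arrhenius form with commonly recommended (JPL-style) parameters."""
    return 3.0e-12 * math.exp(-1500.0 / temp_k)

def leighton_ratio(j_no2, no2, no, o3, temp_k=298.0):
    """phi = J(NO2) [NO2] / (k [NO] [O3]); phi ~ 1 at photochemical steady
    state, and deviations point to extra NO -> NO2 conversion, e.g. by
    peroxy radicals. Concentrations in molecules cm^-3, J(NO2) in s^-1."""
    return (j_no2 * no2) / (k_no_o3(temp_k) * no * o3)
```

Plotting this ratio against the measured NOx mixing ratio and J(NO2) is exactly the kind of diagnostic the campaign analysis performs.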

Relevance: 60.00%

Abstract:

For conventional neoclassical theory, the functioning of credit markets suffers from two basic difficulties: imperfect or asymmetric information, and adverse selection of the borrower combined with the risk of default. At the other, more heterodox extreme, collectivism, social capital and the formation of informal networks and credit societies tend to reduce the transaction costs tied to the financing problem; this type of organisation facilitates the development of cooperativism in the economy. In Mexico, after the Revolution and the consolidation of an authoritarian, vertical political regime, cooperativism became one of the main vehicles of peasant organisation for obtaining credit from public and private banks. In the country's northwest and the Baja California peninsula, fishing, ranching, agricultural and transport cooperatives marked an upswing in economic activity and one of the channels of the region's development. The purpose of the article is to outline what kind of cooperativism took shape in the Northern District of the Baja California peninsula and what relationship it maintained with the national cooperative movement between 1930 and 1950.


Relevance: 60.00%

Abstract:

This thesis reviews and analyzes some key aspects of the behavior of sensors based on TSM (Thickness Shear Mode) piezoelectric resonators, and their application to the study and characterization of two viscoelastic media of great interest: magnetorheological fluids and microbial biofilms. The operation of these sensors is based on measuring their resonant properties, which change on contact with the material to be analyzed. A multi-frequency analysis was carried out, working in several resonance modes of the transducer, in some applications even simultaneously (impulse excitation). Phenomena such as the presence of micro-contacts on the sensor surface and the resonance of viscoelastic layers of finite thickness were reviewed; these can affect quartz sensors contrary to the predictions of conventional theory (Sauerbrey and Kanazawa), leading to positive resonant frequency shifts. In addition, the effect of non-uniform deposition on the piezoelectric resonator was studied. Polyurethane deposits were measured, and the resonator response to these deposits was modelled by FEM. The numerical model makes it possible to study the behavior of the resonator as different geometric variables of the deposited layer (thickness, surface area, non-uniformity and deposition zone) are modified. It was shown that, for thicknesses of approximately a quarter to half a wavelength, a non-uniform viscoelastic layer on the sensor surface amplifies the positive resonance frequency shift relative to a uniform layer. The geometric pattern of the sensor's sensitivity, which is also non-uniform over its surface, was analyzed as well. 
TSM sensors were applied to study the viscoelastic changes occurring in several magnetorheological fluids (MRFs) subjected to different shear stresses controlled by a rheometer. A direct relationship was found between various rheological parameters obtained with the rheometer (normal force, G', G'', shear rate, shear stress ...) and the acoustic parameters, characterizing the MRFs both in the absence of a magnetic field and with magnetic fields of different intensities applied. The advantages of this measurement technique over one based on a commercial rheometer were examined: it characterizes in greater detail several relevant aspects of the fluid, such as particle sedimentation (fluid stability), the breakdown of the structures formed in the MRF both with and without a magnetic field, and the stiffness of the micro-contacts that appear between particles and surfaces. Quartz sensors were also used to monitor in real time the formation of Staphylococcus epidermidis and Escherichia coli biofilms on uncoated quartz crystal resonators, testing strains with different biofilm-forming capacities. It was shown that, once a first homogeneous adhesion of bacteria to the substrate has occurred, the biofilm can be treated as a semi-infinite layer: the quartz sensor reflects the viscoelastic properties of the region immediately adjacent to the resonator and is not sensitive to what happens in the upper strata of the biofilm. The experiments allowed the complex stiffness modulus of the biofilms to be characterized at several frequencies, showing that the characteristic parameter indicating biofilm adhesion, for both S. epidermidis and E. coli, is an increase in G' (related to the elasticity or stiffness of the layer), which is accompanied by an increase in the resonance frequency of the sensor.
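The "conventional theory" the abstract contrasts itself with can be made concrete; a sketch of the Sauerbrey (rigid thin film) and Kanazawa-Gordon (semi-infinite Newtonian liquid) frequency shifts for an AT-cut quartz resonator, using standard quartz constants rather than parameters from the thesis:

```python
import math

RHO_Q = 2648.0    # density of quartz, kg/m^3 (AT-cut)
MU_Q = 2.947e10   # shear modulus of AT-cut quartz, Pa

def sauerbrey_shift(f0, delta_m, area):
    """Frequency shift (Hz) for a thin rigid layer of mass delta_m (kg)
    on a resonator of fundamental frequency f0 (Hz) and active area (m^2).
    Always negative: added rigid mass lowers the resonance frequency."""
    return -2.0 * f0 ** 2 * delta_m / (area * math.sqrt(RHO_Q * MU_Q))

def kanazawa_shift(f0, rho_liq, eta_liq):
    """Frequency shift (Hz) for loading by a semi-infinite Newtonian liquid
    of density rho_liq (kg/m^3) and viscosity eta_liq (Pa s)."""
    return -f0 ** 1.5 * math.sqrt(rho_liq * eta_liq / (math.pi * RHO_Q * MU_Q))
```

Positive measured shifts, as reported above for micro-contacts and finite-thickness viscoelastic layers, are precisely what these two expressions can never produce.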

Relevance: 60.00%

Abstract:

Analysis of the use of ICT in the aerospace industry has prompted the detailed investigation of an inventory-planning problem. There is a special class of inventory, consisting of expensive repairable spares for use in support of aircraft operations. These items, called rotables, are not well served by conventional theory and systems for inventory management. The context of the problem, the aircraft maintenance industry sector, is described in order to convey some of its special characteristics in the context of operations management. A literature review is carried out to seek existing theory that can be applied to rotable inventory and to identify a potential gap into which newly developed theory could contribute. Current techniques for rotable planning are identified in industry and the literature: these methods are modelled and tested using inventory and operational data obtained in the field. In the expectation that current practice leaves much scope for improvement, several new models are proposed. These are developed and tested on the field data for comparison with current practice. The new models are revised following testing to give improved versions. The best model developed and tested here comprises a linear programming optimisation, which finds an optimal level of inventory for multiple test cases, reflecting changing operating conditions. The new model offers an inventory plan that is up to 40% less expensive than that determined by current practice, while maintaining required performance.
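The planning idea can be sketched roughly as follows. In this toy version the service constraints decouple per item, so the optimum reduces to the per-item maximum demand across operating scenarios; the thesis's actual linear-programming model adds couplings (changing operating conditions, multiple test cases) that require a real LP solver, and all names and numbers here are illustrative:

```python
def optimal_rotable_stock(unit_costs, scenario_demands):
    """Minimal sketch of rotable stock planning: hold the cheapest inventory
    that still covers demand in every operating scenario.

    unit_costs:       list of per-unit holding costs, one per item
    scenario_demands: list of demand lists, one per scenario

    With only per-item coverage constraints the LP optimum is the per-item
    maximum demand over scenarios; returns (stock levels, total cost)."""
    n_items = len(unit_costs)
    stock = [max(d[i] for d in scenario_demands) for i in range(n_items)]
    cost = sum(c * x for c, x in zip(unit_costs, stock))
    return stock, cost
```

For example, two rotable types priced 10 and 4 per unit, with scenario demands (2, 5) and (3, 1), need stock (3, 5) at total cost 50.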

Relevance: 40.00%

Abstract:

The effect on the scattering amplitude of the existence of a pole in the angular momentum plane near J = 1 in the channel with the quantum numbers of the vacuum is calculated. This is then compared with a fourth-order calculation of the scattering of neutral vector mesons from a fermion-pair field in the limit of large momentum transfer. The presence of the third double spectral function in the perturbation amplitude complicates the identification of pole trajectory parameters, and the limitations of previous methods of treating this are discussed. A gauge-invariant scheme for extracting the contribution of the vacuum trajectory is presented which gives agreement with unitarity predictions, but further calculations must be done to determine the position and slope of the trajectory at s = 0. The residual portion of the amplitude is compared with the Gribov singularity.
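The pole contribution referred to above takes the standard signatured Regge form; this is the schematic textbook parametrization, not an expression quoted from the paper, and the residue β(t) and scale s₀ are generic placeholders:

```latex
% Even-signature Regge-pole contribution of a trajectory alpha(t) near J = 1
% (schematic; beta(t) and s_0 are not specified in the abstract).
A(s,t) \;\sim\; \beta(t)\,
        \frac{1 + e^{-i\pi\alpha(t)}}{\sin\pi\alpha(t)}
        \left(\frac{s}{s_0}\right)^{\alpha(t)},
\qquad \alpha(0) \approx 1 .
```

Via the optical theorem this gives a total cross-section behaving as $\sigma_{\mathrm{tot}} \propto s^{\alpha(0)-1}$, i.e. roughly constant for a vacuum trajectory with intercept near 1, which is why the position and slope of the trajectory at s = 0 matter.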

Relevance: 40.00%

Abstract:

Background: In the context of the established finding that theory-of-mind (ToM) growth is seriously delayed in late-signing deaf children, and some evidence of equivalent delays in those learning speech with conventional hearing aids, this study's novel contribution was to explore ToM development in deaf children with cochlear implants. Implants can substantially boost auditory acuity and rates of language growth. Despite the implant, there are often problems socialising with hearing peers and some language difficulties, lending special theoretical interest to the present comparative design. Methods: A total of 52 children aged 4 to 12 years took a battery of false belief tests of ToM. There were 26 oral deaf children, half with implants and half with hearing aids, evenly divided between oral-only versus sign-plus-oral schools. Comparison groups of age-matched high-functioning children with autism and younger hearing children were also included. Results: No significant ToM differences emerged between deaf children with implants and those with hearing aids, nor between those in oral-only versus sign-plus-oral schools. Nor did the deaf children perform any better on the ToM tasks than their age peers with autism. Hearing preschoolers scored significantly higher than all other groups. For the deaf and the autistic children, as well as the preschoolers, rate of language development and verbal maturity significantly predicted variability in ToM, over and above chronological age. Conclusions: The finding that deaf children with cochlear implants are as delayed in ToM development as children with autism and their deaf peers with hearing aids or late sign language highlights the likely significance of peer interaction and early fluent communication with peers and family, whether in sign or in speech, in order to optimally facilitate the growth of social cognition and language.

Relevance: 30.00%

Abstract:

This thesis discusses various aspects of the integrity monitoring of GPS applied to civil aircraft navigation in different phases of flight, including en route, terminal, non-precision approach and precision approach. The thesis covers four major topics: the probability problem of the GPS navigation service, risk analysis of aircraft precision approach and landing, theoretical analysis of Receiver Autonomous Integrity Monitoring (RAIM) techniques and RAIM availability, and GPS integrity monitoring at a ground reference station. Particular attention is paid to the mathematical aspects of the GPS integrity monitoring system. The research is built upon the stringent integrity requirements defined by the civil aviation community, and concentrates on investigating the capability and performance of practical integrity monitoring systems with rigorous mathematical and statistical concepts and approaches. The major contributions of this research are:
• Rigorous integrity and continuity risk analysis for aircraft precision approach. Based on the joint probability density function of the contributing components, the integrity and continuity risks of aircraft precision approach with DGPS were computed, advancing the conventional method of allocating the risk probability.
• A theoretical study of RAIM test power. This is the first theoretical study of RAIM test power based on probability and statistical theory, resulting in a new set of RAIM criteria.
• Development of a GPS integrity monitoring and DGPS quality control system based on a GPS reference station. A prototype GPS integrity monitoring and DGPS correction prediction system has been developed and tested, based on the AUSNAV GPS base station on the roof of the QUT ITE Building.
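The snapshot RAIM tests analysed above are built on the least-squares residuals of the pseudorange fit; a pure-Python sketch of the residual statistic (the 2-column geometry matrix and the fault threshold in the usage below are illustrative, not the thesis's new criteria, and a real GPS fit solves for 4 unknowns):

```python
def lstsq_residual_ssq(g, y):
    """Sum of squared residuals of a least-squares fit y ~ g x, solved via
    the normal equations with Gaussian elimination (pure Python; fine for
    small RAIM-sized systems). Under fault-free zero-mean Gaussian noise
    the statistic is chi-square distributed with m - n degrees of freedom."""
    m, n = len(g), len(g[0])
    a = [[sum(g[k][i] * g[k][j] for k in range(m)) for j in range(n)]
         for i in range(n)]
    b = [sum(g[k][i] * y[k] for k in range(m)) for i in range(n)]
    for col in range(n):                      # elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(a[i][j] * x[j] for j in range(i + 1, n))) / a[i][i]
    resid = [y[k] - sum(g[k][j] * x[j] for j in range(n)) for k in range(m)]
    return sum(r * r for r in resid)

def raim_alarm(g, pseudorange_resid, threshold):
    """Snapshot RAIM: flag a fault when the residual statistic exceeds a
    threshold set from the chi-square false-alarm budget."""
    return lstsq_residual_ssq(g, pseudorange_resid) > threshold
```

A consistent measurement set yields a near-zero statistic, while a single 10-unit range fault pushes it well past any reasonable threshold.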

Relevance: 30.00%

Abstract:

The concept of radar was developed for the estimation of the distance (range) and velocity of a target from a receiver. The distance measurement is obtained by measuring the time taken for the transmitted signal to propagate to the target and return to the receiver. The target's velocity is determined by measuring the Doppler-induced frequency shift of the returned signal, caused by the rate of change of the time-delay from the target. As researchers further developed conventional radar systems, it became apparent that additional information was contained in the backscattered signal and that this information could in fact be used to describe the shape of the target itself. This is because a target can be considered to be a collection of individual point scatterers, each of which has its own velocity and time-delay. Delay-Doppler parameter estimation of each of these point scatterers thus corresponds to a mapping of the target's range and cross-range, producing an image of the target. Much research has been done in this area since the early radar imaging work of the 1960s. At present, radar imaging falls into two main categories: the case where the backscattered signal is considered to be deterministic, and the case where the backscattered signal is of a stochastic nature. In both cases the information which describes the target's scattering function is extracted by the use of the ambiguity function, a function which correlates the backscattered signal in time and frequency with the transmitted signal. In practical situations it is often necessary to have the transmitter and the receiver of the radar system sited at different locations. The problem in these situations is that a reference signal must then be present in order to calculate the ambiguity function. 
This causes an additional problem, in that detailed phase information about the transmitted signal is then required at the receiver. It is this latter problem which has led to the investigation of radar imaging using time-frequency distributions. As will be shown in this thesis, the phase information about the transmitted signal can be extracted from the backscattered signal using time-frequency distributions. The principal aim of this thesis was the development of, and subsequent discussion of, the theory of radar imaging using time-frequency distributions. Consideration is first given to the case where the target is diffuse, i.e. where the backscattered signal has temporal stationarity and a spatially white power spectral density. The complementary situation is also investigated, i.e. where the target is no longer diffuse, but some degree of correlation exists between the time-frequency points. Computer simulations are presented to demonstrate the concepts and theories developed in the thesis. For the proposed radar system to be practically realisable, both the time-frequency distributions and the associated algorithms developed must be able to be implemented in a timely manner. For this reason an optical architecture is proposed, specifically designed to obtain the required time and frequency resolution when using laser radar imaging. The complex light amplitude distributions produced by this architecture have been computer simulated using an optical compiler.
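The ambiguity-function correlation described above can be sketched in discrete time; sign and normalisation conventions vary across texts, so this follows one common choice, with delay in samples and Doppler in cycles per sample:

```python
import cmath

def ambiguity(signal, delay, doppler):
    """Discrete narrowband ambiguity function
    chi(tau, nu) = sum_n s[n] * conj(s[n - tau]) * exp(-j 2 pi nu n),
    evaluated at integer `delay` (samples) and `doppler` (cycles/sample).
    |chi(0, 0)| is the signal energy and upper-bounds every other point."""
    n = len(signal)
    total = 0j
    for i in range(n):
        j = i - delay
        if 0 <= j < n:
            total += (signal[i] * signal[j].conjugate()
                      * cmath.exp(-2j * cmath.pi * doppler * i))
    return total
```

Evaluating this over a grid of (delay, doppler) pairs against a reference signal is exactly the correlation-in-time-and-frequency step the abstract describes; the time-frequency-distribution approach of the thesis removes the need for the explicit reference.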

Relevance: 30.00%

Abstract:

Urban agriculture plays an increasingly vital role in supplying food to urban populations. Changes in Information and Communications Technology (ICT) are already driving widespread change in diverse food-related industries such as retail, hospitality and marketing. It is reasonable to suspect that the fields of ubiquitous technology, urban informatics and social media equally have a lot to offer the evolution of core urban food systems. We use communicative ecology theory to describe emerging innovations in urban food systems according to their technical, discursive and social components. We conclude that social media in particular accentuate fundamental social interconnections normally effaced by conventional industrialised approaches to food production and consumption.

Relevance: 30.00%

Abstract:

While existing multi-biometric Dempster-Shafer theory fusion approaches have demonstrated promising performance, they do not model the uncertainty appropriately, suggesting that further improvement can be achieved. This research seeks to develop a unified framework for multimodal biometric fusion that takes advantage of the uncertainty concept of Dempster-Shafer theory, improving the performance of multi-biometric authentication systems. Modeling uncertainty as a function of the uncertainty factors affecting the recognition performance of the biometric systems helps to address the uncertainty of the data and the confidence of the fusion outcome. A weighted combination of quality measures and classifier performance (Equal Error Rate) is proposed to encode the uncertainty concept and improve the fusion. We also found that quality measures contribute unequally to recognition performance; thus, selecting only significant factors and fusing them with a Dempster-Shafer approach to generate an overall quality score plays an important role in the success of uncertainty modeling. The proposed approach achieved competitive performance (approximately 1% EER) in comparison with other Dempster-Shafer based approaches and other conventional fusion approaches.
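The Dempster-Shafer combination underlying such fusion frameworks can be sketched as follows; the focal elements and mass values in the usage example are illustrative, not the paper's biometric quality scores:

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets. Mass assigned to conflicting (empty-
    intersection) pairs is discarded and the remainder renormalised."""
    raw = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            raw[inter] = raw.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources fully disagree")
    k = 1.0 - conflict
    return {s: w / k for s, w in raw.items()}
```

For two matchers that assign mass 0.7 and 0.6 to "genuine" (with the rest on the full frame of discernment), the combined belief in "genuine" rises to 0.88, illustrating how agreement between modalities sharpens the fusion outcome.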

Relevance: 30.00%

Abstract:

Agriculture's contribution to climate change is controversial, as agriculture is a significant source of greenhouse gases but also a sink of carbon; its economic and technological potential to mitigate climate change has therefore been argued to be noteworthy. However, the social profitability of emission mitigation depends not only on the emission reductions themselves but also on co-effects such as surface-water quality and profit from production. Consequently, a comprehensive valuation of agricultural climate mitigation practices should take these environmental and economic co-effects into account. The objective of this thesis was to develop an integrated economic and ecological model to analyse the social welfare of crop cultivation in Finland under two distinct cultivation technologies: conventional tillage and conservation tillage (no-till). Further, we ask whether it would be privately or socially profitable to allocate some barley cultivation to alternative land uses, such as green set-aside or afforestation, when production costs, greenhouse gas emissions and water quality impacts are taken into account. In the theoretical framework we depict the optimal input use and land allocation choices in terms of environmental impacts and profit from production, and derive the optimal tax and payment policies for climate- and water-quality-friendly land allocation. The empirical application of the model uses Finnish data on production costs, profit structure and environmental impacts. According to our results, the given emission mitigation practices are not self-evidently beneficial for farmers or society; on the contrary, in some cases alternative land allocation could even reduce social welfare, favouring conventional crop cultivation. This is the case on mineral soils such as clay and silt. On organic agricultural soils, the climate mitigation practices considered here, afforestation and green fallow, give more promising results, decreasing climate emissions and nutrient runoff to water systems. 
No-till technology does not seem to benefit climate mitigation, although it does decrease other environmental impacts. Nevertheless, the data on how these mitigation practices affect production and climate are limited and partly contradictory, and more specific experimental studies on the interaction of mitigation practices and the environment are needed. Further study would be important, particularly taking into account area-specific production and environmental factors as well as food security, food safety and socio-economic impacts.
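The welfare comparison can be caricatured in a few lines; the structure (private profit minus valued climate and water-quality externalities) follows the abstract, but every price, emission and runoff figure in the usage example is a placeholder, not one of the thesis's Finnish estimates:

```python
def social_welfare(option, co2_price, runoff_damage):
    """Per-hectare social welfare of a land-use option: private profit
    minus valued externalities. `ghg` is in t CO2-eq/ha (negative for a
    carbon sink such as afforestation); `runoff` in kg nutrients/ha."""
    return (option["profit"]
            - co2_price * option["ghg"]
            - runoff_damage * option["runoff"])

def best_land_use(options, co2_price, runoff_damage):
    """Pick the land allocation with the highest social welfare."""
    return max(options, key=lambda o: social_welfare(o, co2_price, runoff_damage))
```

With placeholder numbers a carbon-sink land use can dominate socially even while being privately unprofitable relative to barley, which is the kind of divergence between private and social profitability the thesis examines.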

Relevance: 30.00%

Abstract:

The density-wave theory of Ramakrishnan and Yussouff is extended to provide a scheme for describing dislocations and other topological defects in crystals. Quantitative calculations are presented for the order-parameter profiles, the atomic configuration, and the free energy of a screw dislocation with Burgers vector b = (a/2, a/2, a/2) in a bcc solid. These calculations are done using a simple parametrization of the direct correlation function and a gradient expansion. It is conventional to express the free energy of the dislocation in a crystal of size R as (λb²/4π)ln(αR/‖b‖), where λ is the shear elastic constant and α is a measure of the core energy. Our results yield for Na the value α ≃ 1.94a/‖c₁″‖^(1/2) (≃ 1.85) at the freezing temperature (371 K) and α ≃ 2.48a/‖c₁″‖^(1/2) at 271 K, where c₁″ is the curvature of the first peak of the direct correlation function c(q). Detailed results for the density distribution in the dislocation, particularly the core region, are also presented. These show that the dislocation core has a columnar character. To our knowledge, this study represents the first calculation of dislocation structure, including the core, within the framework of an order-parameter theory and incorporating thermal effects.
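The quoted free-energy expression is straightforward to evaluate once λ, b, R and α are given; a small sketch (per unit length of dislocation line, in whatever units λ and b are supplied in):

```python
import math

def screw_dislocation_energy(shear_modulus, burgers, radius, alpha):
    """Free energy per unit length of a screw dislocation in a crystal of
    size R: E = (lambda * b^2 / 4 pi) * ln(alpha * R / b), where `alpha`
    encodes the core energy as in the abstract."""
    if radius * alpha <= burgers:
        raise ValueError("crystal size must exceed the core scale b/alpha")
    return (shear_modulus * burgers ** 2 / (4.0 * math.pi)
            * math.log(alpha * radius / burgers))
```

The logarithmic growth with R is the familiar elastic result; all of the microscopic physics computed in the paper is packed into α.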