911 results for System modelling


Relevance:

60.00%

Publisher:

Abstract:

Pumping processes requiring a wide range of flow rates are often equipped with parallel-connected centrifugal pumps. In parallel pumping systems, variable speed control allows the required process output to be delivered with a varying number of operated pump units and selected rotational speed references. However, optimizing parallel-connected, rotational-speed-controlled pump units often requires adaptive modelling of both the parallel pump characteristics and the surrounding system under varying operating conditions. In typical parallel pumping applications, such as waste water treatment and various cooling and water delivery tasks, the information available for system modelling can be limited, and the lack of real-time operation point monitoring often limits accurate energy efficiency optimization. Hence, easily implementable control strategies that can be adopted with minimal system data are needed. This doctoral thesis concentrates on methods that allow the energy-efficient use of variable-speed-controlled parallel pumps in systems where each parallel pump unit consists of a centrifugal pump, an electric motor, and a frequency converter. Firstly, the operating conditions suitable for variable-speed-controlled parallel pumps are studied. Secondly, methods for determining the output of each parallel pump unit using characteristic-curve-based operation point estimation in the frequency converter are discussed. Thirdly, the implementation of a control strategy based on real-time pump operation point estimation and sub-optimization of each parallel pump unit is studied. The findings of the thesis support the idea that the energy efficiency of pumping can be increased without installing new, more efficient components, simply by adopting suitable control strategies. An easily implementable and adaptive control strategy for variable-speed-controlled parallel pumping systems can be created by utilizing the pump operation point estimation available in modern frequency converters. Hence, additional real-time flow metering, start-up measurements, and a detailed system model are unnecessary, and the pumping task can be fulfilled by determining, for each parallel pump unit, a speed reference that promotes the energy-efficient operation of the pumping system.
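As an illustration of the characteristic-curve-based operation point estimation mentioned above, the following Python sketch recovers flow and head from the quantities a frequency converter already measures (rotational speed and shaft power) by scaling nominal pump curves with the affinity laws. The curve coefficients are invented placeholders, not data from the thesis:

```python
import numpy as np

# Nominal pump curves at rated speed N0; coefficients are illustrative
# placeholders (a real pump would use manufacturer curve data).
N0 = 1450.0                           # rated speed [rpm]
A_P, B_P, C_P = 0.5, 0.09, -0.0005    # shaft power P(Q) = a + b*Q + c*Q^2 [kW], Q in l/s
A_H, B_H, C_H = 32.0, 0.05, -0.01     # head H(Q) [m]

def estimate_operating_point(n_rpm, p_shaft_kw):
    """Estimate flow and head from speed and shaft power (QP-curve method).

    The frequency converter already knows rotational speed and shaft power,
    so the nominal power curve is scaled to the current speed with the
    affinity laws (Q ~ n, P ~ n^3) and inverted to recover the flow rate.
    """
    k = n_rpm / N0
    # P(Q) at speed n: a*k^3 + b*k^2*Q + c*k*Q^2, solved for Q:
    roots = np.roots([C_P * k, B_P * k**2, A_P * k**3 - p_shaft_kw])
    valid = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real >= 0.0]
    q = min(valid)                 # root on the rising branch of the power curve
    h = k**2 * (A_H + B_H * (q / k) + C_H * (q / k) ** 2)  # affinity-scaled head
    return q, h

q, h = estimate_operating_point(n_rpm=1200.0, p_shaft_kw=1.2)
print(f"estimated flow {q:.1f} l/s, head {h:.1f} m")
```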

Relevance:

60.00%

Publisher:

Abstract:

The 21st century has brought new challenges for forest management at a time when globalization in world trade is increasing and global climate change is becoming increasingly apparent. In addition to providing goods and services such as food, feed, timber and biofuels, forest ecosystems are a large store of terrestrial carbon and account for a major part of the carbon exchange between the atmosphere and the land surface. Depending on the stage of the ecosystem and/or the management regime, forests can be either sinks or sources of carbon. At the global scale, rapid economic development and a growing world population have raised much concern over the use of natural resources, especially forest resources. The challenging question is how the global demands for forest commodities can be satisfied in an increasingly globalised economy, and where they could potentially be produced. For this purpose, wood demand estimates need to be integrated into a framework that can adequately handle the competition for land between major land-use options such as residential or agricultural land. This thesis is organised around the integration of a simulation of forest change driven by wood extraction into an existing framework for global land-use modelling called LandSHIFT. Accordingly, the following focal points for research have been identified: (1) a review of existing global-scale economic forest sector models, (2) simulation of global wood production under selected scenarios, (3) simulation of global vegetation carbon yields, and (4) the implementation of a land-use allocation procedure to simulate the impact of wood extraction on forest land cover. Modelling the spatial dynamics of forests on the global scale requires two important inputs: (1) simulated long-term wood demand data to determine future roundwood harvests in each country, and (2) the changes in the spatial distribution of woody biomass stocks to determine how much of the resource is available to satisfy the simulated wood demands. First, three global timber market models are reviewed and compared in order to select a suitable economic model to generate wood demand scenario data for the forest sector in LandSHIFT. The comparison indicates that the Global Forest Products Model (GFPM) is most suitable for obtaining projections of future roundwood harvests for further study with the LandSHIFT forest sector. Accordingly, the GFPM is adapted and applied to simulate wood demands for the global forestry sector until 2050, conditional on selected scenarios from the Millennium Ecosystem Assessment and the Global Environmental Outlook. Secondly, the Lund-Potsdam-Jena (LPJ) dynamic global vegetation model is used to simulate the change in potential vegetation carbon stocks for the forested locations in LandSHIFT. The LPJ data is used in combination with spatially explicit forest inventory data on aboveground biomass to allocate the demands for raw forest products and to identify locations of deforestation. Using these results as input, a methodology to simulate the spatial dynamics of forests driven by wood extraction is developed within the LandSHIFT framework. The land-use allocation procedure specified in the module translates the country-level demands for forest products into woody biomass requirements for forest areas and allocates these on a five arc-minute grid. In its first version, the model assumes present-day conditions throughout the study period and does not explicitly address forest age structure. Although the module is at a very preliminary stage of development, it already captures the effects of important drivers of land-use change such as cropland and urban expansion. As a first plausibility test, the module's performance is examined under three forest management scenarios; it responds to changing inputs in an expected and consistent manner. The entire methodology is applied in an exemplary scenario analysis for India. Several future research priorities remain, particularly the incorporation of plantation establishment, age-structure dynamics, and the implementation of a new technology-change factor in the GFPM to allow raw wood products (especially fuelwood) to be substituted by non-wood products.
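As an illustration of the allocation step described above, the following Python sketch distributes a country-level roundwood demand over grid cells, harvesting the most suitable cells first. The suitability proxy and cell data are invented placeholders, not LandSHIFT's actual preference functions:

```python
import numpy as np

rng = np.random.default_rng(42)
n_cells = 1000
biomass = rng.uniform(50, 300, n_cells)   # harvestable biomass per cell [t]
suitability = rng.random(n_cells)         # accessibility / slope / distance proxy

def allocate_harvest(demand_t, biomass, suitability):
    """Greedy allocation: harvest cells in descending suitability until
    the national demand is met; returns per-cell removals [t]."""
    removals = np.zeros_like(biomass)
    for i in np.argsort(-suitability):
        if demand_t <= 0:
            break
        take = min(biomass[i], demand_t)
        removals[i] = take
        demand_t -= take
    if demand_t > 0:
        print(f"unmet demand: {demand_t:.0f} t")  # resource scarcity signal
    return removals

removals = allocate_harvest(demand_t=50_000, biomass=biomass, suitability=suitability)
print(f"{(removals > 0).sum()} cells harvested")
```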

Relevance:

60.00%

Publisher:

Abstract:

Land use has become a force of global importance, considering that 34% of the Earth's ice-free surface was covered by croplands or pastures in 2000. The expected increase in the global human population, together with imminent climate change and the associated search for energy sources other than fossil fuels, can, through land-use and land-cover changes (LUCC), increase the pressure on nature's resources, further degrade ecosystem services, and disrupt other planetary systems of key importance to humanity. This thesis presents four modeling studies on the interplay between LUCC, increased production of biofuels, and climate change in four selected world regions. In the first case study, two new crop types (sugarcane and jatropha) are parameterized in the LPJmL (LPJ for managed Lands) dynamic global vegetation model to calculate their potential productivity. Country-wide spatial variation in the yields of sugarcane and jatropha translates into substantially different land requirements to meet the 2015 biofuel production targets in Brazil and India, depending on the location of the plantations. In particular, the average land requirements for jatropha in India are considerably higher than previously estimated. These findings indicate that crop zoning is important to avoid excessive LUCC. In the second case study, the LandSHIFT model of land-use and land-cover change is combined with life cycle assessments to investigate the occurrence and extent of biofuel-driven indirect land-use changes (ILUC) in Brazil by 2020. The results show that Brazilian biofuels can indeed cause considerable ILUC, especially by pushing the rangeland frontier into the Amazonian forests. The carbon debt caused by such ILUC would mean no net carbon savings (from using plant-based ethanol and biodiesel instead of fossil fuels) for 44 years in the case of sugarcane ethanol and 246 years for soybean biodiesel. The intensification of livestock grazing could avoid such ILUC; we argue that it should be supported by the Brazilian biofuel sector, based on the sector's own interest in minimizing carbon emissions. The third study develops a new method for crop allocation in LandSHIFT that accounts for the occurrence and capacity of specific infrastructure units. The method is applied in a first assessment of the potential availability of land for biogas production in Germany. The results indicate that Germany has enough land to supply virtually all (90 to 98%) of its current biogas plant capacity with cultivated feedstocks alone. Biogas plants located in southern and southwestern Germany might face more difficulties, and those in northern and northeastern Germany fewer, in meeting their capacities with cultivated feedstocks, considering that the feedstock transport distance to plants is a crucial issue for biogas production. In the fourth study, an adapted version of LandSHIFT is used to assess the impacts of contrasting scenarios of climate change and conservation targets on land use in the Brazilian Amazon. Model results show that severe climate change in some regions by 2050 can shift the deforestation frontier to areas that would experience low levels of human intervention under mild climate change (such as the western Amazon forests or parts of the Cerrado savannas). Halting deforestation of the Amazon and of the Brazilian Cerrado would require either a reduction in meat production or an intensification of livestock grazing in the region. These findings point to the need for an integrated, multidisciplinary plan for adaptation to climate change in the Amazon. The overall conclusions of this thesis are that (i) biofuels must be analyzed and planned carefully in order to effectively reduce carbon emissions; (ii) climate change can have considerable impacts on the location and extent of LUCC; and (iii) intensification of livestock grazing represents a promising avenue for minimizing the impacts of future land-use and land-cover changes in Brazil.
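The payback figures quoted above follow from a simple ratio of the one-off ILUC carbon debt to the annual carbon savings from fuel substitution. A toy Python illustration, with placeholder numbers chosen only to reproduce the 44-year order of magnitude (the thesis derives the actual values from its ILUC and life-cycle results):

```python
def payback_years(iluc_debt_tC_per_ha, annual_saving_tC_per_ha):
    """Years until cumulative fossil-fuel displacement repays the one-off
    carbon debt from indirect land-use change."""
    return iluc_debt_tC_per_ha / annual_saving_tC_per_ha

# Placeholder inputs, not the thesis's numbers:
print(payback_years(iluc_debt_tC_per_ha=110.0, annual_saving_tC_per_ha=2.5))  # 44.0
```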

Relevance:

60.00%

Publisher:

Abstract:

Today, the Internet is an important tool in education. The Java language makes courses more interactive, faster to execute, and easier to port. The interest generated in this field calls for adequate course-authoring tools, which must be able to express all the possibilities offered by teaching over the Internet. Object-oriented programming emerged in the 1960s, with a language that already incorporated many of the ideas later introduced by the C++ programming language. This research develops several advanced simulation tools that simplify the generation of educational courses for the Internet. The language used is an extension of IBM's old Continuous System Modelling Program (CSMP). The new language is called OOCSMP, because it adds object-oriented extensions to CSMP. These constructs make it easier to simulate complex systems based on the mutual interaction of many similar agents. A compiler written in C++, C-OOL (a Compiler for the OOCSMP Language), is built; it can generate C++ or Java code and/or HTML pages. C-OOL generates a user interface that is fully configurable through compilation options. The interface supports interactive, visual simulation and exploration of the problem, being able to answer "what if...?" questions, which was not possible in the earlier CSMP system. C-OOL can also compile legacy CSMP models while retaining the advantages described above, and is compatible with as many Internet browsers as possible. To validate the language, several Internet courses are generated automatically. The OOCSMP language, the C-OOL compiler and the generated courses are described in detail in this work, together with numerous examples of the use and application of the language and the compiler.
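To give a flavour of the continuous-simulation style that CSMP-family languages express, here is a minimal Python sketch of the same pattern: an integrator loop advancing a damped oscillator over time. It only mimics the structure of such models (timer constants, initial conditions, a derivative section, integration) and is not actual OOCSMP syntax:

```python
# CSMP-style timer: integration step and finish time
DT, FINTIM = 0.01, 10.0
K, B, M = 4.0, 0.4, 1.0          # spring, damping, mass (illustrative values)

x, v = 1.0, 0.0                  # initial conditions (INCON in CSMP terms)
for step in range(int(FINTIM / DT)):
    a = (-K * x - B * v) / M     # derivative section: the model dynamics
    v += a * DT                  # INTGRL-style rectangular integration
    x += v * DT
print(f"x(FINTIM) = {x:.4f}")
```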

Relevance:

60.00%

Publisher:

Abstract:

The land/sea warming contrast is a phenomenon of both equilibrium and transient simulations of climate change: large areas of the land surface at most latitudes undergo temperature changes whose amplitude is greater than that of the surrounding oceans. Using idealised GCM experiments with perturbed SSTs, we show that the land/sea contrast in equilibrium simulations is associated with local feedbacks and the hydrological cycle over land, rather than with externally imposed radiative forcing; this mechanism also explains a large component of the land/sea contrast in transient simulations. We propose a conceptual model with three elements: (1) there is a spatially variable level in the lower troposphere at which the temperature change is the same over land and sea; (2) the dependence of the lapse rate on moisture and temperature causes lapse rates over land and sea to change differently upon warming, and hence produces a surface land/sea temperature contrast; (3) moisture convergence over land predominantly takes place at levels significantly colder than the surface, so wherever the moisture supply over land is limited, the increase of evaporation over land upon warming is limited, reducing the relative humidity in the boundary layer over land and further enhancing the land/sea contrast. The non-linearity of the Clausius–Clapeyron relationship of saturation specific humidity to temperature is critical in (2) and (3). We examine the sensitivity of the land/sea contrast to model representations of different physical processes using a large ensemble of climate model integrations with perturbed parameters, and find that it is most sensitive to the representation of large-scale cloud and stomatal closure. We discuss our results in the context of high-resolution and Earth-system modelling of climate change.
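A small Python sketch of the Clausius–Clapeyron non-linearity invoked in points (2) and (3): equal warming increments add more saturation humidity at warm temperatures than at cold ones. The constants are standard textbook approximations, used here purely for illustration:

```python
import numpy as np

L_V = 2.5e6              # latent heat of vaporisation [J/kg]
R_V = 461.5              # gas constant for water vapour [J/kg/K]
E0, T0 = 611.0, 273.15   # reference saturation vapour pressure [Pa] at T0 [K]

def q_sat(T, p=1.0e5):
    """Saturation specific humidity from Clausius-Clapeyron (approximate)."""
    e_s = E0 * np.exp(L_V / R_V * (1.0 / T0 - 1.0 / T))
    return 0.622 * e_s / p

for T in (280.0, 290.0, 300.0):
    dq = q_sat(T + 1.0) - q_sat(T)
    print(f"T={T:.0f} K: dq_sat per K = {dq*1e3:.3f} g/kg")  # grows with T
```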

Relevance:

60.00%

Publisher:

Abstract:

FAMOUS fills an important role in the hierarchy of climate models: it explicitly resolves atmospheric and oceanic dynamics yet is sufficiently computationally efficient that either very long simulations or large ensembles are possible. FAMOUS is a reduced-resolution, and hence faster-running, version of the Hadley Centre climate model HadCM3. An improved set of carbon cycle parameters for this model has been found using a perturbed-physics ensemble technique, an important step towards building the "Earth System" modelling capability of FAMOUS. Two separate 100-member perturbed-parameter ensembles were performed, one for the land surface and one for the ocean. The land surface scheme was tested against present-day and past representations of vegetation, and the ocean ensemble was tested against observations of nitrate. An advantage of using a relatively fast climate model is that a large number of simulations can be run, so the model parameter space (a large source of climate model uncertainty) can be sampled more thoroughly. This has the associated benefit of allowing the sensitivity of model results to changes in each parameter to be assessed. The climatologies of surface and tropospheric air temperature and precipitation are improved relative to previous versions of FAMOUS. The improved representation of upper-atmosphere temperatures is driven by improved ozone concentrations near the tropopause and better upper-level winds.
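To illustrate how such a perturbed-parameter ensemble can be constructed, here is a minimal Python sketch using Latin hypercube sampling, a common way to cover parameter ranges evenly with a limited number of members. The parameter names and ranges are invented placeholders, not the actual FAMOUS carbon-cycle parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
param_ranges = {"q10_soil_resp": (1.5, 4.0),    # hypothetical parameters
                "leaf_nitrogen": (0.02, 0.08),
                "ocean_mixing":  (0.1, 2.0)}
n_members = 100

def latin_hypercube(n, ranges, rng):
    """One stratified sample per parameter per member, shuffled across members."""
    members = {}
    for name, (lo, hi) in ranges.items():
        strata = (np.arange(n) + rng.random(n)) / n  # one point per stratum
        rng.shuffle(strata)
        members[name] = lo + strata * (hi - lo)
    return members

ensemble = latin_hypercube(n_members, param_ranges, rng)
print({k: v[:3].round(3) for k, v in ensemble.items()})  # first 3 members
```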

Relevance:

60.00%

Publisher:

Abstract:

Aeolian dust modelling has improved significantly over the last ten years, and many institutions now routinely model dust uplift, transport and deposition in general circulation models (GCMs). However, the representation of dust in GCMs varies considerably between modelling communities, due to differences in the uplift schemes employed and in the representation of the global circulation that subsequently leads to dust deflation. In this study, two different uplift schemes are incorporated in the same GCM. This approach enables a clearer comparison of the dust uplift schemes themselves, without the added complexity of several different transport and deposition models. The global annual mean dust aerosol optical depths (at 550 nm) using the two uplift schemes were found to be 0.014 and 0.023, both lying within the estimates from the AeroCom project. However, the models also have appreciably different representations of the dust size distribution adjacent to the West African coast and very different deposition at various sites throughout the globe. The different dust uplift schemes were also capable of influencing the modelled circulation, surface air temperature, and precipitation despite the use of prescribed sea surface temperatures. This has important implications for the use of dust models in AMIP-style (Atmospheric Model Intercomparison Project) simulations and in Earth-system modelling.
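As a sketch of the threshold behaviour shared by many dust uplift schemes, the following Python snippet implements a saltation-flux form in the spirit of White (1979), as used in Marticorena-Bergametti-type schemes: no emission below a threshold friction velocity, and steep (cubic) growth above it. The tuning constant and threshold are illustrative placeholders:

```python
import numpy as np

RHO_AIR = 1.23    # air density [kg/m^3]
G = 9.81          # gravity [m/s^2]
U_STAR_T = 0.35   # threshold friction velocity [m/s] (placeholder value)

def horizontal_flux(u_star, c=1.0):
    """Saltation (horizontal) flux ~ u*^3 (1 - u*t^2/u*^2)(1 + u*t/u*)."""
    u_star = np.asarray(u_star, dtype=float)
    active = u_star > U_STAR_T                     # emission only above threshold
    ratio = np.where(active, U_STAR_T / u_star, 1.0)
    flux = c * RHO_AIR / G * u_star**3 * (1 - ratio**2) * (1 + ratio)
    return np.where(active, flux, 0.0)

print(horizontal_flux([0.2, 0.4, 0.6]).round(4))   # zero below threshold
```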

Relevance:

60.00%

Publisher:

Abstract:

The goal of the present research is to define a Semantic Web framework for precedent modelling, using knowledge extracted from text, metadata, and rules, while maintaining a strong text-to-knowledge morphism between legal text and legal concepts, in order to fill the gap between a legal document and its semantics. The framework is composed of four models that make use of standard languages from the Semantic Web stack of technologies: a document metadata structure, modelling the main parts of a judgement and creating a bridge between the text and its semantic annotations of legal concepts; a legal core ontology, modelling abstract legal concepts and institutions contained in a rule of law; a legal domain ontology, modelling the main legal concepts in a specific domain covered by the case law; and an argumentation system, modelling the structure of argumentation. The input to the framework includes metadata associated with judicial concepts and an ontology library representing the structure of case law. The research builds on previous efforts of the community in legal knowledge representation and rule interchange for applications in the legal domain, applying the theory to a set of real legal documents and pushing OWL axiom definitions as far as possible so that they provide a semantically powerful representation of the legal document and a solid ground for an argumentation system using a defeasible subset of predicate logic. It appears that some new features of OWL 2 unlock useful reasoning features for legal knowledge, especially when combined with defeasible rules and argumentation schemes. The main task is thus to formalize the legal concepts and argumentation patterns contained in a judgement, with the following requirement: to check, validate and reuse the discourse of a judge, and the argumentation it contains, as expressed in the judicial text.
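A minimal Python sketch (using rdflib) of the text-to-knowledge morphism described above: a judgement paragraph is annotated with a legal-domain concept and later recovered by query. The namespaces, class and property names are invented placeholders, not the framework's actual vocabularies:

```python
from rdflib import Graph, Literal, Namespace, RDF, URIRef

# Hypothetical namespaces standing in for the document and ontology models.
DOC = Namespace("http://example.org/judgement/")
ONT = Namespace("http://example.org/legal-ontology#")

g = Graph()
fragment = URIRef(DOC["case-42#para-17"])
g.add((fragment, RDF.type, ONT.JudgementParagraph))
g.add((fragment, ONT.expressesConcept, ONT.DutyOfCare))          # semantic annotation
g.add((fragment, ONT.hasText, Literal("The defendant owed ...")))

# The graph can then be queried to recover every paragraph expressing a concept.
q = """SELECT ?p WHERE {
         ?p <http://example.org/legal-ontology#expressesConcept>
            <http://example.org/legal-ontology#DutyOfCare> }"""
for row in g.query(q):
    print(row.p)
```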

Relevance:

60.00%

Publisher:

Abstract:

The main objective of this project is to give the telecommunications engineer an overview of the techniques used in modelling the auditory system. The auditory system is modelled with the following aims: (a) to interpret direct measurements, (b) to unify the understanding of different phenomena, (c) to guide amplification strategies that compensate for hearing loss, and (d) to obtain experimentally testable predictions of behaviour at different levels of complexity. This work briefly explains the different techniques used to model the parts of the auditory system, from electroacoustic analogies, biophysical models and binaural models to the implementation of auditory filters through signal processing. We conclude that modelling by electroacoustic analogies allows fast implementation and easy understanding, but has certain limitations. Simulations based on numerical analysis are accurate and very useful for both the middle and the inner ear. Signal processing is the most complete and most widely used approach, since it can model the outer and middle ear and allows the implementation of cochlear filters that are very precise and faithful to reality, embedded in perceptual models.
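As an illustration of the signal-processing approach, the following Python sketch implements one channel of a gammatone cochlear filter, a standard auditory-filter model. The abstract does not name a specific filter, so this is one representative choice; the formulation uses the common fourth-order impulse response with an ERB-scaled bandwidth (Glasberg-Moore constants):

```python
import numpy as np

FS = 16_000.0  # sampling rate [Hz]

def erb(fc):
    """Equivalent rectangular bandwidth of the auditory filter at fc [Hz]."""
    return 24.7 * (4.37 * fc / 1000.0 + 1.0)

def gammatone_ir(fc, duration=0.05, order=4):
    """Impulse response t^(n-1) exp(-2 pi b ERB(fc) t) cos(2 pi fc t)."""
    t = np.arange(0.0, duration, 1.0 / FS)
    b = 1.019  # bandwidth correction factor for a 4th-order filter
    g = t ** (order - 1) * np.exp(-2 * np.pi * b * erb(fc) * t) * np.cos(2 * np.pi * fc * t)
    return g / np.max(np.abs(g))

# Filter a signal by convolving it with the impulse response of one channel.
signal = np.random.default_rng(1).standard_normal(int(0.1 * FS))
out = np.convolve(signal, gammatone_ir(fc=1000.0), mode="same")
print(out.shape)
```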

Relevance:

60.00%

Publisher:

Abstract:

Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. To produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of such platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies. In this Ph.D. thesis we explore different aspects of the quantization problem and present new methodologies for each of them. Techniques based on extensions of intervals have produced accurate models of signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art methodology based on statistical Modified Affine Arithmetic (MAA) in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the statistical moments of the system from these partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by as little as 0.04% from the simulation-based reference values. A known drawback of techniques based on extensions of intervals is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise injection technique that groups the signals in the system, introduces the noise terms of each group independently, and then combines the results. In this way, the number of noise sources present at any given time is controlled and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results caused by the loss of correlation between noise terms, in order to keep the results as accurate as possible. This thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. We present two novel techniques that reduce the execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method exploits the fact that, although a given confidence interval must be guaranteed for the final results of the search, more relaxed confidence levels, and hence considerably fewer simulation samples, suffice in the initial stages of the process, when the search is still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be accelerated by factors of up to ×240 for small and medium-sized problems. Finally, this book introduces HOPLITE, an automated, flexible and modular framework for quantization that implements the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new techniques for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its execution. We also show, through a simple example, how new extensions connect to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
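A toy Python sketch of the kind of greedy, Monte-Carlo-driven word-length search that the interpolative and incremental methods above accelerate: starting from generous word-lengths, repeatedly shave one fractional bit from a signal as long as the Monte-Carlo error estimate stays within budget. The system under test (a short dot product) and the error budget are invented placeholders:

```python
import numpy as np

rng = np.random.default_rng(7)
COEFFS = np.array([0.75, -0.33, 0.5, 0.21])   # placeholder fixed-point system
MAX_ERR = 1e-3                                # placeholder error budget

def quantize(x, frac_bits):
    return np.round(x * 2.0**frac_bits) / 2.0**frac_bits

def mc_error(word_lengths, n_samples=2000):
    """Monte-Carlo estimate of worst output error for per-signal fractional bits."""
    x = rng.uniform(-1, 1, (n_samples, COEFFS.size))
    ref = x @ COEFFS
    xq = np.stack([quantize(x[:, i], w) for i, w in enumerate(word_lengths)], axis=1)
    return np.max(np.abs(xq @ COEFFS - ref))

wl = [16] * COEFFS.size                       # generous starting point
improved = True
while improved:
    improved = False
    for i in range(len(wl)):                  # try shaving one bit per signal
        trial = wl.copy()
        trial[i] -= 1
        if mc_error(trial) <= MAX_ERR:
            wl = trial
            improved = True
print("word-lengths:", wl)
```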

Relevance:

60.00%

Publisher:

Abstract:

This thesis is concerned with organisational problem solving. The work reflects the complexities of organisational problem situations and the eclectic approach that has been necessary to gain an understanding of the processes involved. The thesis is structured into three main parts. Part I describes the author's understanding of problems and suitable approaches. Chapter 2 identifies the Transcendental Realist (TR) view of science (Harre 1970, Bhaskar 1975) as the best general framework for identifying suitable approaches to complex organisational problems. Chapter 3 discusses the relationship between Checkland's methodology (1972) and TR. The need to generate iconic (explanatory) models of the problem situation is identified, and the ability of viable system modelling to supplement the modelling stage of the methodology is explored in Chapter 4. Chapter 5 builds further on the methodology to produce an original iconic model of the methodological process. The model characterises the mechanisms of organisational problem situations as well as desirable procedural steps. The Weltanschauungen (W's), or "world views", of key actors are recognised as central to the mechanisms involved. Part II describes the experience which prompted the theoretical investigation. Chapter 6 describes the first year of the project; the success of this stage is attributed to the predominance of a single W. Chapter 7 describes the changes in the organisation which made the remaining phase of the project difficult. These difficulties are attributed to a failure to recognise the importance of differing W's. Part III revisits the theoretical and organisational issues. Chapter 8 identifies a range of techniques embodying W's which are compatible with the framework of Part I and which might usefully supplement it. Chapter 9 characterises possible W's in the sponsoring organisation. Throughout the work, an attempt is made to reflect the process as well as the product of the author's learning.

Relevance:

60.00%

Publisher:

Abstract:

Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May 2016.

Relevance:

60.00%

Publisher:

Abstract:

This paper presents the first multi-vector energy analysis for the interconnected energy systems of Great Britain (GB) and Ireland. Both systems share a high penetration of wind power but have significantly different security-of-supply outlooks. Ireland is heavily dependent on gas imports from GB, which makes the interconnected aspect of the methodology significant in addition to the gas and power interactions analysed. A fully realistic unit commitment and economic dispatch model coupled to an energy flow model of the gas supply network is developed. Extreme weather events driving increased domestic gas demand and low wind power output were used to increase stress on the gas supply network. Decreased wind profiles had a larger impact on system security than high domestic gas demand. The GB energy system proved resilient during high-demand periods, although gas network stress limited the ramping capability of localised generating units. Additionally, gas system entry node congestion in the Irish system was shown to produce a 40% increase in short-run costs for generators. Gas storage was shown to reduce the impact of high-demand-driven congestion, delivering a 14% reduction in total generation costs over the period studied and reducing electricity imports from GB, thus contributing significantly to security of supply.
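As a sketch of the economic-dispatch step inside a unit commitment and dispatch model of this kind, the following Python snippet solves a single-period dispatch as a linear program: minimise generation cost subject to a power balance and unit limits. The costs, capacities and demand are invented placeholders, not GB or Irish system data:

```python
import numpy as np
from scipy.optimize import linprog

cost = np.array([20.0, 45.0, 80.0])      # marginal cost [eur/MWh]: wind, CCGT, peaker
p_max = np.array([300.0, 400.0, 200.0])  # available capacity [MW]
demand = 650.0                            # system demand [MW]

res = linprog(c=cost,
              A_eq=np.ones((1, 3)), b_eq=[demand],   # power balance constraint
              bounds=list(zip([0.0] * 3, p_max)),    # per-unit output limits
              method="highs")
print("dispatch [MW]:", res.x.round(1), "cost [eur/h]:", round(res.fun, 1))
```

In a full model this sits inside the unit commitment loop, with on/off decisions, ramping limits and, as here, coupling constraints to the gas network model.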

Relevance:

60.00%

Publisher:

Abstract:

The wide adoption of the Internet Protocol (IP) as the de facto protocol for most communication networks has established a need for IP-capable data link layer solutions for machine-to-machine (M2M) and Internet of Things (IoT) networks. However, the wireless networks used for M2M and IoT applications usually lack the resources commonly associated with modern wireless communication networks. Existing IP-capable data link layer solutions for wireless IoT networks provide the necessary overhead-minimising and frame-optimising features, but are often built to be compatible only with IPv6 and specific radio platforms. The objective of this thesis is to design an IPv4-compatible data link layer for Netcontrol Oy's narrowband half-duplex packet data radio system. Based on extensive literature research, system modelling and solution concept testing, this thesis proposes using the tunslip protocol as the basis for the system's data link layer protocol development. In addition to the functionality of tunslip, this thesis discusses the additional network, routing, compression, security and collision avoidance changes required in the radio platform for it to be IP-compatible while still maintaining its point-to-multipoint and multi-hop network characteristics. The data link layer design consists of the radio application, a dynamic Maximum Transmission Unit (MTU) optimisation daemon, and the tunslip interface. The proposed design uses tunslip to create an IP-capable data link protocol interface. The radio application receives data from tunslip, compresses the packets, and uses the IP addressing information for radio network addressing and routing before forwarding the message to the radio network. The dynamic MTU size optimisation daemon adjusts the maximum MTU size of the tunslip interface according to a link quality assessment calculated from radio network diagnostic data received from the radio application. To determine the usability of tunslip as the basis for the data link layer protocol, the tunslip interface is tested with both IEEE 802.15.4 radios and packet data radios. The test cases measure the usability of the radio network for User Datagram Protocol (UDP) based applications without applying any header or content compression. The test results for the packet data radios show that the typical success rate for packet reception over a single-hop link is above 99%, with a round-trip delay of 0.315 s for 63-byte packets.
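tunslip moves IP packets over a serial link into a tun interface using SLIP framing (RFC 1055). Below is a minimal Python sketch of the SLIP encoder and decoder that such a data link layer builds on; the radio application, compression and routing described above sit on top of frames like these:

```python
# SLIP special bytes (RFC 1055)
END, ESC, ESC_END, ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD

def slip_encode(packet: bytes) -> bytes:
    out = bytearray([END])                 # leading END flushes line noise
    for b in packet:
        if b == END:
            out += bytes([ESC, ESC_END])   # escape in-band END bytes
        elif b == ESC:
            out += bytes([ESC, ESC_ESC])   # escape the escape byte itself
        else:
            out.append(b)
    out.append(END)                        # frame delimiter
    return bytes(out)

def slip_decode(frame: bytes) -> bytes:
    out, esc = bytearray(), False
    for b in frame:
        if esc:
            out.append(END if b == ESC_END else ESC)
            esc = False
        elif b == ESC:
            esc = True
        elif b != END:                     # END bytes only delimit frames
            out.append(b)
    return bytes(out)

pkt = bytes([0x45, 0x00, 0xC0, 0xDB])      # toy IP header fragment
assert slip_decode(slip_encode(pkt)) == pkt
```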

Relevance:

60.00%

Publisher:

Abstract:

This Master's thesis examines a fully renewable energy system for the region of South Karelia, already the most renewable region in Finland. The thesis covers energy consumption in the public sector, transport and buildings, while industrial energy use is left outside the scope. The current South Karelian energy system is reviewed and used to construct a reference scenario, and future scenarios are built for 2030 and 2050. In the future scenarios, the change centres on the electrification of the system and the integration of renewable generation. Electrification increases electricity consumption, which is to be covered by renewable generation, mainly wind and solar power. The transport sector is limited to road transport, and its transformation will be the most challenging and time-consuming; it is pursued through regional production of transport fuels and through electric vehicles. A renewable energy system requires flexibility in both generation and demand, as well as intelligence in the system. The thesis also examines the costs and employment effects of the system.