Abstract:
Provenance plays a pivotal role in tracing the origin of something and in determining how and why it occurred. With the emergence of the cloud and the benefits it encompasses, there has been a rapid proliferation of services adopted by commercial and government sectors. However, trust and security concerns for such services are on an unprecedented scale. Currently, these services expose very little of their internal workings to their customers; this can cause accountability and compliance issues, especially in the event of a fault or error, where customers and providers are left to point fingers at each other. Provenance-based traceability provides a means to address part of this problem by capturing and querying events that occurred in the past to understand how and why they took place. However, due to the complexity of the cloud infrastructure, current provenance models lack the expressiveness required to describe the inner workings of a cloud service. For a complete solution, a provenance-aware policy language is also required so that operators and users can define policies for compliance purposes. Current policy standards do not cater for such requirements. To address these issues, in this paper we propose a provenance (traceability) model, cProv, and a provenance-aware policy language, cProvl, to capture traceability data and to express policies for validation against the model. For the implementation, we have extended the XACML 3.0 architecture to support provenance and provided a translator that converts cProvl policies and requests into the XACML format.
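To make the translation step concrete, here is a minimal Python sketch, assuming a hypothetical cProvl policy expressed as a dictionary; the element names and URN identifiers follow the XACML 3.0 core schema, but the cProvl attribute vocabulary is invented for illustration.

```python
# Hypothetical sketch of the cProvl -> XACML translation step. The cProvl
# policy structure below is invented for illustration; only the XACML
# element names and URN identifiers follow the XACML 3.0 core schema.
import xml.etree.ElementTree as ET

cprovl_policy = {
    "id": "trace-vm-creation",
    "effect": "Permit",
    "attributes": [
        # (XACML category, attribute id, value) -- the ids are illustrative
        ("urn:oasis:names:tc:xacml:1.0:subject-category:access-subject",
         "agent", "cloud-operator"),
        ("urn:oasis:names:tc:xacml:3.0:attribute-category:action",
         "activity", "vm:create"),
    ],
}

def to_xacml(p):
    policy = ET.Element("Policy", {
        "PolicyId": p["id"],
        "RuleCombiningAlgId": "urn:oasis:names:tc:xacml:3.0:"
                              "rule-combining-algorithm:deny-overrides"})
    rule = ET.SubElement(policy, "Rule",
                         {"RuleId": p["id"] + "-rule", "Effect": p["effect"]})
    target = ET.SubElement(rule, "Target")
    for category, attr_id, value in p["attributes"]:
        match = ET.SubElement(
            ET.SubElement(ET.SubElement(target, "AnyOf"), "AllOf"), "Match",
            {"MatchId": "urn:oasis:names:tc:xacml:1.0:function:string-equal"})
        ET.SubElement(match, "AttributeValue", {
            "DataType": "http://www.w3.org/2001/XMLSchema#string"}).text = value
        ET.SubElement(match, "AttributeDesignator", {
            "Category": category, "AttributeId": attr_id,
            "DataType": "http://www.w3.org/2001/XMLSchema#string",
            "MustBePresent": "true"})
    return ET.tostring(policy, encoding="unicode")

print(to_xacml(cprovl_policy))
```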
Abstract:
Erosion processes resulting from the flow of fluids (gas-solid or liquid-solid) are encountered in nature and in many industrial processes. The common feature of these erosion processes is the interaction of the fluid (particle) with its boundary, resulting in the loss of material from the surface. This type of erosion is detrimental to the equipment used in pneumatic conveying systems. The puncture of pneumatic conveyor bends in industry causes several problems, among them: (1) escape of the conveyed product, causing health and dust hazards; (2) repairing and cleaning up after punctures necessitates shutting down conveyors, which affects the operation of the plant, thus reducing profitability. The most common process failure in pneumatic conveying systems occurs when pipe sections at the bends wear away and puncture. The reason for this is that particles of varying speed, shape, size and material properties strike the bend wall with greater intensity than in straight sections of the pipe. Currently available models for predicting the lifetime of bends are inaccurate (over-predicting by 80%). An accurate predictive method would lead to improvements in the structure of planned maintenance programmes, thus reducing unplanned shutdowns and ultimately the downtime costs associated with them. This is the main motivation behind the current research. The paper reports on two aspects of the first phase of the study undertaken for the current project: (1) development and implementation, and (2) testing of the modelling environment. The model framework encompasses Computational Fluid Dynamics (CFD) related engineering tools, based on Eulerian (gas) and Lagrangian (particle) approaches to represent the two distinct conveyed phases, to predict the lifetime of conveyor bends. The method attempts to account for the effect of erosion on the pipe wall via particle impacts, taking into account the angle of attack, impact velocity, shape/size and the material properties of the wall and conveyed material, within a CFD framework. Only a handful of researchers use CFD as the basis for predicting the particle motion, see for example [1-4]. It is hoped that this will lead to more realistic predictions of the wear profile. Results for two three-dimensional test cases using the commercially available CFD code PHOENICS are presented. These are reported in relation to the impact intensity and the sensitivity to the inlet particle distributions.
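As an illustration of the per-impact accounting described above, the sketch below applies a generic Finnie-style ductile-erosion correlation; the constants and the impact list are placeholders, not the calibrated model of the study.

```python
# Sketch of per-impact erosion accumulation on a bend wall using Finnie's
# ductile-erosion angle function with a power-law velocity dependence.
# K, n and the particle data are illustrative placeholders, not values
# calibrated for the study's wall/particle materials.
import math

K, n = 2.0e-9, 2.4                     # empirical constants (illustrative)

def angle_factor(theta):
    """Finnie's impact-angle dependence for ductile materials."""
    if theta <= math.radians(18.43):
        return math.sin(2 * theta) - 3 * math.sin(theta) ** 2
    return math.cos(theta) ** 2 / 3

def eroded_mass(v, theta, m_p):
    """Mass eroded by one impact: speed v [m/s], angle theta [rad], mass m_p [kg]."""
    return K * m_p * v ** n * angle_factor(theta)

# Impact list as a Lagrangian particle tracker would report at one wall cell:
impacts = [(25.0, math.radians(30), 1e-8), (18.0, math.radians(12), 1e-8)]
wall_loss = sum(eroded_mass(v, th, m) for v, th, m in impacts)
print(f"accumulated mass loss at this wall cell: {wall_loss:.3e} kg")
```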
Abstract:
This work introduces a tessellation-based model for the declivity analysis of geographic regions. The analysis of the relief declivity, which is embedded in the rules of the model, categorizes each tessellation cell, with respect to the whole considered region, according to the (positive, negative, null) sign of the declivity of the cell. Such information is represented in the states assumed by the cells of the model. The overall configuration of such cells allows the division of the region into subregions of cells belonging to the same category, that is, presenting the same declivity sign. In order to control the errors coming from the discretization of the region into tessellation cells, or resulting from numerical computations, interval techniques are used. The implementation of the model is naturally parallel since the analysis is performed on the basis of local rules. An immediate application is in geophysics, where an adequate subdivision of geographic areas into segments presenting similar topographic characteristics is often convenient.
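A minimal sketch of the cell-classification rule, assuming each cell's declivity is known only up to an error ±eps that is absorbed by an interval; the data and eps are illustrative, not the paper's formalism.

```python
# Sketch: classify tessellation cells by declivity sign, using an interval
# [d - eps, d + eps] to absorb discretization and rounding errors, in the
# spirit of the model's local rules. Data and eps are illustrative.

def declivity_sign(d, eps):
    lo, hi = d - eps, d + eps          # interval enclosure of the declivity
    if lo > 0:
        return "positive"
    if hi < 0:
        return "negative"
    return "null"                      # zero cannot be excluded

cells = {(0, 0): 0.12, (0, 1): -0.03, (1, 0): 0.0, (1, 1): 0.004}
states = {c: declivity_sign(d, eps=0.01) for c, d in cells.items()}
print(states)
# Adjacent cells sharing a state are then merged into subregions; each
# cell's rule is local, which is what makes the model naturally parallel.
```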
Abstract:
Finding creative ways of utilising renewable energy sources for electricity generation, especially in remote areas and particularly in countries dependent on imported energy, while increasing energy security and reducing the cost of isolated off-grid systems, has become an urgent necessity for the effective strategic planning of energy systems. The aim of this research project was to design and implement a new decision support framework for the optimal design of hybrid micro grids considering different types of technologies, where the design objective is to minimize the total cost of the hybrid micro grid while satisfying the required electric demand. A comprehensive literature review of existing analytical decision support tools and of the literature on hybrid power systems (HPS) identified the gaps and the necessary conceptual parts of an analytical decision support framework. As a result, this research proposes and reports an Iterative Analytical Design Framework (IADF) and its implementation for the optimal design of an off-grid renewable energy based hybrid smart micro-grid (OGREH-SμG) with intra- and inter-grid (μG2μG & μG2G) synchronization capabilities and a novel storage technique. The modelling, design and simulations were conducted using the HOMER Energy and MATLAB/Simulink energy planning and design software platforms. The design, experimental proof of concept, verification and simulation of a new storage concept incorporating a hydrogen peroxide (H2O2) fuel cell are also reported. The implementation of the smart components, consisting of a Raspberry Pi devised and programmed for the semi-smart energy management framework (a novel control strategy, including synchronization capabilities) of the OGREH-SμG, is also detailed and reported. The hybrid μG was designed and implemented as a case study for the Bayir/Jordan area. This research has provided an alternative decision support tool that solves the renewable energy integration problem of determining the optimal number, type and size of components with which to configure the hybrid μG. In addition, this research has formulated and reported a linear cost function to mathematically verify the computer-based simulations and fine-tune the solutions in the iterative framework, and concluded that such solutions converge to a correct optimal approximation when considering the properties of the problem. This investigation has demonstrated that the implemented OGREH-SμG design, which incorporates wind and solar generation complemented with batteries, two fuel cell units and a diesel generator, is a unique approach: utilizing indigenous renewable energy with the capability to synchronize with other μ-grids is an effective and optimal way of electrifying developing countries with fewer resources in a sustainable way, with minimal impact on the environment, while also achieving reductions in GHG emissions. The dissertation concludes with suggested future extensions to this work.
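As an illustration of the kind of linear cost function mentioned above, here is a minimal sizing sketch under invented cost and capacity-credit assumptions; the actual IADF iterates over HOMER/Simulink results rather than a single linear program.

```python
# Sketch: a linear cost function for component sizing, solved with scipy.
# Decision variables are installed capacities [kW] of PV, wind, battery and
# diesel; costs and the single demand-coverage constraint are placeholders.
from scipy.optimize import linprog

cost = [800, 1200, 300, 500]            # annualized cost per kW (illustrative)
# Capacity-credit-weighted capacities must cover a 100 kW peak demand:
#   0.4*pv + 0.6*wind + 0.9*batt + 1.0*diesel >= 100
A_ub = [[-0.4, -0.6, -0.9, -1.0]]       # negated: linprog uses A_ub @ x <= b_ub
b_ub = [-100.0]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
print("sizes [kW]:", res.x, " minimum annual cost:", res.fun)
```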
Abstract:
The role of computer modeling has grown recently to integrate itself as an inseparable tool in experimental studies for the optimization of automotive engines and the development of future fuels. Traditionally, computer models rely on simplified global reaction steps to simulate combustion and pollutant formation inside the internal combustion engine. With the current interest in advanced combustion modes and injection strategies, this approach depends on arbitrary adjustment of model parameters that can reduce the credibility of the predictions. The purpose of this study is to enhance the combustion model of KIVA, a computational fluid dynamics code, by coupling its fluid mechanics solution with detailed kinetic reactions solved by the chemistry solver CHEMKIN. To this end, an engine-friendly reaction mechanism for n-heptane was selected to simulate diesel oxidation. Each cell in the computational domain is considered a perfectly-stirred reactor which undergoes adiabatic constant-volume combustion. The model was applied to ideally-prepared homogeneous-charge compression-ignition (HCCI) combustion and direct injection (DI) diesel combustion. Ignition and combustion results show that the code successfully simulates the premixed HCCI scenario when compared to traditional combustion models. Direct injection cases, on the other hand, do not offer a reliable prediction, mainly due to the lack of a turbulent-mixing model, inherent in the perfectly-stirred reactor formulation. In addition, the model is sensitive to intake conditions and experimental uncertainties, which requires the implementation of enhanced predictive tools. It is recommended that future improvements consider turbulent-mixing effects as well as optimization techniques to accurately simulate the actual in-cylinder process with reduced computational cost. Furthermore, the model requires the extension of existing fuel oxidation mechanisms to include pollutant formation kinetics for emission control studies.
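The cell-level chemistry step can be sketched with Cantera standing in for CHEMKIN; GRI-Mech 3.0 (a methane mechanism bundled with Cantera) is used here purely as a placeholder for the n-heptane mechanism.

```python
# Sketch: one CFD cell advanced as an adiabatic, constant-volume perfectly
# stirred reactor, with Cantera standing in for CHEMKIN and GRI-Mech 3.0
# (methane) standing in for the n-heptane mechanism.
import cantera as ct

gas = ct.Solution("gri30.yaml")
gas.TPX = 900.0, 40e5, "CH4:1, O2:2, N2:7.52"   # cell state at start of step

reactor = ct.IdealGasReactor(gas)   # no walls or inlets: adiabatic, constant V
net = ct.ReactorNet([reactor])
net.advance(1e-4)                   # advance one CFD time step (100 us)

# The updated temperature and composition would then be handed back to the
# flow solver (KIVA in the study) for the next transport step.
print(reactor.T, reactor.thermo["OH"].X[0])
```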
Abstract:
We revisit the visibility problem, traditionally known in the Computer Graphics and Vision fields as the process of computing a (potentially) visible set of primitives in the computational model of a scene. We propose a hybrid solution that uses a lean structure (in the sense of data reduction), a triangulation of the type , to accelerate the task of searching for visible primitives. We arrive at a solution that is useful for real-time, on-line, interactive applications such as 3D visualization. In such applications the main goal is to load as few primitives from the scene as possible during the rendering stage. For this purpose, our algorithm performs culling using a hybrid paradigm based on view-frustum, back-face and occlusion culling. Results have shown substantial improvement over these traditional approaches when applied separately. This novel approach can be used in devices with no dedicated graphics processors or with low processing power, such as cell phones or embedded displays, or to visualize data over the Internet, as in virtual museum applications.
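Of the three culling stages combined in the hybrid paradigm, back-face culling is the simplest to state; here is a minimal sketch, assuming counter-clockwise vertex winding and a known camera position.

```python
# Sketch: back-face culling, one of the three tests combined in the hybrid
# pipeline. A triangle is discarded when its outward normal faces away from
# the viewer; frustum and occlusion tests would then filter the survivors.
import numpy as np

def visible_faces(vertices, triangles, eye):
    keep = []
    for tri in triangles:
        a, b, c = (vertices[i] for i in tri)
        normal = np.cross(b - a, c - a)     # outward if winding is CCW
        if np.dot(normal, eye - a) > 0:     # faces toward the camera
            keep.append(tri)
    return keep

verts = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, -1]])
tris = [(0, 1, 2), (0, 2, 3)]
print(visible_faces(verts, tris, eye=np.array([0., 0, 5])))   # keeps (0, 1, 2)
```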
Abstract:
The analysis of steel and composite frames has traditionally been carried out by idealizing beam-to-column connections as either rigid or pinned. Although some advanced analysis methods have been proposed to account for semi-rigid connections, the performance of these methods strongly depends on the proper modeling of connection behavior. The primary challenge of modeling beam-to-column connections is their inelastic response and continuously varying stiffness, strength, and ductility. In this dissertation, two distinct approaches—mathematical models and informational models—are proposed to account for the complex hysteretic behavior of beam-to-column connections. The performance of the two approaches is examined and is then followed by a discussion of their merits and deficiencies. To capitalize on the merits of both mathematical and informational representations, a new approach, a hybrid modeling framework, is developed and demonstrated through modeling beam-to-column connections. Component-based modeling is a compromise spanning two extremes in the field of mathematical modeling: simplified global models and finite element models. In the component-based modeling of angle connections, the five critical components of excessive deformation are identified. Constitutive relationships of angles, column panel zones, and contact between angles and column flanges, are derived by using only material and geometric properties and theoretical mechanics considerations. Those of slip and bolt hole ovalization are simplified by empirically-suggested mathematical representation and expert opinions. A mathematical model is then assembled as a macro-element by combining rigid bars and springs that represent the constitutive relationship of components. Lastly, the moment-rotation curves of the mathematical models are compared with those of experimental tests. In the case of a top-and-seat angle connection with double web angles, a pinched hysteretic response is predicted quite well by complete mechanical models, which take advantage of only material and geometric properties. On the other hand, to exhibit the highly pinched behavior of a top-and-seat angle connection without web angles, a mathematical model requires components of slip and bolt hole ovalization, which are more amenable to informational modeling. An alternative method is informational modeling, which constitutes a fundamental shift from mathematical equations to data that contain the required information about underlying mechanics. The information is extracted from observed data and stored in neural networks. Two different training data sets, analytically-generated and experimental data, are tested to examine the performance of informational models. Both informational models show acceptable agreement with the moment-rotation curves of the experiments. Adding a degradation parameter improves the informational models when modeling highly pinched hysteretic behavior. However, informational models cannot represent the contribution of individual components and therefore do not provide an insight into the underlying mechanics of components. In this study, a new hybrid modeling framework is proposed. In the hybrid framework, a conventional mathematical model is complemented by the informational methods. The basic premise of the proposed hybrid methodology is that not all features of system response are amenable to mathematical modeling, hence considering informational alternatives. 
This may be because (i) the underlying theory is not available or not sufficiently developed, or (ii) the existing theory is too complex and therefore not suitable for modeling within building frame analysis. The role of informational methods is to model aspects that the mathematical model leaves out. The autoprogressive algorithm and self-learning simulation extract the missing aspects from a system response. In a hybrid framework, experimental data is an integral part of modeling, rather than being used strictly for validation. The potential of the hybrid methodology is illustrated through modeling the complex hysteretic behavior of beam-to-column connections. Mechanics-based components of deformation, such as angles, flange-plates, and the column panel zone, are idealized into a mathematical model by using a complete mechanical approach. Although the mathematical model represents envelope curves in terms of initial stiffness and yielding strength, it is not capable of capturing the pinching effects. Pinching is caused mainly by separation between angles and column flanges as well as slip between angles/flange-plates and beam flanges. These components of deformation are suitable for informational modeling. Finally, the moment-rotation curves of the hybrid models are validated against those of the experimental tests. The comparison shows that the hybrid models are capable of representing the highly pinched hysteretic behavior of beam-to-column connections. In addition, the developed hybrid model is successfully used to predict the behavior of a newly-designed connection.
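A schematic illustration of the hybrid decomposition, with a bilinear spring standing in for the component-based macro-element and a small regressor standing in for the informational (neural network) part; all numbers and the pinched "measured" response are illustrative.

```python
# Sketch of the hybrid framework: moment = mechanics-based envelope +
# learned correction. The bilinear spring stands in for the component-based
# macro-element; the regressor stands in for the neural-network part.
import numpy as np
from sklearn.neural_network import MLPRegressor

K0, My = 5.0e4, 150.0                  # initial stiffness, yield moment (toy)

def envelope(theta):
    """Bilinear mechanics-based estimate of the moment at rotation theta."""
    return np.clip(K0 * theta, -My, My)

# Pretend experimental pairs (rotation, measured moment) with pinching:
theta = np.linspace(-0.01, 0.01, 200)
measured = envelope(theta) - 40.0 * np.sin(300 * theta)
residual = measured - envelope(theta)      # what the mechanics model misses

net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
net.fit(theta.reshape(-1, 1), residual)    # informational part learns residual

hybrid = envelope(theta) + net.predict(theta.reshape(-1, 1))
print(float(np.max(np.abs(hybrid - measured))))   # hybrid fit error
```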
Abstract:
As the formative agents of cloud droplets, aerosols play an undeniably important role in the development of clouds and precipitation. Few meteorological models have been developed or adapted to simulate aerosols and their contribution to cloud and precipitation processes. The Weather Research and Forecasting model (WRF) has recently been coupled with an atmospheric chemistry suite and is jointly referred to as WRF-Chem, allowing atmospheric chemistry and meteorology to influence each other’s evolution within a mesoscale modeling framework. Provided that the model physics are robust, this framework allows the feedbacks between aerosol chemistry, cloud physics, and dynamics to be investigated. This study focuses on the effects of aerosols on meteorology, specifically the interaction of aerosol chemical species with the microphysical processes represented within the framework of WRF-Chem. Aerosols are represented by eight size bins using the Model for Simulating Aerosol Interactions and Chemistry (MOSAIC) sectional parameterization, which is linked to the Purdue Lin bulk microphysics scheme. The aim of this study is to examine the sensitivity of deep convective precipitation modeled by the 2D WRF-Chem to varying aerosol number concentration and aerosol type. A systematic study has been performed regarding the effects of aerosols on parameters such as total precipitation, updraft/downdraft speed, distribution of hydrometeor species, and organizational features, within idealized maritime and continental thermodynamic environments. Initial results were obtained using WRFv3.0.1, and a second series of tests was run using WRFv3.2 after several changes to the activation, autoconversion, and Lin et al. microphysics schemes added by the WRF community, as well as the implementation of prescribed vertical levels by the author. The results of the WRFv3.2 runs contrasted starkly with the WRFv3.0.1 runs. The WRFv3.0.1 runs produced a propagating system resembling a developing squall line, whereas the WRFv3.2 runs did not. The response of total precipitation, updraft/downdraft speeds, and system organization to increasing aerosol concentrations was opposite between runs with different versions of WRF. Results of the WRFv3.2 runs, however, were in better agreement, in the timing and magnitude of vertical velocity and hydrometeor content, with a WRFv3.0.1 run using single-moment Lin et al. microphysics than with WRFv3.0.1 runs with chemistry. One result consistent throughout all simulations was an inhibition of warm-rain processes due to enhanced aerosol concentrations, which resulted in a delay of precipitation onset ranging from 2-3 minutes in WRFv3.2 runs up to 15 minutes in WRFv3.0.1 runs. This result was not observed in a previous study by Ntelekos et al. (2009) using WRF-Chem, perhaps due to their use of coarser horizontal and vertical resolution in their experiment. The changes to microphysical processes such as activation and autoconversion from WRFv3.0.1 to WRFv3.2, along with changes in the packing of vertical levels, had more impact than the varying aerosol concentrations, even though the range of aerosol concentrations tested was greater than that observed in field studies. In order to take full advantage of the input of aerosols now offered by the chemistry module in WRF, the author recommends that a fully double-moment microphysics scheme be linked, rather than the limited double-moment Lin et al. scheme that currently exists.
With this modification, the WRF-Chem will be a powerful tool for studying aerosol-cloud interactions and allow comparison of results with other studies using more modern and complex microphysical parameterizations.
Abstract:
Faculty of Law and Administration: Department of Constitutional Law (Wydział Prawa i Administracji: Katedra Prawa Konstytucyjnego)
Abstract:
Recent developments in automation, robotics and artificial intelligence have pushed these technologies into wider usage in recent years, and nowadays driverless transport systems are already state-of-the-art on certain legs of transportation. This has pushed the maritime industry to join the advancement. The case organisation, the AAWA initiative, is a joint industry-academia research consortium with the objective of developing readiness for the first commercial autonomous solutions, exploiting state-of-the-art autonomous and remote technology. The initiative develops both autonomous and remote operation technology for navigation, machinery, and all on-board operating systems. The aim of this study is to develop a model with which to estimate and forecast the operational costs, and thus enable comparisons between manned and autonomous cargo vessels. The building process of the model is also described and discussed. Furthermore, the model aims to track and identify the critical success factors of the chosen ship design, and to enable monitoring and tracking of the incurred operational costs as the life cycle of the vessel progresses. The study adopts the constructive research approach, as the aim is to develop a construct to meet the needs of a case organisation. Data has been collected through discussions and meetings with consortium members and researchers, as well as through written and internal communications material. The model itself is built using activity-based life cycle costing, which enables both realistic cost estimation and forecasting as well as the identification of critical success factors, owing to the process orientation adopted from activity-based costing and the statistical nature of Monte Carlo simulation techniques. As the model was able to meet the multiple aims set for it, and the case organisation was satisfied with it, it can be argued that activity-based life cycle costing is a suitable method for cost estimation and forecasting in the case of autonomous cargo vessels. The model was able to perform the cost analysis and forecasting, as well as to trace the critical success factors. Later on, it also enabled, albeit hypothetically, monitoring and tracking of the incurred costs. By collecting costs this way, it was argued that the activity-based LCC model is able to facilitate learning from, and continuous improvement of, the autonomous vessel. As for the building process of the model, an individual approach was chosen, while still using the implementation and model-building steps presented in the existing literature. This was due to two factors: the nature of the model and – perhaps even more importantly – the nature of the case organisation. Furthermore, the loosely organised network structure means that knowing the case organisation and its aims is of great importance when conducting constructive research.
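A minimal sketch of the costing mechanics described above, assuming invented activities, cost rates and distributions rather than figures from the AAWA model: activity-based cost pools per life-cycle year, with Monte Carlo sampling of the uncertain cost drivers.

```python
# Sketch: activity-based life cycle costing with Monte Carlo sampling.
# Each activity has an uncertain annual cost-driver volume; summing over the
# life cycle yields a cost distribution. All numbers are placeholders.
import numpy as np

rng = np.random.default_rng(0)
N, years = 10_000, 25                  # Monte Carlo draws, vessel life [yr]

# activity: (mean annual driver volume, std, cost rate per unit)
activities = {
    "remote monitoring hours": (2000, 200, 60.0),
    "port calls":              (50, 5, 4000.0),
    "maintenance visits":      (4, 1, 25000.0),
}

totals = np.zeros(N)
for mean, std, rate in activities.values():
    volumes = rng.normal(mean, std, size=(N, years)).clip(min=0)
    totals += (volumes * rate).sum(axis=1)

print(f"expected life cycle cost: {totals.mean():,.0f}")
print(f"5th-95th percentile: {np.percentile(totals, [5, 95])}")
```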
Abstract:
The share of variable renewable energy in electricity generation has seen exponential growth during recent decades, and due to the heightened pursuit of environmental targets, the trend will continue at an increased pace. The two most important resources, wind and insolation, both bear the burden of intermittency, creating a need for regulation and posing a threat to grid stability. One way to deal with the imbalance between demand and generation is to store electricity temporarily, which was addressed in this thesis by implementing a dynamic model of adiabatic compressed air energy storage (CAES) with the Apros dynamic simulation software. Based on a literature review, the existing models were found insufficient for studying transient situations due to their simplifications, and despite its importance, the investigation of part-load operation has not yet been possible with satisfactory precision. As a key result of the thesis, the cycle efficiency at the design point was simulated to be 58.7%, which correlated well with the literature and was validated through analytical calculations. The performance at part load was validated against models in the literature, showing good correlation. By introducing wind resource and electricity demand data to the model, grid operation of CAES was studied. In order to enable dynamic operation, start-up and shutdown sequences were approximated in a dynamic environment for, as far as is known, the first time, and a user component for compressor variable guide vanes (VGV) was implemented. Even in its current state, the modularly designed model offers a framework for numerous studies. The validity of the model is limited by the accuracy of the VGV correlations at part load; in addition, the implementation of heat losses to the thermal energy storage is necessary to enable longer simulations. More extensive use of forecasts is an important development target if the system operation is to be optimised in the future.
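The quoted cycle efficiency can be illustrated with a back-of-envelope round-trip balance: electricity into the compressor train versus electricity out of the expander, with the thermal store recycling compression heat. The component efficiencies below are illustrative assumptions, not outputs of the Apros model.

```python
# Sketch: round-trip efficiency of an adiabatic CAES cycle as the ratio of
# turbine output to compressor input electricity. Illustrative numbers only;
# the thesis computes this figure from a full Apros dynamic model.
E_comp_in = 100.0                  # MWh of electricity into the compressor train
eta_motor, eta_gen = 0.97, 0.97    # drive-train efficiencies (assumed)
heat_recovery = 0.90               # fraction of compression heat recycled
expander_yield = 0.65              # MWh turbine work per MWh stored exergy

stored = E_comp_in * eta_motor * heat_recovery
E_turb_out = stored * expander_yield * eta_gen
print(f"round-trip efficiency: {E_turb_out / E_comp_in:.1%}")   # ~55%
```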
Hybrid DSP and FPGA architecture for the implementation of active power filter controllers
Abstract:
The presence of non-linear loads at a point in the distribution system may deform the voltage waveform due to the consumption of non-sinusoidal currents. The use of active power filters allows a significant reduction of the harmonic content in the supply current. However, the processing of digital control structures for these filters may require high-performance hardware, particularly for the reference current calculation. This work describes the development of hardware structures with high processing capability for application in active power filters. To this end, it considers an architecture that allows parallel processing using programmable logic devices. The developed structure uses a hybrid model combining a DSP and an FPGA. The DSP is used for the acquisition of current and voltage signals, the calculation of the fundamental-current controllers and PWM generation. The FPGA is used for intensive signal processing, such as the harmonic compensators. The experimental analysis shows significant reductions in processing time when compared to traditional approaches using only a DSP. The experimental results validate the designed structure, and these results are compared with those of architectures reported in the literature.
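Harmonic compensators of the kind offloaded to the FPGA are commonly realized as resonant controllers tuned at selected harmonics. A minimal discrete-time sketch, assuming proportional-resonant terms at the 5th and 7th harmonics of a 60 Hz fundamental; the gains and the bilinear discretization are illustrative, not the paper's design.

```python
# Sketch: harmonic compensation with proportional-resonant controllers at the
# 5th and 7th harmonics, of the kind mapped onto the FPGA. Gains and the
# Tustin (bilinear) discretization are illustrative placeholders.
import numpy as np
from scipy.signal import bilinear, lfilter

fs, f1 = 20_000, 60.0                  # sampling and fundamental frequency [Hz]
Kp = 0.5                               # proportional gain

def resonant_coeffs(h, Kr=100.0):
    """Discretize the ideal resonator Kr*s / (s^2 + w^2) at harmonic h."""
    w = 2 * np.pi * f1 * h
    return bilinear([Kr, 0.0], [1.0, 0.0, w**2], fs=fs)

t = np.arange(0, 0.1, 1 / fs)
error = np.sin(2 * np.pi * 5 * f1 * t) + 0.5 * np.sin(2 * np.pi * 7 * f1 * t)

u = Kp * error
for h in (5, 7):
    b, a = resonant_coeffs(h)
    u += lfilter(b, a, error)          # each term = one FPGA compensator block
print(u[:5])
```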
Abstract:
This paper provides a new reading of a classical economic relation: the short-run Phillips curve. Our point is that, when dealing with inflation and unemployment, policy-making can be understood as a multicriteria decision-making problem. Hence, we use so-called multiobjective programming in connection with a computable general equilibrium (CGE) model to determine the combinations of policy instruments that provide efficient combinations of inflation and unemployment. This approach results in an alternative version of the Phillips curve, labelled the efficient Phillips curve. Our aim is to present an application of CGE models to a new area of research that can be especially useful when addressing policy exercises with real data. We apply our methodological proposal to a particular regional economy, Andalusia, in the south of Spain. This tool can provide some keys for policy advice and policy implementation in the fight against unemployment and inflation.
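The multiobjective step can be illustrated independently of the CGE model: sweep a weight between the two criteria and solve each weighted problem to trace the efficient inflation-unemployment frontier. The quadratic response functions below are toy stand-ins for the CGE mapping from policy instruments to outcomes.

```python
# Sketch: tracing an "efficient Phillips curve" by weighted-sum multiobjective
# optimization over a single policy instrument g. The response functions are
# invented stand-ins for the CGE model's instrument-to-outcome mapping.
import numpy as np
from scipy.optimize import minimize_scalar

def inflation(g):    return 1.0 + 0.8 * g + 0.1 * g**2    # % per year (toy)
def unemployment(g): return 12.0 - 2.5 * g + 0.3 * g**2   # % of labour force

frontier = []
for w in np.linspace(0.05, 0.95, 10):      # weight on inflation vs unemployment
    res = minimize_scalar(
        lambda g: w * inflation(g) + (1 - w) * unemployment(g),
        bounds=(0.0, 5.0), method="bounded")
    frontier.append((inflation(res.x), unemployment(res.x)))

for pi, u in frontier:                     # efficient combinations (the curve)
    print(f"inflation {pi:5.2f}%  unemployment {u:5.2f}%")
```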
Abstract:
In this work we propose a new hybrid system for multi-class sentiment analysis based on the use of the General Inquirer (GI) dictionary and a hierarchical approach to the Logistic Model Tree (LMT) classifier. This new system is composed of three layers: the bipolar layer (BL), which consists of one LMT (LMT-1) for sentiment polarity classification, while the second layer is the intensity layer (IL), comprising two LMTs (LMT-2 and LMT-3) to separately detect three intensities of positive sentiment and three intensities of negative sentiment. Only in the construction phase, the grouping layer (GL) is used to group the positive and negative instances by employing two k-means clusterings, respectively. In the pre-processing phase, the texts are segmented into words, which are tagged, stemmed, and finally submitted to the GI dictionary in order to count and tag only the verbs, nouns, adjectives and adverbs with 24 markers, which are then used to compute the feature vectors. In the sentiment classification phase, the feature vectors are first fed to LMT-1; they are then grouped in GL according to the class label, these groups are labelled manually, and finally the positive instances are fed to LMT-2 and the negative instances to LMT-3. The three trees are trained and evaluated on the Movie Review and SenTube datasets with stratified 10-fold cross-validation. LMT-1 produces a tree with 48 leaves and a size of 95, with 90.88% accuracy, while both LMT-2 and LMT-3 produce trees with one leaf and a size of one, with 99.28% and 99.37% accuracy, respectively. The experiments show that the proposed hierarchical classification methodology gives better performance compared with other prevailing approaches.
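A compact sketch of the hierarchical routing described above, with scikit-learn logistic regression standing in for the LMT classifiers and a plain bag-of-words standing in for the GI-marker feature vectors; the layer roles follow the paper, everything else is a placeholder.

```python
# Sketch of the hierarchical layers: LMT-1 splits polarity, then LMT-2/LMT-3
# assign intensity within each polarity. LogisticRegression stands in for the
# Logistic Model Trees, and CountVectorizer for the GI-marker features.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["great film", "awful plot", "superb acting", "terrible pacing"]
polarity = [1, 0, 1, 0]                 # 1 = positive (LMT-1 targets)
intensity = [2, 1, 3, 3]                # per-polarity intensity (LMT-2/3 targets)

X = CountVectorizer().fit_transform(texts)   # placeholder for GI feature vectors
lmt1 = LogisticRegression().fit(X, polarity)

pos = [i for i, p in enumerate(polarity) if p == 1]
neg = [i for i, p in enumerate(polarity) if p == 0]
lmt2 = LogisticRegression().fit(X[pos], [intensity[i] for i in pos])
lmt3 = LogisticRegression().fit(X[neg], [intensity[i] for i in neg])

def classify(x):
    branch = lmt2 if lmt1.predict(x)[0] == 1 else lmt3   # route by polarity
    return lmt1.predict(x)[0], branch.predict(x)[0]

print(classify(X[0]))                   # (polarity, intensity) for the first text
```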
Abstract:
In this paper we present the development and implementation of a content analysis model for observing aspects related to the social mission of the public library on Facebook pages and websites. The model is unique and was developed from the literature. Four categories of analysis were designed: Generate social capital and social cohesion; Consolidate democracy and citizenship; Social and digital inclusion; and Fighting illiteracies. The model enabled the collection and analysis of data in a case study consisting of 99 Portuguese public libraries with a Facebook page. With this content analysis model we observed the facets of the social mission and read the actions with social facets on the Facebook pages and websites of the public libraries. At the end we discuss, in parallel, the results of the observation of the libraries' Facebook pages and websites. From reading the descriptions of the social mission actions, the most immediate general conclusion is that the 99 public libraries rarely publish actions of a social character on Facebook or their websites, and the results are not very satisfying. The Portuguese public libraries substantially highlight actions in the category Generate social capital and social cohesion.