881 results for "Minimization of open stack problem"


Relevance:

100.00%

Publisher:

Abstract:

In hydrodynamics, the sloshing phenomenon can be defined as the motion of the free surface of a fluid inside a container subjected to external forces and disturbances. The fluid undergoes violent motions with large deformations of its free surface, and the resulting fluid dynamics can generate considerable hydrodynamic loads which may affect the structural integrity of the container and/or compromise the stability of the vehicle carrying it. Sloshing has been extensively investigated mathematically, numerically and experimentally, the experimental approach being the most widely used owing to the complexity of the problem, for which mathematical and simulation models are still unable to predict the resulting loads with sufficient speed and accuracy. Sloshing flows are usually characterised by the presence of a multiphase (gas-liquid) fluid and turbulence. Reducing the complexity of the sloshing phenomenon as much as possible without losing the essence of the problem is the main challenge of this doctoral thesis, in which experimental work focused on canonical sloshing cases is presented and documented, with the aim of improving the understanding of the phenomenon and providing valuable information for the validation of numerical codes. Sloshing plays an important role in the maritime transport of liquefied natural gas (LNG). In recent years the LNG market has grown up to three times faster than the conventional oil and gas markets. Engineers in research laboratories and in the LNG industry continuously seek economical and safe solutions for containing, transferring and transporting large volumes of LNG. LNG carriers (LNGC) have evolved from a few vessels of 75000 m3 capacity some thirty years ago to a large fleet with capacities of 140000 m3 today, and ships with capacities between 175000 m3 and 250000 m3 are being built in growing numbers. A new LNG vessel concept, the FLNG, has recently come onto the market. An FLNG is a high value-added vessel that solves the problems of extraction, liquefaction and storage of LNG, since it carries extraction and liquefaction equipment on board, thereby eliminating the transfer operations from onshore liquefaction plants to LNGC vessels; the LNG can therefore be transferred directly from the FLNG to the LNGC in open sea. Intermediate filling levels combined with waves during transfer operations induce ship motions that generate sloshing inside the tanks of both the FLNG and the LNGC. This thesis addresses some of these sloshing problems from an experimental and statistical point of view through the following tasks:

1. A sloshing experimental rig has been set up. The rig allows model-scale rectangular sections of LNGC tanks to be tested under one-degree-of-freedom angular motion, and has been instrumented to measure motions, pressures, vibrations and temperature, and to record images and video.

2. Wave impacts generated inside a rectangular LNGC section subjected to forced regular motions have been studied by characterising the phenomenon statistically, focusing on the repeatability and ergodicity of the problem.

3. The study of impacts under regular motions has been extended to a more realistic scenario by using forced irregular motions.

4. The coupling between the sloshing generated by the moving fluid inside the LNGC tank and the dissipation of mechanical energy of an unforced one-degree-of-freedom (angular) system subjected to an external excitation has been investigated.

5. In the last part of the thesis, the interaction between the sloshing generated inside a rectangular LNGC tank section subjected to regular excitation and an elastic body clamped to the tank has been studied; this is a fluid-structure interaction problem.

Abstract: In hydrodynamics, we refer to sloshing as the motion of liquids in containers subjected to external forces with large free-surface deformations. The liquid motion dynamics can generate loads which may affect the structural integrity of the container and the stability of the vehicle that carries it. The prediction of these dynamic loads is a major challenge for engineers around the world working on the design of both the container and the vehicle. The sloshing phenomenon has been extensively investigated mathematically, numerically and experimentally. The latter has been the most fruitful so far, due to the complexity of the problem, for which the numerical and mathematical models are still incapable of accurately predicting the sloshing loads. Sloshing flows are usually characterised by the presence of multiphase interaction and turbulence. Reducing the complexity of the sloshing problem as much as possible without losing its essence is the main challenge of this PhD thesis, in which experimental work on selected canonical cases is presented and documented in order to better understand the phenomenon and, in some cases, to provide useful information for numerical validation. Liquid sloshing plays a key role in liquefied natural gas (LNG) maritime transportation. The LNG market is growing at more than three times the rate of the traditional oil and gas markets. Engineers working in research laboratories and companies are continuously looking for efficient and safe ways of containing, transferring and transporting the liquefied gas. LNG carriers (LNGC) have evolved from a few 75000 m3 vessels thirty years ago to a large fleet of ships with a capacity of 140000 m3 nowadays, with an increasing number of 175000 m3 and 250000 m3 units. The concept of FLNG (Floating Liquefied Natural Gas) has appeared recently. An FLNG unit is a high value-added vessel which can solve the problems of production, treatment, liquefaction and storage of LNG, because the vessel is equipped with an extraction and liquefaction facility. The LNG is transferred from the FLNG to the LNGC in open sea. The combination of partial fillings and wave-induced motions may generate sloshing flows inside both the LNGC and the FLNG tanks. This work has dealt with sloshing problems from an experimental and statistical point of view. A series of tasks have been carried out:

1. A sloshing rig has been set up. It allows tanks to be tested with one-degree-of-freedom angular motion. The rig has been instrumented to measure motions and pressures and to record video and images.

2. Regular motion impacts inside a rectangular-section LNGC tank model have been studied with forced motion tests, in order to characterise the phenomenon from a statistical point of view by assessing the repeatability and practical ergodicity of the problem.

3. The regular motion analysis has been extended to an irregular motion framework in order to reproduce more realistic scenarios.

4. The coupled motion of a single-degree-of-freedom angular motion system excited by an external moment and affected by the fluid moment and the mechanical energy dissipation induced by sloshing inside the tank has been investigated.

5. The last task of the thesis has been an experimental investigation of the strong interaction between a sloshing flow in a rectangular section of an LNGC tank subjected to regular excitation and an elastic body clamped to the tank; it is thus a fluid-structure interaction problem.
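Task 2's repeatability assessment can be illustrated with a toy calculation; this is not the thesis code, and the Gumbel-shaped peak samples and the detection threshold below are synthetic assumptions. Peak impact pressures are extracted from two nominally identical runs and their empirical distributions are compared with a two-sample Kolmogorov-Smirnov test.

# Minimal repeatability check (illustrative only): compare peak impact
# pressures from two nominally identical sloshing runs with a KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

def peak_pressures(signal, threshold):
    """Return local maxima of a pressure trace that exceed a threshold."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if signal[i] > threshold and signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]:
            peaks.append(signal[i])
    return np.array(peaks)

# Synthetic stand-ins for two repeated test runs (replace with measured traces).
run_a = rng.gumbel(loc=1.0, scale=0.3, size=5000)
run_b = rng.gumbel(loc=1.0, scale=0.3, size=5000)

peaks_a = peak_pressures(run_a, threshold=1.5)
peaks_b = peak_pressures(run_b, threshold=1.5)

stat, p_value = ks_2samp(peaks_a, peaks_b)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3f}")

A large p-value is consistent with both runs sampling the same peak-pressure distribution, i.e. with practical repeatability of the impacts.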

Relevance:

100.00%

Publisher:

Abstract:

The global economic structure, with its decentralized production and the consequent increase in freight traffic all over the world, creates considerable problems and challenges for the freight transport sector. This situation has led shipping to become the most suitable and cheapest way to transport goods. Ports are thus configured as nodes of critical importance in the logistics supply chain, as the link between two transport systems, sea and land. The increase in activity at seaports is producing three undesirable effects: increasing road congestion, lack of open space in port installations and a significant environmental impact on seaports. These adverse effects can be mitigated by moving part of the activity inland. Implementation of dry ports is a possible solution and would also provide an opportunity to strengthen intermodal solutions as part of an integrated and more sustainable transport chain, acting as a link between the road and railway networks. In this sense, the implementation of dry ports allows the links of the transport chain to be separated, keeping the routes of the lowest-capacity and most polluting means of transport as short as possible. Thus, the decision of where to locate a dry port demands a thorough analysis of the whole logistics supply chain, with the objective of transferring the largest possible volume of goods from road to more energy-efficient means of transport, such as rail or short-sea shipping, that are less harmful to the environment. However, the decision of where to locate a dry port must also ensure the sustainability of the site. The main goal of this article is therefore to investigate the variables influencing the sustainability of dry port locations and how this sustainability can be evaluated. With this objective, we present a methodology for assessing the sustainability of locations by the use of Multi-Criteria Decision Analysis (MCDA) and Bayesian Networks (BNs). MCDA is used to establish a scoring, whilst BNs were chosen to eliminate arbitrariness in setting the weightings, using a technique that prioritizes each variable according to the relationships established within the set of variables. In order to determine the relationships between all the variables involved in the decision, and hence the importance of each factor and variable, we built a BN using the K2 algorithm. To obtain the scores of each variable, we used complete cartography analysed in ArcGIS. Recognising that choosing the most appropriate location for a dry port is a multidisciplinary geographical problem with significant economic, social and environmental implications, we consider 41 variables (grouped into 17 factors) which respond to this need. As a case study, the sustainability of all 10 existing dry ports in Spain has been evaluated. In this set of logistics platforms, we found that the most important variables for achieving sustainability are those related to environmental protection, so the sustainability of the locations requires great respect for the natural and urban environments in which they are set.
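As a rough sketch of how BN-derived weights and GIS-based scores could be combined into a single sustainability ranking, the snippet below uses a plain weighted sum; all factor names, weights and scores are invented for illustration, and the K2 structure learning itself is not reproduced here.

# Illustrative aggregation of location scores (invented data): the weights
# stand in for variable importances derived from the learned Bayesian network,
# and the per-factor scores stand in for values extracted from GIS layers.
import numpy as np

factors = ["environmental protection", "rail accessibility", "land cost"]
weights = np.array([0.5, 0.3, 0.2])      # assumed BN-derived importances
weights = weights / weights.sum()        # normalise to sum to 1

scores = {                               # candidate locations, scores in [0, 1]
    "Location A": np.array([0.9, 0.6, 0.4]),
    "Location B": np.array([0.5, 0.9, 0.7]),
    "Location C": np.array([0.7, 0.5, 0.9]),
}

ranking = sorted(((name, float(weights @ s)) for name, s in scores.items()),
                 key=lambda item: item[1], reverse=True)
for name, value in ranking:
    print(f"{name}: sustainability score = {value:.2f}")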

Relevance:

100.00%

Publisher:

Abstract:

In this work a p-adaptation (modification of the polynomial order) strategy based on the minimization of the truncation error is developed for high-order discontinuous Galerkin methods. The truncation error is approximated by means of a truncation error estimation procedure, which enables the identification of mesh regions that require adaptation. Three truncation error estimation approaches are developed, termed a posteriori, quasi-a priori and quasi-a priori corrected. Fine solutions, obtained by enriching the polynomial order, are required to solve the numerical problem with adequate accuracy. Of the three truncation error estimation methods, the first requires time-converged solutions, while the last two rely on non-converged solutions, which leads to faster computations. Based on these truncation error estimation methods, algorithms for mesh adaptation were designed and tested. Firstly, an isotropic adaptation approach is presented, which leads to equally distributed polynomial orders in the different coordinate directions. This first implementation is improved by incorporating a method to extrapolate the truncation error, which results in a significant reduction of computational cost. Secondly, the employed high-order method permits the spatial decoupling of the estimated errors and enables anisotropic p-adaptation. The incorporation of anisotropic features leads to meshes with different polynomial orders in the different coordinate directions, so that flow features related to the geometry are better resolved. These adaptations result in a significant reduction of degrees of freedom and computational cost, although the amount of improvement depends on the test case. Finally, this anisotropic approach is extended by using error extrapolation, which leads to an even higher reduction in computational cost. These strategies are verified and compared in terms of accuracy and computational cost for the Euler and the compressible Navier-Stokes equations. The main result is that the two quasi-a priori methods achieve a significant reduction in computational cost when compared to a uniform polynomial enrichment. Namely, for a viscous boundary layer flow, we obtain speedups by factors of 6.6 and 7.6 for the quasi-a priori and quasi-a priori corrected approaches, respectively.

RESUMEN: In this work, a p-adaptation strategy (modification of the polynomial order) for high-order discontinuous Galerkin methods has been developed, based on the minimization of the truncation error. The truncation error is estimated using the tau-estimation method, and the estimator allows the identification of mesh regions that require adaptation. Three estimation techniques are distinguished: a posteriori, quasi-a priori and quasi-a priori with correction. All strategies require a solution obtained on a fine mesh, which is produced by uniformly increasing the polynomial order. However, while the first requires this solution to be converged in time, the other two use non-converged solutions, which translates into a lower computational cost. In this work, mesh adaptation algorithms based on tau-estimation methods have been designed and tested. First, an isotropic adaptation algorithm is presented, which leads to discretizations with the same polynomial order in all spatial directions. This first implementation is improved by including a method to extrapolate the truncation error, which results in a significant reduction of the computational cost. Second, the high-order method allows the spatial decoupling of the estimated errors, enabling anisotropic adaptation. The meshes obtained with this technique have different polynomial orders in each of the spatial directions; the final mesh has an optimal distribution of polynomial orders, which are related to the flow features, which in turn depend on the geometry. These adaptation techniques significantly reduce the degrees of freedom and the computational cost. Finally, this anisotropic approach is extended using extrapolation of the truncation error, which leads to an even lower computational cost. The strategies are verified and compared in terms of accuracy and computational cost using the Euler and Navier-Stokes equations. The two quasi-a priori methods achieve a significant reduction of the computational cost compared with a uniform increase of the polynomial order. Specifically, for a viscous boundary layer, we obtain speedups in computation time of 6.6 and 7.6 for the quasi-a priori and quasi-a priori corrected approaches, respectively.
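The isotropic adaptation loop described above can be sketched as follows; the per-element error model and the element data are placeholders, not the thesis' tau-estimation code.

# Schematic isotropic p-adaptation: raise each element's polynomial order
# until its estimated truncation error falls below a target tolerance.
def estimate_truncation_error(element, order):
    """Placeholder for the tau-estimation of one element at a given order."""
    return element["error_p1"] / (2.0 ** (order - 1))   # assumed decay model

def adapt_orders(elements, tolerance, p_min=1, p_max=8):
    """Assign each element the lowest order whose estimated error meets tol."""
    orders = {}
    for name, element in elements.items():
        p = p_min
        while p < p_max and estimate_truncation_error(element, p) > tolerance:
            p += 1
        orders[name] = p
    return orders

mesh = {
    "boundary_layer_cell": {"error_p1": 1e-1},
    "far_field_cell": {"error_p1": 1e-4},
}
print(adapt_orders(mesh, tolerance=1e-3))
# high order near the boundary layer, low order in the far field

The anisotropic variant would simply keep one such loop per coordinate direction, using the directionally decoupled error estimates.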

Relevance:

100.00%

Publisher:

Abstract:

The mathematical underpinning of the pulse width modulation (PWM) technique lies in the attempt to represent “accurately” harmonic waveforms using only square pulses of a fixed height. The accuracy can be measured using many norms, but the quality of the approximation of the analog signal (a harmonic form) by a digital one (simple pulses of a fixed high voltage level) requires the elimination of high-order harmonics in the error term. The most important practical problem is the “accurate” reproduction of a sine wave using the same number of pulses as the number of high harmonics eliminated. In this paper we describe a complete solution of the PWM problem using Padé approximations, orthogonal polynomials, and solitons. The main result of the paper is the characterization of the discrete pulses answering the general PWM problem in terms of the manifold of all rational solutions to the Korteweg-de Vries equation.
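For readers unfamiliar with the problem, the sketch below solves the classical selective-harmonic-elimination equations for a quarter-wave-symmetric two-level waveform; this is a standard textbook formulation used here as a stand-in, not the Padé/soliton construction of the paper. Three switching angles are chosen so that the fundamental reaches a target amplitude while the 5th and 7th harmonics vanish.

# Selective harmonic elimination (illustrative stand-in): solve for switching
# angles of a bipolar, quarter-wave-symmetric PWM waveform.
import numpy as np
from scipy.optimize import fsolve

def harmonic(n, angles):
    """n-th odd-harmonic amplitude of the two-level waveform (per unit)."""
    s = sum((-1) ** k * np.cos(n * a) for k, a in enumerate(angles, start=1))
    return (4.0 / (n * np.pi)) * (1.0 + 2.0 * s)

def residuals(angles, target_fundamental, eliminated=(5, 7)):
    eqs = [harmonic(1, angles) - target_fundamental]
    eqs += [harmonic(n, angles) for n in eliminated]
    return eqs

initial = np.array([0.2, 0.6, 1.0])                # radians, ascending guess
angles = fsolve(residuals, initial, args=(0.8,))   # 0.8 p.u. fundamental
print("switching angles (rad):", np.round(angles, 4))
print("residual 5th and 7th harmonics:",
      round(harmonic(5, angles), 6), round(harmonic(7, angles), 6))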

Relevance:

100.00%

Publisher:

Abstract:

How do secretory proteins and other cargo targeted to post-Golgi locations traverse the Golgi stack? We report immunoelectron microscopy experiments establishing that a Golgi-restricted SNARE, GOS 28, is present in the same population of COPI vesicles as anterograde cargo marked by vesicular stomatitis virus glycoprotein, but is excluded from the COPI vesicles containing retrograde-targeted cargo (marked by KDEL receptor). We also report that GOS 28 and its partnering t-SNARE heavy chain, syntaxin 5, reside together in every cisterna of the stack. Taken together, these data raise the possibility that the anterograde cargo-laden COPI vesicles, retained locally by means of tethers, are inherently capable of fusing with neighboring cisternae on either side. If so, quanta of exported proteins would transit the stack in GOS 28–COPI vesicles via a bidirectional random walk, entering at the cis face and leaving at the trans face and percolating up and down the stack in between. Percolating vesicles carrying both post-Golgi cargo and Golgi residents up and down the stack would reconcile disparate observations on Golgi transport in cells and in cell-free systems.
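A toy simulation of the proposed bidirectional random walk (our illustration, not part of the paper): vesicles enter at the cis face, step randomly between neighbouring cisternae, and are counted as exported once they pass the trans face. Even an unbiased walk of this kind yields net cis-to-trans transit, with a broad spread of transit times.

# Toy random-walk model of vesicle transit through a stack of cisternae.
import random

def transit_time(n_cisternae=6, max_steps=10_000):
    position = 0                        # cis-most cisterna
    for step in range(1, max_steps + 1):
        position += random.choice((-1, 1))
        position = max(position, 0)     # reflected at the cis face
        if position >= n_cisternae:     # exported at the trans face
            return step
    return None

random.seed(1)
times = [t for t in (transit_time() for _ in range(2000)) if t is not None]
print("mean transit (steps):", sum(times) / len(times))
print("min/max transit (steps):", min(times), max(times))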

Relevance:

100.00%

Publisher:

Abstract:

Citizens demand more and more data for making decisions in their daily lives. Therefore, mechanisms that allow citizens to understand and analyze linked open data (LOD) in a user-friendly manner are highly required. To this end, the concept of Open Business Intelligence (OpenBI) is introduced in this position paper. OpenBI enables non-expert users to (i) analyze and visualize LOD, thus generating actionable information by means of reporting, OLAP analysis, dashboards or data mining; and (ii) share the newly acquired information as LOD to be reused by anyone. One of the most challenging issues of OpenBI is related to data mining, since non-experts (such as citizens) need guidance during the preprocessing and application of mining algorithms, due to the complexity of the mining process and the low quality of the data sources. This is even worse when dealing with LOD, not only because of the different kinds of links among the data, but also because of its high dimensionality. As a consequence, in this position paper we advocate that data mining for OpenBI requires data-quality-aware mechanisms for guiding non-expert users in obtaining and sharing the most reliable knowledge from the available LOD.

Relevance:

100.00%

Publisher:

Abstract:

The use of RGB-D sensors for mapping and recognition tasks in robotics or, in general, for virtual reconstruction has increased in recent years. The key aspect of these sensors is that they provide both depth and color information using the same device. In this paper, we present a comparative analysis of the most important methods used in the literature for the registration of consecutive RGB-D video frames in static scenarios. The analysis begins by explaining the characteristics of the registration problem, dividing it into two representative applications: scene modeling and object reconstruction. Then, a detailed experimental study is carried out to determine the behavior of the different methods depending on the application. For both applications, we used standard datasets and a new one built for object reconstruction.
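One representative registration method, point-to-point ICP, can be run with the Open3D library as shown below; this is an illustrative stand-in rather than the code evaluated in the paper, and the two synthetic "frames" are an assumption made for the example.

# Point-to-point ICP between two synthetic RGB-D-like frames using Open3D.
import numpy as np
import open3d as o3d

rng = np.random.default_rng(0)
points = rng.uniform(-1.0, 1.0, size=(500, 3))

source = o3d.geometry.PointCloud()
source.points = o3d.utility.Vector3dVector(points)

true_shift = np.array([0.05, 0.0, 0.0])     # ground-truth inter-frame motion
target = o3d.geometry.PointCloud()
target.points = o3d.utility.Vector3dVector(points + true_shift)

result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.2,
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
print("estimated transform:")
print(np.round(result.transformation, 3))
print("fitness:", result.fitness)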

Relevance:

100.00%

Publisher:

Abstract:

The multiobjective optimization model studied in this paper deals with the simultaneous minimization of finitely many linear functions subject to an arbitrary number of uncertain linear constraints. We first provide a radius of robust feasibility guaranteeing the feasibility of the robust counterpart under affine data parametrization. We then establish dual characterizations of the robust solutions of our model that are immunized against data uncertainty, by way of characterizing the corresponding solutions of the robust counterpart of the model. Consequently, we present robust duality theorems relating the value of the robust model to the corresponding value of its dual problem.
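In generic notation (ours, not necessarily the paper's), the uncertain model and its robust counterpart can be written as follows.

% Uncertain multiobjective LP (UP) and its robust counterpart (RP).
\begin{align*}
\text{(UP)}\quad & \min_{x \in \mathbb{R}^n} \ \bigl(c_1^{\top}x,\ \dots,\ c_p^{\top}x\bigr)
  \quad \text{s.t.} \quad a_j^{\top}x \le b_j,\ \ j \in J,
  \quad \text{with } (a_j,b_j) \in \mathcal{U}_j \text{ uncertain},\\[4pt]
\text{(RP)}\quad & \min_{x \in \mathbb{R}^n} \ \bigl(c_1^{\top}x,\ \dots,\ c_p^{\top}x\bigr)
  \quad \text{s.t.} \quad a_j^{\top}x \le b_j \ \ \forall\,(a_j,b_j) \in \mathcal{U}_j,\ j \in J .
\end{align*}

A robust solution is then an efficient point of (RP), and the radius of robust feasibility is, informally, the largest size of the uncertainty sets for which (RP) still admits a feasible point.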

Relevance:

100.00%

Publisher:

Abstract:

Multiobjective Generalized Disjunctive Programming (MO-GDP) optimization has been used for the synthesis of an important industrial process, isobutane alkylation. The two objective functions to be simultaneously optimized are the environmental impact, determined by means of LCA (Life Cycle Assessment), and the economic potential of the process. The main reason for including the minimization of the environmental impact in the optimization is the widespread environmental concern of the general public. For the resolution of the problem we employed a hybrid simulation-optimization methodology, i.e., the superstructure of the process was developed directly in a chemical process simulator connected to a state-of-the-art optimizer. The model was formulated as a GDP and solved using a logic algorithm that avoids the reformulation as an MINLP (Mixed-Integer Nonlinear Programming) problem. Our research yielded Pareto curves composed of three different configurations, where the environmental impact has been assessed by two different LCA parameters: global warming potential and Eco-indicator 99.
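The flavour of such a GDP superstructure can be conveyed with a small Pyomo sketch; the technology names, coefficients and the epsilon-constraint treatment of the second objective are invented for illustration, the real superstructure is built inside a process simulator, and no solver is invoked here.

# Toy GDP model in Pyomo: one discrete choice between two hypothetical
# technologies, cost minimised, environmental impact bounded (epsilon-constraint).
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           NonNegativeReals, minimize)
from pyomo.gdp import Disjunct, Disjunction

m = ConcreteModel()
m.x = Var(within=NonNegativeReals, bounds=(0, 10))   # production level
m.cost = Var()                                       # economic objective
m.impact = Var()                                     # LCA-type indicator

m.tech_a = Disjunct()
m.tech_a.cost_eq = Constraint(expr=m.cost == 2.0 * m.x + 10.0)
m.tech_a.impact_eq = Constraint(expr=m.impact == 1.0 * m.x)

m.tech_b = Disjunct()
m.tech_b.cost_eq = Constraint(expr=m.cost == 1.5 * m.x + 25.0)
m.tech_b.impact_eq = Constraint(expr=m.impact == 0.6 * m.x)

m.choose_tech = Disjunction(expr=[m.tech_a, m.tech_b])

m.demand = Constraint(expr=m.x >= 3.0)
m.impact_cap = Constraint(expr=m.impact <= 4.0)      # epsilon-constraint
m.obj = Objective(expr=m.cost, sense=minimize)

m.pprint()   # a logic-based GDP solver would then be applied, as in the paper

Sweeping the bound in the epsilon-constraint and re-solving is one simple way to trace out a Pareto curve like the ones reported.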

Relevance:

100.00%

Publisher:

Abstract:

Change Adaptation: Open or Closed? Paper read at the Second African International Economic Law Network Conference, 7-8 March 2013, Wits School of Law, Johannesburg, South Africa. In a time of rapid convergence of technologies, goods, services, hardware and software, the traditional classifications that informed past treaties fail to remove legal uncertainty or to advance welfare and innovation. As a result, we turn our attention to the role and needs of the public domain at the interface of existing intellectual property rights and new modes of creation, production and distribution of goods and services. The concept of open culture would have it that knowledge should be spread freely and that its growth should come from further developing existing works on the basis of sharing and collaboration, without the shackles of intellectual property. Intellectual property clauses find their way into regional, multilateral, bilateral and free trade agreements more often than not, and can cause public discontent and incite unrest. Many of these intellectual property clauses raise the bar on protection beyond the clauses found in the WTO Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS). In this paper we address the question of the protection and development of the public domain in the service of open innovation, in accordance with Article 15 of the International Covenant on Economic, Social and Cultural Rights (ICESCR) and in light of the Objectives (Article 7) and Principles (Article 8) set forth in TRIPS. Once the areas of divergence and reinforcement between the intellectual property regime and human rights have been discussed, we consider options that allow for innovation and prosperity in the Global South. We conclude by discussing possible policy developments.

Relevance:

100.00%

Publisher:

Abstract:

We present a 5.3-Myr stack (the "LR04" stack) of benthic δ18O records from 57 globally distributed sites aligned by an automated graphic correlation algorithm. This is the first benthic δ18O stack composed of more than three records to extend beyond 850 ka, and we use its improved signal quality to identify 24 new marine isotope stages in the early Pliocene. We also present a new LR04 age model for the Pliocene-Pleistocene derived from tuning the δ18O stack to a simple ice model based on 21 June insolation at 65°N. Stacked sedimentation rates provide additional age model constraints to prevent overtuning. Despite a conservative tuning strategy, the LR04 benthic stack exhibits significant coherency with insolation in the obliquity band throughout the entire 5.3 Myr and in the precession band for more than half of the record. The LR04 stack contains significantly more variance in benthic δ18O than previously published stacks of the late Pleistocene as a result of higher-resolution records, a better alignment technique, and a greater percentage of records from the Atlantic. Finally, the relative phases of the stack's 41- and 23-kyr components suggest that the precession component of δ18O from 2.7 to 1.6 Ma is primarily a deep-water temperature signal and that the phase of the δ18O precession response changed suddenly at 1.6 Ma.
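The coherency statement can be illustrated with a toy spectral calculation on synthetic series; the numbers below are not the LR04 data.

# Coherence between a synthetic insolation-like forcing and a synthetic
# benthic record, read off near the 41-kyr obliquity band.
import numpy as np
from scipy.signal import coherence

dt = 1.0                                   # kyr per sample
t = np.arange(0.0, 5300.0, dt)             # 5.3 Myr of synthetic record
rng = np.random.default_rng(0)

obliquity = np.cos(2 * np.pi * t / 41.0)
precession = np.cos(2 * np.pi * t / 23.0)
forcing = obliquity + 0.5 * precession
record = 0.8 * obliquity + 0.2 * precession + 0.7 * rng.standard_normal(t.size)

f, coh = coherence(forcing, record, fs=1.0 / dt, nperseg=1024)
band = np.argmin(np.abs(f - 1.0 / 41.0))
print(f"coherence near the 41-kyr band: {coh[band]:.2f}")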

Relevance:

100.00%

Publisher:

Abstract:

As with all new ideas, the concept of Open Innovation requires extensive empirical investigation, testing and development. This paper analyzes Procter and Gamble's 'Connect and Develop' strategy as a case study of the major organizational and technological changes associated with open innovation. It argues that although some of the organizational changes accompanying open innovation are beginning to be described in the literature, more analysis is warranted into the ways technological changes have facilitated open innovation strategies, particularly in relation to new product development. Information and communications technologies enable the exchange of distributed sources of information in the open innovation process. The case study further shows that a suite of new technologies for data mining, simulation, prototyping and visual representation, what we call 'innovation technology', helps to support open innovation in Procter and Gamble. The paper concludes with a suggested research agenda for furthering understanding of the role played by, and the consequences of, this technology.

Relevance:

100.00%

Publisher:

Abstract:

The estimation of a concentration-dependent diffusion coefficient in a drying process is known as an inverse coefficient problem. The solution is sought in a setting where the space-averaged concentration is known as a function of time (mass-loss monitoring). The problem is stated as the minimization of a functional, and gradient-based algorithms are used to solve it. Many numerical and experimental examples that demonstrate the effectiveness of the proposed approach are presented. Thin-slab drying was carried out in an isothermal drying chamber built in our laboratory. The diffusion coefficients of fructose obtained with the present method are compared with existing literature results.
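A compact sketch of the inverse procedure on synthetic data is given below; the exponential parametrisation of D(c), the grid, and the use of a derivative-free optimiser in place of the paper's gradient-based algorithms are all illustrative assumptions.

# Recover a concentration-dependent diffusivity D(c) = D0*exp(beta*c) by
# matching the simulated space-averaged concentration (mass-loss curve) to data.
import numpy as np
from scipy.optimize import minimize

L, nx, nt, t_end = 1.0, 51, 4000, 1.0
x = np.linspace(0.0, L, nx); dx = x[1] - x[0]; dt = t_end / nt

def average_concentration(params, sample_every=400):
    d0, beta = params
    c = np.ones(nx); c[0] = c[-1] = 0.0          # dried surfaces
    averages = []
    for step in range(1, nt + 1):
        d = d0 * np.exp(beta * c)
        d_face = 0.5 * (d[:-1] + d[1:])          # diffusivity at cell faces
        flux = d_face * np.diff(c) / dx
        c[1:-1] += dt * np.diff(flux) / dx
        c[0] = c[-1] = 0.0
        if step % sample_every == 0:
            averages.append(c.mean())
    return np.array(averages)

true_params = (0.05, 1.0)
data = average_concentration(true_params)        # synthetic "measurements"

def misfit(params):
    d_max = params[0] * np.exp(max(params[1], 0.0))
    if params[0] <= 0 or d_max * dt > 0.4 * dx * dx:
        return 1e6                               # reject invalid/unstable trials
    return float(np.sum((average_concentration(params) - data) ** 2))

fit = minimize(misfit, x0=[0.02, 0.3], method="Nelder-Mead")
print("recovered (D0, beta):", np.round(fit.x, 3), " true:", true_params)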

Relevance:

100.00%

Publisher:

Abstract:

Creativity is increasingly recognised as an essential component of engineering design. This paper describes an exploratory study into the nature and importance of creativity in engineering design problem solving in relation to the possible impact of software design tools. The first stage of the study involved an empirical investigation in the form of a case study of the use of standard CAD tool sets and the development of a systems engineering software support tool. It was found that there were several ways in which CAD influenced the creative process, including enhancing visualisation and communication, premature fixation, circumscribed thinking and bounded ideation. The tool development experience uncovered the difficulty in supporting creative processes from the developer's perspective. The issues were the necessity of making assumptions, achieving a balance between structure and flexibility, and the pitfalls of satisfying user wants and needs. The second part of the study involved the development of a model of the creative problem solving process in engineering design. This provided a possible explanation for why purpose designed engineering software tools might encourage an analytical problem solving approach and discourage a more creative approach.

Relevance:

100.00%

Publisher:

Abstract:

This is a study of heat transfer in a lift-off furnace employed in the batch annealing of a stack of coils of steel strip. The objective of the project is to investigate the various factors which govern the furnace design and the heat transfer resistances, so as to reduce the time of the annealing cycle and hence minimize the operating costs. The work involved mathematical modelling of the patterns of gas flow and the modes of heat transfer. These models cover: heat conduction in the steel coils; convective heat transfer in the plates separating the coils in the stack and in other parts of the furnace; and radiative and convective heat transfer in the furnace, using the long-furnace model. An important part of the project is the development of numerical methods and computations to solve the transient models. A limited number of temperature measurements was available from experiments on a test coil in an industrial furnace, and the mathematical model agreed well with these data. The model has been used to show the following characteristics of annealing furnaces, and to suggest further developments which would lead to significant savings:

- The location of the limiting temperature in a coil is nearer to the hollow core than to the outer periphery.
- Thermal expansion of the steel tends to open the coils, reduces their thermal conductivity in the radial direction, and hence prolongs the annealing cycle. Increasing the tension in the coils and/or heating from the core would overcome this heat transfer resistance.
- The shape and dimensions of the convective channels in the plates have a significant effect on heat convection in the stack. An optimal channel design is shown to have a width-to-height ratio of 9.
- Increasing the cooling rate, by using a fluidized bed instead of the normal shell-and-tube exchanger, would shorten the cooling time by about 15% but increase the temperature differential in the stack.
- For a specific charge weight, a stack of different-sized coils will have a shorter annealing cycle than one of equally sized coils, provided that production constraints allow the stacking order to be optimal.
- Recycling hot flue gases to the firing zone of the furnace would decrease the thermal efficiency by up to 30% but would decrease the heating time by about 26%.
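The first finding listed above can be illustrated with a toy radial conduction model of a single coil; the material properties, surface coefficients and geometry are assumed values, not those of the thesis.

# Transient radial conduction in one coil, treated as a hollow cylinder with a
# reduced effective radial diffusivity, heated strongly at the outer face and
# weakly at the bore. The coldest ("limiting") point ends up near the core.
import numpy as np

r_in, r_out, nr = 0.25, 0.9, 60            # m, coil bore and outer radius
alpha = 1.5e-6                             # m^2/s, reduced radial diffusivity
r = np.linspace(r_in, r_out, nr); dr = r[1] - r[0]
dt = 0.4 * dr * dr / alpha                 # explicit stability limit

T = np.full(nr, 20.0)                      # degC, initial coil temperature
T_gas = 700.0                              # degC, protective gas temperature
bi_out, bi_in = 0.6, 0.15                  # assumed surface heat-transfer numbers

for _ in range(3000):                      # roughly one day of heating here
    lap = np.zeros(nr)
    lap[1:-1] = ((T[2:] - 2 * T[1:-1] + T[:-2]) / dr**2
                 + (T[2:] - T[:-2]) / (2 * dr * r[1:-1]))   # cylindrical term
    T[1:-1] += alpha * dt * lap[1:-1]
    T[-1] = (T[-2] + bi_out * T_gas) / (1 + bi_out)         # hot outer face
    T[0] = (T[1] + bi_in * T_gas) / (1 + bi_in)             # weakly heated bore

i_min = int(np.argmin(T))
print(f"limiting (coldest) point at r = {r[i_min]:.2f} m, T = {T[i_min]:.0f} degC")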