982 results for "Location-aware process modeling"


Relevance: 30.00%

Abstract:

The blast furnace is the world's main ironmaking production unit; it converts iron ore, together with coke and hot blast, into liquid iron (hot metal), which is used for steelmaking. The furnace acts as a counter-current reactor charged with layers of raw material of very different gas permeability. The arrangement of these layers, the burden distribution, is the most important factor influencing the gas flow conditions inside the furnace, which in turn dictate the efficiency of the heat transfer and reduction processes. For proper control, the furnace operators should know the overall conditions in the furnace and be able to predict how control actions affect its state. However, because of the high temperatures and pressure, the hostile atmosphere and mechanical wear, internal variables are very difficult to measure. Instead, the operators have to rely extensively on measurements obtained at the boundaries of the furnace and base their decisions on heuristic rules and results from mathematical models. The distribution of the burden materials is particularly difficult to understand because of the complex behavior of the particulate materials during charging. The aim of this doctoral thesis is to clarify some aspects of burden distribution and to develop tools that can aid decision-making in the control of the burden and gas distribution in the blast furnace. A relatively simple mathematical model was created to simulate the distribution of the burden material with a bell-less top charging system. The model is fast, so the operators can use it to gain understanding of how layers form under different charging programs. The results were verified against findings from charging experiments using a small-scale charging rig in the laboratory. A basic gas flow model was developed that uses the results of the burden distribution model to estimate the gas permeability of the upper part of the blast furnace.
This combined formulation for gas and burden distribution made it possible to search for the best combination of charging parameters to achieve a target gas temperature distribution. Because this mathematical task is discontinuous and non-differentiable, a genetic algorithm was applied to solve the optimization problem. The method was shown to evolve optimal charging programs that fulfilled the target conditions. Although the burden distribution model provides information about the layer structure, it neglects some effects that influence the results, such as mixed-layer formation and coke collapse. A more accurate numerical method for studying particle mechanics, the Discrete Element Method (DEM), was therefore used to study some aspects of the charging process more closely. Model charging programs were simulated using DEM and compared with the results from small-scale experiments. The mixed layer was defined, and its voidage was estimated to be about 12% lower than that of layers of the individual burden components. Finally, a model for predicting the extent of coke collapse when heavier pellets are charged over a layer of lighter coke particles was formulated based on slope stability theory; it was used to update the coke layer distribution after charging in the mathematical model. Results from DEM simulations and charging experiments for a set of charging programs were used in designing this revision. The findings from the coke collapse analysis can be used to design charging programs with more stable coke layers.
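The optimization layer described above can be sketched as follows. This is a minimal, hypothetical stand-in, not the thesis's model: the "charging program" is a vector of chute-angle settings, and the mapping from angles to a radial gas temperature profile is an invented smooth function. Only the genetic-algorithm machinery (selection, one-point crossover, Gaussian mutation) is illustrated.

```python
import random

# Hypothetical toy stand-in for the burden/gas model: each charging program
# is a vector of chute-angle settings, mapped to a radial gas temperature
# profile by a made-up function (the real model is far more detailed).
N_PARAMS = 5                         # chute angle settings, one per radial ring
TARGET = [150, 180, 220, 260, 300]   # desired radial gas temperatures (deg C)

def temperature_profile(angles):
    # Placeholder physics: temperature rises with chute angle, with a weak
    # coupling to the neighboring ring (purely illustrative).
    return [100 + 4 * a + 0.5 * angles[max(i - 1, 0)]
            for i, a in enumerate(angles)]

def fitness(angles):
    # Negative sum of squared deviations from the target profile.
    return -sum((t, g)[0] ** 0 * (t - g) ** 2
                for t, g in zip(temperature_profile(angles), TARGET))

def evolve(pop_size=40, generations=200, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 60) for _ in range(N_PARAMS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, N_PARAMS)    # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:              # Gaussian mutation, clipped
                i = rng.randrange(N_PARAMS)
                child[i] = min(60, max(0, child[i] + rng.gauss(0, 2)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print([round(a, 1) for a in best], round(-fitness(best), 2))
```

The discontinuous, non-differentiable objective mentioned in the abstract is exactly why a population-based search like this is preferred over gradient methods.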

Relevance: 30.00%

Abstract:

The Caspian Sea attracts more attention today than in the past because it is the largest lake in the world and holds very large oil and gas resources. Large-scale oil pollution, driven by the expansion of oil exploration and drilling activities, not only causes problems for coastal facilities but also severely damages the environment. In the first stage of this research, the location and quality of the oil resources offshore and onshore were determined, and the depletion factors acting on an oil spill, such as evaporation, emulsification, dissolution and sedimentation, were studied. In the second stage, a sea hydrodynamics model was formulated and tested by establishing the governing equations for sea currents and for pollutant transport at the sea surface, and by identifying the main parameters in these equations, such as the Coriolis force, bottom friction and wind stress. The model was solved using a cell-vertex finite volume method on an unstructured mesh. With the validated model, the currents of the Caspian Sea in the different seasons of the year were determined. In the final stage, different scenarios of oil spill movement in the Caspian Sea under various conditions were investigated by modeling three-dimensional oil spill motion at the surface (driven by sea currents) and at depth (driven by buoyancy, drag and gravity forces), applying the main depletion factors mentioned above.
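The force balance on a subsurface droplet can be illustrated with a minimal sketch. This is not the study's model: it considers a single small droplet under buoyancy, gravity and Stokes drag only, and every parameter value below is an assumed round number rather than data from the research.

```python
# Illustrative sketch: vertical motion of one small subsurface oil droplet
# under the buoyancy, gravity and drag forces listed in the abstract.
# All parameter values are assumptions for illustration.
RHO_WATER = 1025.0   # seawater density, kg/m^3 (assumed)
RHO_OIL = 900.0      # oil density, kg/m^3 (assumed)
G = 9.81             # gravitational acceleration, m/s^2
MU = 1.2e-3          # dynamic viscosity of water, Pa*s (assumed)

def rise_velocity(diameter):
    """Terminal rise velocity from the Stokes-regime force balance:
    buoyancy - gravity = drag  =>  (rho_w - rho_o) g V = 3 pi mu d v,
    which gives v = (rho_w - rho_o) g d^2 / (18 mu)."""
    return (RHO_WATER - RHO_OIL) * G * diameter ** 2 / (18.0 * MU)

def time_to_surface(depth, diameter, dt=1.0):
    """Euler integration of droplet depth, assuming it moves at terminal
    velocity (inertia is negligible for such small particles)."""
    z, t = depth, 0.0
    v = rise_velocity(diameter)
    while z > 0.0:
        z -= v * dt
        t += dt
    return t

# A 0.5 mm droplet released at 50 m depth:
v = rise_velocity(0.5e-3)
print(f"rise velocity: {v:.4f} m/s, "
      f"time to surface: {time_to_surface(50.0, 0.5e-3):.0f} s")
```

A full spill model would superimpose the horizontal advection by the simulated currents and the depletion terms (evaporation, emulsification, etc.) on top of this vertical motion.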

Relevance: 30.00%

Abstract:

The spike-diffuse-spike (SDS) model describes a passive dendritic tree with active dendritic spines. Spine-head dynamics is modelled with a simple integrate-and-fire process, whilst communication between spines is mediated by the cable equation. Here we develop a computational framework that allows the study of multiple spiking events in a network of such spines embedded in a simple one-dimensional cable. This system is shown to support saltatory waves as a result of the discrete distribution of spines. Moreover, we demonstrate one of the ways to incorporate noise into the spine-head whilst retaining computational tractability of the model. The SDS model sustains a variety of propagating patterns.
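The spine-head side of the SDS model can be sketched as a leaky integrate-and-fire unit. This is a minimal illustration with assumed constants, not the parameterization of the paper: each spine integrates the voltage delivered by the cable and fires back into it when a threshold is crossed, with a refractory period that makes saltatory propagation possible.

```python
import math

class SpineHead:
    """Leaky integrate-and-fire spine head; all constants are illustrative."""
    def __init__(self, tau=2.0, threshold=1.0, refractory=5.0):
        self.tau = tau                # membrane time constant (ms)
        self.threshold = threshold    # firing threshold (dimensionless)
        self.refractory = refractory  # refractory period (ms)
        self.v = 0.0
        self.last_spike = -math.inf

    def step(self, cable_input, t, dt):
        """Advance the spine head by dt; return True if it fires at time t."""
        if t - self.last_spike < self.refractory:
            self.v = 0.0              # held at rest during refractory period
            return False
        self.v += dt * (-self.v / self.tau + cable_input)
        if self.v >= self.threshold:
            self.v = 0.0
            self.last_spike = t
            return True
        return False

# Drive one spine with a constant suprathreshold input and record spike times.
spine = SpineHead()
spikes = [k * 0.1 for k in range(1000) if spine.step(1.0, k * 0.1, 0.1)]
print(len(spikes), [round(s, 1) for s in spikes[:3]])
```

In the full model the `cable_input` term would come from solving the cable equation between spines, and the fired spike would inject a current pulse back into the cable at the spine's location.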

Relevance: 30.00%

Abstract:

The production of artistic prints in the sixteenth- and seventeenth-century Netherlands was an inherently social process. Turning out prints at any reasonable scale depended on the fluid coordination between designers, plate cutters, and publishers, roles that, by the sixteenth century, were considered distinguished enough to merit distinct credits engraved on the plates themselves: invenit, fecit/sculpsit, and excudit. While any one designer, plate cutter, or publisher could potentially exercise a great deal of influence over the production of a single print, their individual decisions (Whom to select as an engraver? What subjects to create for a print design? What market to sell to?) would have been variously constrained or encouraged by their position in this larger network (Who do they already know? And who, in turn, do their contacts know?). This dissertation addresses the impact of these constraints and affordances through the novel application of computational social network analysis to major databases of surviving prints from this period. This approach is used to evaluate several questions about trends in early modern print production practices that have not been satisfactorily addressed by traditional literature based on case studies alone: Did the social capital demanded by print production result in centralized or distributed production of prints? When, and to what extent, did printmakers and publishers in the Low Countries favor international over domestic collaborators? And were printmakers under the same pressure as painters to specialize in particular artistic genres? This dissertation ultimately suggests how simple professional incentives endemic to the practice of printmaking may, at large scales, have resulted in quite complex patterns of collaboration and production. The framework of network analysis surfaces the role of certain printmakers who tend to be neglected in aesthetically focused histories of art.
This approach also highlights important issues concerning art historians’ balancing of individual influence versus the impact of longue durée trends. Finally, this dissertation also raises questions about the current limitations and future possibilities of combining computational methods with cultural heritage datasets in the pursuit of historical research.
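The kind of question this network approach answers can be sketched on a toy example. The names and collaborations below are invented, not records from the databases the dissertation uses; the point is only that a simple degree measure over a bipartite engraver-publisher network already distinguishes centralized from distributed production.

```python
# Toy illustration with invented names: count distinct partners per node in a
# bipartite engraver-publisher collaboration network. A few high-degree
# publishers would indicate centralized production; many low-degree ones,
# distributed production.

# Each tuple: (engraver, publisher) credited together on a surviving print.
collaborations = [
    ("Engraver_A", "Publisher_X"), ("Engraver_A", "Publisher_X"),
    ("Engraver_B", "Publisher_X"), ("Engraver_C", "Publisher_X"),
    ("Engraver_C", "Publisher_Y"), ("Engraver_D", "Publisher_Y"),
]

# Build the set of distinct partners for every node (repeat prints between
# the same pair do not add new edges).
partners = {}
for engraver, publisher in collaborations:
    partners.setdefault(publisher, set()).add(engraver)
    partners.setdefault(engraver, set()).add(publisher)

degree = {node: len(p) for node, p in partners.items()}
print(sorted(degree.items(), key=lambda kv: -kv[1]))
```

On real data one would weight edges by print counts and use richer measures (betweenness, community detection), but the data structure is the same.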

Relevance: 30.00%

Abstract:

This work represents ongoing efforts to study high-enthalpy carbon dioxide flows in anticipation of the upcoming Mars Science Laboratory (MSL) and future missions to the red planet. The work is motivated by observed anomalies between experimental and numerical studies in hypervelocity impulse facilities for high-enthalpy carbon dioxide flows. In this work, experiments are conducted in the Hypervelocity Expansion Tube (HET) which, by virtue of its flow acceleration process, exhibits minimal freestream dissociation in comparison to reflected shock tunnels. This simplifies the comparison with computational results, as freestream dissociation and considerable thermochemical excitation can be neglected. Shock shapes of the MSL aeroshell and spherical geometries are compared with numerical simulations incorporating detailed CO2 thermochemical modeling. The shock stand-off distance has been identified in the past as sensitive to the thermochemical state and, as such, is used here as an experimental measurable for comparison with CFD and two different theoretical models. It is seen that models based upon binary scaling assumptions are not applicable for the low-density, small-scale conditions of the current work. Mars Science Laboratory shock shapes at zero angle of attack are also in good agreement with available data from the LENS X expansion tunnel facility, confirming that results are facility-independent for the same type of flow acceleration, and indicating that the flow velocity is a suitable first-order matching parameter for comparative testing. In an effort to address surface chemistry issues arising from high-enthalpy carbon dioxide ground-test-based experiments, spherical stagnation point and aeroshell heat transfer distributions are also compared with simulation. Very good agreement between experiment and CFD is seen for all shock shapes, and heat transfer distributions fall within the non-catalytic and super-catalytic solutions.
We also examine spatial temperature profiles in the non-equilibrium relaxation region behind a stationary shock wave in a hypervelocity air Mach 7.42 freestream. The normal shock wave is established through a Mach reflection from an opposing wedge arrangement. Schlieren images confirm that the shock configuration is steady and the location is repeatable. Emission spectroscopy is used to identify dissociated species and to make vibrational temperature measurements using both the nitric oxide and the hydroxyl radical A-X band sequences. Temperature measurements are presented at selected locations behind the normal shock. LIFBASE is used as the simulation spectrum software for OH temperature-fitting; however, the need to access higher vibrational and rotational levels for NO leads to the use of an in-house developed algorithm. For NO, results demonstrate the contribution of higher vibrational and rotational levels to the spectra at the conditions of this study. Very good agreement is achieved between the experimentally measured NO vibrational temperatures and calculations performed using an existing state-resolved, three-dimensional forced harmonic oscillator thermochemical model. The measured NO A-X vibrational temperatures are significantly higher than the OH A-X temperatures.
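The idea behind extracting a vibrational temperature from band intensities can be sketched with a Boltzmann plot. The energies and populations below are synthetic (a round harmonic level spacing), not the measured NO or OH spectra: if level populations follow n_v ∝ exp(-E_v / k T_vib), then ln(n_v) is linear in E_v with slope -1/(k T_vib), and a least-squares line recovers the temperature.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
T_TRUE = 5000.0      # temperature used to generate the synthetic data, K

# Vibrational level energies (J), roughly harmonic with an assumed spacing,
# and the corresponding Boltzmann-distributed populations.
energies = [v * 4.5e-20 for v in range(5)]
populations = [math.exp(-E / (K_B * T_TRUE)) for E in energies]

# Least-squares line through (E_v, ln n_v); the slope is -1/(k T_vib).
xs, ys = energies, [math.log(n) for n in populations]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
t_vib = -1.0 / (K_B * slope)
print(f"fitted T_vib = {t_vib:.0f} K")
```

Because the synthetic data are noiseless, the fit recovers the input temperature essentially exactly; fitting a real spectrum additionally requires the line strengths and instrument function that tools like LIFBASE provide.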

Relevance: 30.00%

Abstract:

The spike-diffuse-spike (SDS) model describes a passive dendritic tree with active dendritic spines. Spine-head dynamics is modeled with a simple integrate-and-fire process, whilst communication between spines is mediated by the cable equation. In this paper we develop a computational framework that allows the study of multiple spiking events in a network of such spines embedded in a simple one-dimensional cable. In the first instance this system is shown to support saltatory waves with the same qualitative features as those observed in a model with Hodgkin-Huxley kinetics in the spine-head. Moreover, there is excellent agreement with the analytically calculated speed for a solitary saltatory pulse. Upon driving the system with time-varying external input we find that the distribution of spines can play a crucial role in determining spatio-temporal filtering properties. In particular, the SDS model in response to a periodic pulse train shows a positive correlation between spine density and low-pass temporal filtering that is consistent with the experimental results of Rose and Fortune [1999, Mechanisms for generating temporal filters in the electrosensory system. The Journal of Experimental Biology 202, 1281-1289]. Further, we demonstrate the robustness of observed wave properties to natural sources of noise that arise both in the cable and the spine-head, and highlight the possibility of purely noise-induced waves and coherent oscillations.

Relevance: 30.00%

Abstract:

Datacenters have emerged as the dominant form of computing infrastructure over the last two decades. The tremendous increase in the requirements of data analysis has led to a proportional increase in power consumption and datacenters are now one of the fastest growing electricity consumers in the United States. Another rising concern is the loss of throughput due to network congestion. Scheduling models that do not explicitly account for data placement may lead to a transfer of large amounts of data over the network causing unacceptable delays. In this dissertation, we study different scheduling models that are inspired by the dual objectives of minimizing energy costs and network congestion in a datacenter. As datacenters are equipped to handle peak workloads, the average server utilization in most datacenters is very low. As a result, one can achieve huge energy savings by selectively shutting down machines when demand is low. In this dissertation, we introduce the network-aware machine activation problem to find a schedule that simultaneously minimizes the number of machines necessary and the congestion incurred in the network. Our model significantly generalizes well-studied combinatorial optimization problems such as hard-capacitated hypergraph covering and is thus strongly NP-hard. As a result, we focus on finding good approximation algorithms. Data-parallel computation frameworks such as MapReduce have popularized the design of applications that require a large amount of communication between different machines. Efficient scheduling of these communication demands is essential to guarantee efficient execution of the different applications. In the second part of the thesis, we study the approximability of the co-flow scheduling problem that has been recently introduced to capture these application-level demands. 
Finally, we also study the question, "In what order should one process jobs?" Often, precedence constraints specify a partial order over the set of jobs, and the objective is to find suitable schedules that satisfy the partial order. However, in the presence of hard deadline constraints, it may be impossible to find a schedule that satisfies all precedence constraints. In this thesis we formalize different variants of job scheduling with soft precedence constraints and conduct the first systematic study of these problems.
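The tension between hard deadlines and soft precedences can be illustrated on a tiny invented instance (not one from the thesis): schedule jobs on a single machine by earliest deadline first, then count which precedence constraints the resulting order violates.

```python
# Illustrative sketch: single-machine scheduling with soft precedence
# constraints. The instance below is invented; note that respecting the
# precedence "b before a" would force job a to miss its deadline, so some
# precedence must be treated as soft.

# job -> (processing_time, deadline); precedence (a, b) means "a before b".
jobs = {"a": (2, 2), "b": (1, 3), "c": (2, 4), "d": (1, 8)}
precedences = [("b", "a"), ("a", "c"), ("c", "d")]

def edf_schedule(jobs):
    """Order jobs by earliest deadline first; return order and completions."""
    order = sorted(jobs, key=lambda j: jobs[j][1])
    t, completion = 0, {}
    for j in order:
        t += jobs[j][0]
        completion[j] = t
    return order, completion

def violated(order, precedences):
    """Precedence pairs (a, b) that the schedule runs in the wrong order."""
    pos = {j: i for i, j in enumerate(order)}
    return [(a, b) for a, b in precedences if pos[a] > pos[b]]

order, completion = edf_schedule(jobs)
late = [j for j, (_, d) in jobs.items() if completion[j] > d]
print("order:", order)
print("violated precedences:", violated(order, precedences))
print("late jobs:", late)
```

The thesis's variants then ask for schedules optimizing objectives over such violations; this sketch only shows why violations can be unavoidable once deadlines are hard.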