922 results for large-eddy simulation
Abstract:
Globalization has increased the pressure on organizations and companies to operate in the most efficient and economical way. This tendency drives companies to concentrate more and more on their core businesses and to outsource less profitable departments and services to reduce costs. In contrast to earlier times, companies are highly specialized and have a low real net output ratio. To be able to provide consumers with the right products, these companies have to collaborate with other suppliers and form large supply chains. A drawback of large supply chains is high stocks and stockholding costs. This has led to the rapid spread of just-in-time logistics concepts aimed at minimizing stock while simultaneously maintaining high product availability. These competing goals demand high availability of the production systems, so that an incoming order can be processed immediately. Besides design aspects and the quality of the production system, maintenance has a strong impact on production system availability. In the last decades, there have been many attempts to create maintenance models for availability optimization. Most of them concentrated on the availability aspect only, without incorporating further aspects such as logistics and the profitability of the overall system. However, a production system operator's main intention is to optimize the profitability of the production system, not its availability. Thus, classic models, limited to representing and optimizing maintenance strategies in the light of availability, fail. A novel approach is needed that incorporates all financially relevant processes of and around a production system. The proposed model is subdivided into three parts: a maintenance module, a production module and a connection module. This subdivision provides easy maintainability and simple extensibility. Within these modules, all aspects of the production process are modeled. The main part of the work lies in the extended maintenance and failure module, which represents different maintenance strategies but also incorporates the effects of over-maintaining and failed maintenance (maintenance-induced failures). Order release and seizing of the production system are modeled in the production part. Due to computational power limitations, it was not possible to run the simulation and the optimization with the fully developed production model. Thus, the production model was reduced to a black box without a higher degree of detail.
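To make the three-part decomposition concrete, the following is a minimal sketch of how such a module coupling might look; all class names, parameters, and the one-hour time step are illustrative assumptions, not interfaces from the thesis:

```python
# Illustrative skeleton only; names and parameters are hypothetical.
import math
import random

class MaintenanceModule:
    """Failure behaviour plus maintenance effects, including
    maintenance-induced failures from over-maintaining."""
    def __init__(self, failure_rate_per_h, induced_failure_prob):
        self.failure_rate = failure_rate_per_h
        self.induced_failure_prob = induced_failure_prob

    def fails_during(self, dt_h):
        # Exponential failure model over a step of dt_h hours.
        return random.random() < 1.0 - math.exp(-self.failure_rate * dt_h)

class ProductionModule:
    """Black-box production model: a fixed output rate while available."""
    def __init__(self, units_per_h):
        self.units_per_h = units_per_h

    def output(self, hours):
        return self.units_per_h * hours

class ConnectionModule:
    """Couples maintenance state to production and aggregates profit."""
    def __init__(self, maint, prod, revenue_per_unit, downtime_cost_per_h):
        self.maint, self.prod = maint, prod
        self.revenue = revenue_per_unit
        self.downtime_cost = downtime_cost_per_h

    def simulate(self, hours):
        profit = 0.0
        for _ in range(hours):
            if self.maint.fails_during(1.0):
                profit -= self.downtime_cost      # one hour of downtime
            else:
                profit += self.revenue * self.prod.output(1.0)
        return profit

model = ConnectionModule(MaintenanceModule(0.01, 0.02),
                         ProductionModule(40.0), 2.5, 500.0)
print(f"simulated profit over 30 days: {model.simulate(24 * 30):.0f}")
```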
Abstract:
Use of microarray technology often leads to high-dimensional, low-sample-size data settings. Over the past several years, a variety of novel approaches have been proposed for variable selection in this context. However, only a small number of these have been adapted for time-to-event data where censoring is present. Among standard variable selection methods shown both to have good predictive accuracy and to be computationally efficient is the elastic net penalization approach. In this paper, an adaptation of the elastic net approach is presented for variable selection both under the Cox proportional hazards model and under an accelerated failure time (AFT) model. Assessment of the two methods is conducted through simulation studies and through analysis of microarray data obtained from a set of patients with diffuse large B-cell lymphoma where time to survival is of interest. The approaches are shown to match or exceed the predictive performance of a Cox-based and an AFT-based variable selection method. The methods are moreover shown to be much more computationally efficient than their respective Cox- and AFT-based counterparts.
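For reference, the elastic net estimate under the Cox model maximizes the penalized partial log-likelihood (a standard formulation, not this paper's specific notation):

$$\hat{\beta} = \arg\max_{\beta}\; \ell(\beta) - \lambda\left(\alpha\lVert\beta\rVert_1 + \frac{1-\alpha}{2}\lVert\beta\rVert_2^2\right), \qquad \ell(\beta) = \sum_{i:\,\delta_i = 1}\left[x_i^{\top}\beta - \log\sum_{j \in R(t_i)}\exp\left(x_j^{\top}\beta\right)\right],$$

where $\delta_i$ is the event indicator, $R(t_i)$ the risk set at time $t_i$, $\alpha \in [0,1]$ mixes the lasso and ridge penalties, and $\lambda > 0$ controls the overall amount of shrinkage.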
Abstract:
Simulation-based assessment is a popular and frequently necessary approach to the evaluation of statistical procedures. Sometimes overlooked is the ability to take advantage of underlying mathematical relations, and we focus on this aspect. We show how to take advantage of large-sample theory when conducting a simulation, using the analysis of genomic data as a motivating example. The approach uses convergence results to provide an approximation to smaller-sample results that are otherwise available only by simulation. We consider evaluating and comparing a variety of ranking-based methods for identifying the most highly associated SNPs in a genome-wide association study, derive integral equation representations of the pre-posterior distribution of percentiles produced by three ranking methods, and provide examples comparing performance. These results are of interest in their own right and set the framework for a more extensive set of comparisons.
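A minimal sketch of the kind of ranking simulation the large-sample approach accelerates: null SNP statistics are drawn directly from their asymptotic normal distribution rather than from genotype-level data. The SNP count, effect size, and replicate count are illustrative, and the simple |z| ranking below stands in for the three methods the paper compares:

```python
import numpy as np

rng = np.random.default_rng(0)
m, reps = 10_000, 500          # number of SNPs, simulation replicates
effect = 3.0                   # non-centrality of the one associated SNP

percentiles = []
for _ in range(reps):
    z = rng.standard_normal(m)   # null SNPs: z ~ N(0, 1) asymptotically
    z[0] += effect               # associated SNP: z ~ N(effect, 1)
    # Rank of the associated SNP when SNPs are ordered by |z|.
    rank = np.sum(np.abs(z) >= np.abs(z[0]))
    percentiles.append(1 - rank / m)
print("median percentile of the associated SNP:", np.median(percentiles))
```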
Abstract:
A Reynolds-stress turbulence model has been incorporated successfully into the KIVA code, a computational fluid dynamics hydrocode for three-dimensional simulation of fluid flow in engines. The newly implemented Reynolds-stress turbulence model greatly improves the robustness of KIVA, which in its original version offered only eddy-viscosity turbulence models. Validation of the Reynolds-stress turbulence model is accomplished by conducting pipe-flow and channel-flow simulations and comparing the computed results with experimental and direct numerical simulation data. Flows in engines of various geometries and operating conditions are calculated using the model to study the complex flow fields as well as to confirm the model's validity. Results show that the Reynolds-stress turbulence model is able to resolve flow details such as swirl and recirculation bubbles. The model proves to be an appropriate choice for engine simulations, offering consistency and robustness while requiring relatively low computational effort.
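For orientation, a Reynolds-stress model solves a transport equation for each stress component; in generic incompressible form (the KIVA implementation additionally handles compressibility and specific closure choices),

$$\frac{\partial \overline{u_i' u_j'}}{\partial t} + \bar{u}_k \frac{\partial \overline{u_i' u_j'}}{\partial x_k} = P_{ij} + \Phi_{ij} + D_{ij} - \varepsilon_{ij}, \qquad P_{ij} = -\overline{u_i' u_k'}\,\frac{\partial \bar{u}_j}{\partial x_k} - \overline{u_j' u_k'}\,\frac{\partial \bar{u}_i}{\partial x_k},$$

where the production term $P_{ij}$ is exact, while the pressure-strain term $\Phi_{ij}$, diffusion $D_{ij}$, and dissipation $\varepsilon_{ij}$ require modeling. Carrying the individual stresses is what lets such a model resolve anisotropic features such as swirl that eddy-viscosity models smear out.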
Abstract:
The objective of this research was to develop a high-fidelity dynamic model of a parafoil-payload system with respect to its application in the Ship Launched Aerial Delivery System (SLADS). SLADS is a concept in which cargo can be transferred from ship to shore using a parafoil-payload system. It is accomplished in two phases: an initial towing phase, when the glider follows the towing vessel in a passive lift mode, and an autonomous gliding phase, when the system is guided to the desired point. While many previous researchers have analyzed the parafoil-payload system when it is released from another airborne vehicle, limited work has been done on towing the system up from ground or sea. One of the main contributions of this research was the development of a nonlinear dynamic model of a towed parafoil-payload system. After an extensive literature review of existing methods of modeling a parafoil-payload system, a five degree-of-freedom model was developed. The inertial and geometric properties of the system were investigated to predict accurate results in the simulation environment. Since extensive research has been done on the aerodynamic characteristics of paragliders, an existing aerodynamic model was chosen to incorporate the effects of air flow around the flexible paraglider wing. During the towing phase, it is essential that the parafoil-payload system follow the line of the towing vessel's path to prevent an unstable flight condition called 'lockout'. A detailed study of the causes of lockout, its mathematical representation, and the flight conditions and parameters related to lockout constitutes another contribution of this work. A linearized model of the parafoil-payload system was developed and used to analyze the stability of the system about equilibrium conditions. The relationship between the control surface inputs and stability was investigated. In addition to stability of flight, another important objective of SLADS is to tow up the parafoil-payload system as fast as possible. The tension in the tow cable is directly proportional to the rate of ascent of the parafoil-payload system, and lockout is more likely when tow tensions are large. Thus there is a tradeoff between susceptibility to lockout and rapid deployment. Control strategies were also developed for optimal tow-up and to maintain stability in the event of disturbances.
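A generic sketch of the linearized stability analysis mentioned above: numerically linearize nonlinear dynamics x' = f(x, u) about a trim point and inspect the eigenvalues of the Jacobian. The function below is a placeholder stand-in, not the five degree-of-freedom parafoil-payload model:

```python
import numpy as np

def linearize(f, x0, u0, eps=1e-6):
    """Central-difference Jacobian A = df/dx at the equilibrium (x0, u0)."""
    n = len(x0)
    A = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    return A

def example_f(x, u):
    # Stand-in dynamics (a damped oscillator), NOT the parafoil model.
    return np.array([x[1], -2.0 * x[0] - 0.5 * x[1] + u[0]])

A = linearize(example_f, np.zeros(2), np.zeros(1))
eigvals = np.linalg.eigvals(A)
print("stable about trim:", np.all(eigvals.real < 0))
```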
Abstract:
Single-screw extrusion is one of the most widely used processing methods in the plastics industry, which was the third largest manufacturing industry in the United States in 2007 [5]. In order to optimize the single-screw extrusion process, tremendous effort has been devoted over the last fifty years to the development of accurate models, especially for polymer melting in screw extruders. This has led to a good qualitative understanding of the melting process; however, quantitative predictions of melting from various models often show a large error in comparison to experimental data. Thus, even today, process parameters and the geometry of the extruder channel for single-screw extrusion are determined by trial and error. Since new polymers are developed frequently, finding the optimum parameters to extrude these polymers by trial and error is costly and time consuming. In order to reduce the time and experimental work required to optimize the process parameters and the geometry of the extruder channel for a given polymer, the main goal of this research was to perform a coordinated experimental and numerical investigation of melting in screw extrusion. In this work, a full three-dimensional finite element simulation of the two-phase flow in the melting and metering zones of a single-screw extruder was performed by solving the conservation equations for mass, momentum, and energy. The only previous attempt at such a three-dimensional simulation of melting in a screw extruder was made more than twenty years ago; that work had only limited success because of the capabilities of the computers and mathematical algorithms available at the time. The dramatic improvement in computational power and mathematical knowledge now makes it possible to run full 3-D simulations of two-phase flow in single-screw extruders on a desktop PC. In order to verify the numerical predictions from the full 3-D simulations, a detailed experimental study was performed, comprising Maddock screw-freezing experiments, Screw Simulator experiments, and material characterization experiments. Maddock screw-freezing experiments were performed to visualize the melting profile along the single-screw extruder channel for different screw geometry configurations; these melting profiles were compared with the simulation results. Screw Simulator experiments were performed to collect shear stress and melting flux data for various polymers. Cone-and-plate viscometer experiments were performed to obtain the shear viscosity data needed in the simulations. An optimization code was developed to optimize two screw geometry parameters, namely, the screw lead (pitch) and the channel depth in the metering section of a single-screw extruder, such that the output rate of the extruder was maximized without exceeding the maximum temperature specified at the exit of the extruder. This optimization code used a mesh partitioning technique to obtain the flow domain, and the simulations in this flow domain were performed using the code developed to simulate the two-phase flow in single-screw extruders.
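A sketch of the screw-geometry optimization described above: maximize output rate over screw lead and metering-channel depth, subject to a cap on the melt temperature at the exit. Here run_extruder_simulation is a hypothetical stand-in for the full 3-D two-phase-flow solver, and all numbers are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

T_MAX = 220.0  # illustrative exit-temperature limit [deg C]

def run_extruder_simulation(lead_mm, depth_mm):
    # Placeholder response surface; real values come from the FEM solver.
    rate = 0.8 * lead_mm + 12.0 * depth_mm - 0.9 * depth_mm**2
    temp = 180.0 + 0.25 * lead_mm + 4.0 * depth_mm
    return rate, temp

def neg_rate(x):
    return -run_extruder_simulation(*x)[0]

def temp_margin(x):
    return T_MAX - run_extruder_simulation(*x)[1]   # must stay >= 0

res = minimize(neg_rate, x0=[60.0, 4.0],
               bounds=[(30.0, 120.0), (2.0, 10.0)],
               constraints=[{"type": "ineq", "fun": temp_margin}])
print("optimal lead/depth [mm]:", res.x, " output rate:", -res.fun)
```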
Abstract:
A phenomenological transition film evaporation model was introduced into a pore network model, taking into account pore radius, contact angle, non-isothermal interface temperature, microscale fluid flow, and heat and mass transfer. This was achieved by modeling the transition film region of the menisci in each pore throughout the porous transport layer of a half-cell polymer electrolyte membrane (PEM) fuel cell. The model presented in this research is compared with the standard diffusive fuel cell modeling approach to evaporation and shown to surpass the conventional approach in predicting evaporation rates in porous media. The diffusive evaporation models used in many fuel cell transport models assume a constant evaporation rate across the entire liquid-air interface. The transition film model was implemented in the pore network model to address this issue and introduce a pore-size dependency in the evaporation rates. This is accomplished by evaluating the transition film evaporation rates determined by the kinetic model for every pore containing liquid water in the porous transport layer (PTL). The comparison of the transition film and diffusive evaporation models shows that the transition film model predicts increased evaporation rates for smaller pore sizes. This is an important consideration given the micro-scale pore sizes seen in the PTL, and it becomes even more significant for transport in fuel cells containing an MPL, or a large variance in pore size. Experiments were performed to validate the transition film model by monitoring evaporation rates from a non-zero-contact-angle water droplet on a heated substrate. The substrate was a glass plate with a hydrophobic coating to reduce wettability. The tests were performed at constant substrate temperature and relative humidity. The transition film model was able to accurately predict the drop volume over time. By incorporating the transition film model into a pore network model, the evaporation rates present in the PTL can be modeled more accurately. This improves the ability of a pore network model to predict the distribution of liquid water and, ultimately, the level of flooding exhibited in a PTL under various operating conditions.
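One widely used kinetic expression for interfacial evaporation flux of the type referenced above is the Hertz-Knudsen-Schrage equation (whether this exact form is the thesis's kinetic model is an assumption):

$$j = \frac{2\hat{\sigma}}{2 - \hat{\sigma}} \sqrt{\frac{M}{2\pi \bar{R}}} \left( \frac{P_{sat}(T_{lv})}{\sqrt{T_{lv}}} - \frac{P_v}{\sqrt{T_v}} \right),$$

where $\hat{\sigma}$ is the accommodation coefficient, $M$ the molar mass, $\bar{R}$ the universal gas constant, $T_{lv}$ the liquid-vapour interface temperature, and $P_{sat}$, $P_v$, $T_v$ the saturation pressure and the vapour-phase pressure and temperature. In the transition film region, the dependence of the equilibrium vapour pressure on meniscus curvature and disjoining pressure is what introduces the pore-size dependence of the evaporation rate.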
Abstract:
This paper presents a novel conveying principle for the cushioned pick-up and transport of parcels arriving in bulk. The conveying principle is based on a planar carrying medium in the form of a reconfigurable, elastic composite of small-scale conveyor modules. The proposed transport principle, with its peristaltic properties, is intended to quickly dissolve emerging parcel jams and to allow dedicated control of subsets in order to achieve the required throughput within a material flow system. This solution enables a meaningful combination of the operating principles of bulk and unit load handling for picking up and moving parcels as bulk material. The basic functionality of the conveying concept is verified by numerical simulation based on the discrete element method as well as multibody simulation.
Abstract:
The reconstruction of past flash floods in ungauged basins involves a high level of uncertainty, which increases if other processes are involved, such as the transport of large wood material. A major flash flood occurred in 1997 in Venero Claro (Central Spain), causing significant economic losses. The wood material clogged bridge sections, raising the water level upstream. The aim of this study was to reconstruct this event, analysing the influence of woody debris transport on the flood hazard pattern. Because the reach in question was affected by backwater effects due to bridge clogging, using only high water marks or palaeostage indicators may overestimate discharges, and so other methods are required to estimate peak flows. Therefore, the peak discharge was estimated (123 ± 18 m³ s⁻¹) using indirect methods, but one-dimensional hydraulic simulation was also used to validate these indirect estimates through an iterative process (127 ± 33 m³ s⁻¹) and to reconstruct the bridge obstruction, yielding the blockage ratio during the 1997 event (~48%) and the bridge clogging curves. Rainfall-runoff modelling with stochastic simulation of different rainfall field configurations also helped to confirm that a peak discharge greater than 150 m³ s⁻¹ is very unlikely to occur and that the estimated discharge range is consistent with the estimated rainfall amount (233 ± 27 mm). It was observed that the backwater effect due to the obstruction (water level ~7 m) made the 1997 flood (~35-year return period) equivalent to the 50-year flood. This allowed the equivalent return period to be defined as the recurrence interval of an event of specified magnitude which, where large woody debris is present, is equivalent in water depth and extent of flooded area to a more extreme event of greater magnitude. These results highlight the need to include obstruction phenomena in flood hazard analysis.
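One common indirect peak-discharge method of the kind referenced above is a slope-area estimate from surveyed high-water marks via Manning's equation. All channel values below are illustrative, not the Venero Claro survey data:

```python
def manning_discharge(area_m2, wetted_perimeter_m, slope, n):
    """Q = (1/n) * A * R^(2/3) * S^(1/2), with R = A / P (SI units)."""
    hydraulic_radius = area_m2 / wetted_perimeter_m
    return (1.0 / n) * area_m2 * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

# Hypothetical cross-section reconstructed from high-water marks.
Q = manning_discharge(area_m2=40.0, wetted_perimeter_m=28.0,
                      slope=0.015, n=0.05)
print(f"peak discharge estimate: {Q:.0f} m³/s")
```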
Abstract:
Cloud Computing is an enabler for delivering large-scale, distributed enterprise applications with strict performance requirements. Such applications often have complex scaling and Service Level Agreement (SLA) management requirements. In this paper we present a simulation approach for validating and comparing SLA-aware scaling policies in the CloudSim simulator, using data from an actual Distributed Enterprise Information System (dEIS). We extend CloudSim with concurrent and multi-tenant task simulation capabilities, and then show how different scaling policies can be used for simulating multiple dEIS applications. We present multiple experiments depicting the impact of VM scaling on both datacenter energy consumption and dEIS performance indicators.
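A minimal sketch of a reactive SLA-aware VM-scaling policy of the kind compared here; the thresholds and function names are illustrative, since the actual policies live inside the extended CloudSim simulator:

```python
def scaling_decision(response_time_ms, sla_limit_ms, n_vms,
                     scale_out_ratio=0.9, scale_in_ratio=0.5):
    """Return the new VM count based on how close the measured
    response time is to the SLA limit."""
    utilization = response_time_ms / sla_limit_ms
    if utilization > scale_out_ratio:                 # near SLA violation
        return n_vms + 1
    if utilization < scale_in_ratio and n_vms > 1:    # over-provisioned
        return n_vms - 1
    return n_vms

print(scaling_decision(460, 500, n_vms=3))   # -> 4 (scale out)
```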
Abstract:
In modern medico-legal literature, only a small number of publications deal with fatal injuries from black powder guns. Most of them focus on morphological features such as intense soot soiling, blast tattooing and burn effects in close-range shots, or describe the wound ballistics of spherical lead bullets. Another kind of "unusual" and potentially lethal weapon is the handgun designed to fire only blank cartridges, such as starter and alarm pistols. The dangerousness of these guns is restricted to very close and contact-range shots and results from the gas jet produced by the deflagration of the propellant. The present paper reports on a suicide committed with a cal. 45 muzzle-loading percussion pistol. An unusually large stellate entrance wound was located in the precordial region, accompanied by an imprint mark from the ramrod and a faint greenish discoloration (apparently due to the formation of sulfhemoglobin). Autopsy revealed an oversized powder cavity, multiple fractures of the anterior thoracic wall, as well as ruptures of the heart, the aorta, the left hepatic lobe and the diaphragm. In total, the zone of mechanical destruction had a diameter of approx. 15 cm. As there was no exit wound and no bullet lodged in the body, the injury was caused exclusively by the inrushing combustion gases of the propellant (black powder), comparable with the gas jet of a blank cartridge gun. In contact shots to ballistic gelatine using the suicide victim's pistol loaded with black powder but no projectile, the formation of a nearly spherical cavity was demonstrated by means of a high-speed camera. The extent of the temporary cavity after firing with 5 g of black powder roughly corresponded to the zone of destruction found in the body.
Abstract:
Cloud Computing has evolved to become an enabler for delivering access to large-scale distributed applications running on managed, network-connected computing systems. This makes it possible to host Distributed Enterprise Information Systems (dEISs) in cloud environments while enforcing strict performance and quality-of-service requirements, defined using Service Level Agreements (SLAs). SLAs define the performance boundaries of distributed applications and are enforced by a cloud management system (CMS) dynamically allocating the available computing resources to the cloud services. We present two novel VM-scaling algorithms focused on dEIS systems, which detect the most appropriate scaling conditions using performance models of distributed applications derived from constant-workload benchmarks, together with SLA-specified performance constraints. We simulate the VM-scaling algorithms in a cloud simulator and compare them against trace-based performance models of dEISs. We compare a total of three SLA-based VM-scaling algorithms (one using prediction mechanisms) on a real-world application scenario involving a large, variable number of users. Our results show that it is beneficial to use autoregressive predictive SLA-driven scaling algorithms in cloud management systems to guarantee performance invariants of distributed cloud applications, as opposed to using only reactive SLA-based VM-scaling algorithms.
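A sketch of the autoregressive, predictive flavour of SLA-driven scaling: fit a simple AR(1) model to the recent load series and size the VM pool on the predicted value rather than the last measurement. Coefficients, capacities, and the load series are illustrative assumptions:

```python
import numpy as np

def ar1_forecast(series):
    """One-step-ahead forecast x[t+1] ~ c + phi * x[t] via least squares."""
    x, y = np.asarray(series[:-1]), np.asarray(series[1:])
    phi, c = np.polyfit(x, y, 1)
    return c + phi * series[-1]

load = [210, 230, 260, 290, 330]          # requests/s, illustrative
predicted = ar1_forecast(load)
capacity_per_vm = 100.0                    # requests/s one VM can serve
vms_needed = int(np.ceil(predicted / capacity_per_vm))
print(f"predicted load {predicted:.0f} req/s -> provision {vms_needed} VMs")
```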
Abstract:
Gaussian random field (GRF) conditional simulation is a key ingredient in many spatial statistics problems for computing Monte Carlo estimators and quantifying uncertainties on non-linear functionals of GRFs conditional on data. Conditional simulations are known to be computationally intensive, especially when appealing to matrix decomposition approaches with a large number of simulation points. This work studies settings where conditioning observations are assimilated batch-sequentially, with one point or a batch of points at each stage. Assuming that conditional simulations have been performed at a previous stage, the goal is to take advantage of already available sample paths and by-products to produce updated conditional simulations at minimal cost. Explicit formulae are provided which allow updating an ensemble of sample paths conditioned on n ≥ 0 observations to an ensemble conditioned on n + q observations, for arbitrary q ≥ 1. Compared to direct approaches, the proposed formulae prove to substantially reduce computational complexity. Moreover, these formulae explicitly exhibit how the q new observations update the old sample paths. Detailed complexity calculations highlighting the benefits of this approach with respect to state-of-the-art algorithms are provided and complemented by numerical experiments.
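The flavour of such update formulae can be conveyed by the classical residual-substitution form (the paper's batch formulae and complexity analysis go beyond this sketch): if $Z_n(\cdot)$ is a sample path conditioned on the first $n$ observations and $y_{new}$ denotes the $q$ new observations at points $X_{new}$, an updated path is obtained as

$$Z_{n+q}(x) = Z_n(x) + \lambda_{new}(x)^{\top}\left(y_{new} - Z_n(X_{new})\right),$$

where $\lambda_{new}(x)$ are the kriging weights of the new points computed under the covariance conditional on the first $n$ observations. The correction reuses the existing path wholesale and only requires kriging from the $q$ new points, which is the source of the computational savings.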
Abstract:
Three extended families live around a lake. One family are rice farmers, the second are vegetable farmers, and the third are livestock herders. All of them depend on the use of the lake water for their production, and all of them need large quantities of it to secure their livelihoods. In the game, the families are represented by their councils of elders. Each council has to find ways and means to increase production in order to keep up with the growth of its family and its demands. This puts more and more pressure on the water resources, increasing the risk of overuse, and conflicts over water are about to emerge between the families. Each council of elders must try to pursue its family's interests while at the same time preventing excessive pressure on the water resources. Once a council of elders is no longer able to meet the needs of its family, it is excluded from the game. Will the parties cooperate or compete? To face the challenge of balancing economic well-being, sustainable resource management, and individual and collective interests, the three parties have a set of options for action at hand. These include power play to safeguard their own interests, communication and cooperation to negotiate with neighbours, and searching for alternatives to reduce pressure on existing water resources. During the game the players can experience how tensions may arise, increase and finally escalate. They see the impact of power play, how alliances form, and the importance of trust-building measures, consensus and cooperation. From the insights gained, important conflict prevention and mitigation measures are derived in a debriefing session. The game is facilitated by a moderator and lasts 3-4 hours. Aim of the game: Each family pursues the objective of serving its own interests and securing its position through appropriate strategies and skilful negotiation, while at the same time optimising use of the water resources in a way that prevents their degradation. The end of the game is open. While the game may end with one or two families dropping out because they can no longer secure their subsistence, it is also possible that the three families succeed in creating a situation that allows them to meet their own needs as well as the requirements for sustainable water use in the long term. Learning objectives: The game demonstrates how tension builds up, increases, and finally escalates; it shows how power positions work and alliances are formed; and it enables the players to experience the great significance of mutual agreement and cooperation. During the game, and particularly during the debriefing and evaluation session, it is important to link experiences from the game to the players' real-life experiences, and to discuss these links in the group. The resulting insights provide a basis for deriving important conflict prevention and transformation measures.
Abstract:
Today, there is little knowledge of the attitude state of decommissioned intact objects in Earth orbit. Observational means have advanced in the past years, but are still limited with respect to accurately estimating motion vector orientations and magnitudes. Such knowledge is needed especially for the preparation of Active Debris Removal (ADR) missions as planned by ESA's Clean Space initiative, and for contingency scenarios for ESA spacecraft like ENVISAT. ESA's "Debris Attitude Motion Measurements and Modelling" project (ESA Contract No. 40000112447), led by the Astronomical Institute of the University of Bern (AIUB), addresses this problem. The goal of the project is to achieve a good understanding of the attitude evolution and of the considerable internal and external effects that occur. To characterize the attitude state of selected targets in LEO and GTO, multiple observation methods are combined: optical observations are carried out by AIUB, Satellite Laser Ranging (SLR) is performed by the Space Research Institute of the Austrian Academy of Sciences (IWF), and radar measurements and signal level determination are provided by the Fraunhofer Institute for High Frequency Physics and Radar Techniques (FHR). The In-Orbit Tumbling Analysis tool (ιOTA) is a prototype software currently in development by Hyperschall Technologie Göttingen GmbH (HTG) within the framework of the project. ιOTA will be a highly modular software tool to perform short-term (days), medium-term (months) and long-term (years) propagation of the orbit and attitude motion (six degrees of freedom) of spacecraft in Earth orbit. The simulation takes into account all relevant acting forces and torques, including aerodynamic drag, solar radiation pressure, gravitational influences of Earth, Sun and Moon, eddy current damping, impulse and momentum transfer from space debris or micrometeoroid impact, as well as the optional definition of particular spacecraft-specific influences like tank sloshing, reaction wheel behaviour, magnetic torquer activity and thruster firing. The purpose of ιOTA is to provide high-accuracy short-term simulations to support observers and potential ADR missions, as well as medium- and long-term simulations to study the significance of the particular internal and external influences on the attitude, especially damping factors and momentum transfer. The simulation will also enable investigation of the altitude dependency of the particular external influences. ιOTA's post-processing modules will generate synthetic measurements for observers and for software validation. The validation of the software will be done by cross-calibration with observations and measurements acquired by the project partners.
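A generic sketch of the rigid-body attitude propagation at the core of a six degree-of-freedom tool of this kind: Euler's rotational equations for the body rates plus quaternion kinematics, integrated with a fixed-step RK4. The inertia tensor, initial tumble rate, and zero-torque model are illustrative placeholders for the perturbations listed above, not values from ιOTA:

```python
import numpy as np

I = np.diag([1200.0, 900.0, 600.0])   # illustrative inertia tensor [kg m^2]
I_inv = np.linalg.inv(I)

def external_torque(t, q, w):
    return np.zeros(3)   # placeholder: drag, SRP, eddy-current damping, ...

def deriv(t, q, w):
    # Quaternion kinematics (scalar-first) and Euler's rotational equation.
    qw, qx, qy, qz = q
    wx, wy, wz = w
    q_dot = 0.5 * np.array([-qx*wx - qy*wy - qz*wz,
                             qw*wx + qy*wz - qz*wy,
                             qw*wy - qx*wz + qz*wx,
                             qw*wz + qx*wy - qy*wx])
    w_dot = I_inv @ (external_torque(t, q, w) - np.cross(w, I @ w))
    return q_dot, w_dot

def rk4_step(t, q, w, dt):
    k1q, k1w = deriv(t, q, w)
    k2q, k2w = deriv(t + dt/2, q + dt/2*k1q, w + dt/2*k1w)
    k3q, k3w = deriv(t + dt/2, q + dt/2*k2q, w + dt/2*k2w)
    k4q, k4w = deriv(t + dt, q + dt*k3q, w + dt*k3w)
    q_new = q + dt/6*(k1q + 2*k2q + 2*k3q + k4q)
    w_new = w + dt/6*(k1w + 2*k2w + 2*k3w + k4w)
    return q_new / np.linalg.norm(q_new), w_new   # renormalize quaternion

q = np.array([1.0, 0.0, 0.0, 0.0])    # initial attitude
w = np.array([0.0, 0.02, 0.001])      # initial tumble rate [rad/s]
for step in range(3600):              # one hour at 1 s steps
    q, w = rk4_step(step * 1.0, q, w, 1.0)
print("attitude quaternion after 1 h:", q)
```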