980 results for Dynamic load priority
Abstract:
The 6-cylinder servo-hydraulic loading system of CEDEX's track box (250 kN, 50 Hz) has recently been supplemented with a new piezoelectric loading system (±20 kN, 300 Hz), allowing low-amplitude, high-frequency dynamic load time histories to be added to the high-amplitude, low-frequency quasi-static load time histories used so far in CEDEX's track box to assess the inelastic long-term behavior of ballast under mixed traffic in conventional and high-speed lines. This presentation will discuss the results obtained in the first long-duration test performed at CEDEX's track box using both loading systems simultaneously, to simulate the pass-by of 6000 freight vehicles (1 million 225 kN axle loads) travelling at a speed of 120 km/h over a line with vertical irregularities corresponding to a medium-quality line level. The superstructure of the track tested at full scale consisted of E 60 rails, stiff rail pads (greater than 450 kN/mm), B90.2 sleepers with USP 0.10 N/mm and a 0.35 m thick ballast layer of ADIF first class. A shear wave velocity of 250 m/s can be assumed for the different layers of the track sub-base. The ballast long-term settlements will be compared with those obtained in a previous long-duration quasi-static test performed on the same track for the RIVAS (EU co-funded) project, in which no dynamic loads were considered. The results provided by a large-diameter cyclic triaxial cell testing ballast at full size will also be commented on. Finally, the progress made at CEDEX's Geotechnical Laboratory in reproducing numerically the long-term behavior of ballast will be discussed.
Abstract:
With the advent of distributed computer systems with a largely transparent user interface, new questions have arisen regarding the management of such an environment by an operating system. One fertile area of research is that of load balancing, which attempts to improve system performance by redistributing the workload submitted to the system by the users. Early work in this field concentrated on static placement of computational objects to improve performance, given prior knowledge of process behaviour. More recently this has evolved into studying dynamic load balancing with process migration, thus allowing the system to adapt to varying loads. In this thesis, we describe a simulated system which facilitates experimentation with various load balancing algorithms. The system runs under UNIX and provides functions for user processes to communicate through software ports; processes reside on simulated homogeneous processors, connected by a user-specified topology, and a mechanism is included to allow migration of a process from one processor to another. We present the results of a study of adaptive load balancing algorithms, conducted using the aforementioned simulated system, under varying conditions; these results show the relative merits of different approaches to the load balancing problem, and we analyse the trade-offs between them. Following from this study, we present further novel modifications to suggested algorithms, and show their effects on system performance.
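To make the idea concrete, the following minimal Python sketch (not the thesis's simulator, and with an assumed ring topology, queue contents and threshold) illustrates one sender-initiated, threshold-based balancing step in which an overloaded processor migrates processes to its least-loaded neighbour.

    # Illustrative sketch only: one step of a threshold-based dynamic load balancing
    # policy with process migration. Topology, queues and threshold are assumed values.

    def balance_step(queues, topology, threshold=4):
        """queues: {proc: [process ids]}, topology: {proc: [neighbour procs]}."""
        migrations = []
        for proc, neighbours in topology.items():
            while len(queues[proc]) > threshold:
                target = min(neighbours, key=lambda n: len(queues[n]))
                if len(queues[target]) + 1 >= len(queues[proc]):
                    break                      # a further migration would not help
                job = queues[proc].pop()       # migrate the most recently queued process
                queues[target].append(job)
                migrations.append((job, proc, target))
        return migrations

    # Example: a 4-node ring with an overloaded node 0
    queues = {0: list(range(8)), 1: [], 2: [100], 3: [101, 102]}
    ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    print(balance_step(queues, ring))
    print({p: len(q) for p, q in queues.items()})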
Abstract:
The computer systems of today are characterised by data and program control that are distributed functionally and geographically across a network. A major issue of concern in this environment is the operating system activity of resource management for the different processors in the network. To ensure equity in load distribution and improved system performance, load balancing is often undertaken. The research conducted in this field so far has been primarily concerned with a small set of algorithms operating on tightly-coupled distributed systems. More recent studies have investigated the performance of such algorithms in loosely-coupled architectures, but using a small set of processors. This thesis describes a simulation model developed to study the behaviour and general performance characteristics of a range of dynamic load balancing algorithms. Further, the scalability of these algorithms is discussed and a range of regionalised load balancing algorithms is developed. In particular, we examine the impact of network diameter and delay on the performance of such algorithms across a range of system workloads. The results produced suggest that simple dynamic policies scale well but lack the load stability of more complex global-average algorithms.
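As a rough illustration of why regionalised policies behave differently from global-average ones, the following Python sketch (with an assumed line topology and load values) compares the load estimate a node would act on when averaging over the whole system versus only over nodes within a fixed hop radius.

    # Illustrative sketch only: global-average versus regionalised load estimates.
    from collections import deque

    def hops_within(topology, start, radius):
        """Breadth-first search returning all nodes within `radius` hops of `start`."""
        seen, frontier = {start}, deque([(start, 0)])
        while frontier:
            node, d = frontier.popleft()
            if d == radius:
                continue
            for nxt in topology[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, d + 1))
        return seen

    def regional_average(loads, topology, node, radius):
        region = hops_within(topology, node, radius)
        return sum(loads[n] for n in region) / len(region)

    # 6-node line topology with one heavily loaded end (assumed values)
    line = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
    loads = {0: 12, 1: 10, 2: 2, 3: 1, 4: 1, 5: 1}

    print("global average:", sum(loads.values()) / len(loads))                  # 4.5 everywhere
    print("regional (node 5, radius 2):", regional_average(loads, line, 5, 2))  # sees only 1.0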
Abstract:
This paper presents the process of load balancing in the simulation system Triad.Net and the architecture of its load balancing subsystem. The main features of static and dynamic load balancing are discussed, and a new approach, controlled dynamic load balancing, needed for regular mapping of a simulation model onto a network of computers, is proposed. The paper also considers the linguistic constructs of the Triad language used to describe different load balancing algorithms.
Abstract:
Many-core systems are emerging from the need for more computational power and power efficiency. However, many issues still surround many-core systems. These systems need specialized software before they can be fully utilized, and the hardware itself may differ from conventional computing systems. To gain efficiency from a many-core system, programs need to be parallelized. In many-core systems the cores are small and less powerful than the cores used in traditional computing, so running a conventional program is not an efficient option. Also, in Network-on-Chip-based processors the network may become congested and the cores may run at different speeds. In this thesis, a dynamic load balancing method is proposed and tested on the Intel 48-core Single-Chip Cloud Computer by parallelizing a fault simulator. The maximum speedup is difficult to obtain due to severe bottlenecks in the system. In order to exploit all the available parallelism of the Single-Chip Cloud Computer, a runtime approach capable of dynamically balancing the load during the fault simulation process is used. The proposed dynamic fault simulation approach on the Single-Chip Cloud Computer shows up to 45X speedup compared to a serial fault simulation approach. Many-core systems can draw enormous amounts of power, and if this power is not controlled properly, the system might get damaged. One way to manage power is to set a power budget for the system. However, if this power is drawn by just a few of the many cores, those few cores become extremely hot and may be damaged. Due to the increase in power density, multiple thermal sensors are deployed across the chip area to provide real-time temperature feedback for thermal management techniques. Thermal sensor accuracy is extremely prone to intra-die process variation and aging phenomena. These factors lead to a situation where thermal sensor values drift from the nominal values. This necessitates efficient calibration techniques to be applied before the sensor values are used. In addition, cores in modern many-core systems have support for dynamic voltage and frequency scaling. Thermal sensors located on cores are sensitive to the core's current voltage level, meaning that dedicated calibration is needed for each voltage level. In this thesis, a general-purpose, software-based auto-calibration approach is also proposed to calibrate thermal sensors across a range of voltage levels.
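The runtime balancing idea can be sketched in a few lines of Python: instead of statically splitting the fault list across cores, workers repeatedly pull small chunks from a shared queue as they finish. This is only an illustrative stand-in for the SCC implementation; the fault list, chunk size, worker count and the simulate_fault stub are assumptions.

    # Illustrative sketch only: dynamic chunk scheduling for an uneven workload.
    import queue, random, threading, time

    def simulate_fault(fault_id):
        time.sleep(random.uniform(0.001, 0.01))   # stand-in for uneven per-fault work

    def worker(tasks, results):
        while True:
            try:
                chunk = tasks.get_nowait()
            except queue.Empty:
                return
            for fault in chunk:
                simulate_fault(fault)
            results.append(len(chunk))

    faults = list(range(1000))
    chunk_size = 25
    tasks = queue.Queue()
    for i in range(0, len(faults), chunk_size):
        tasks.put(faults[i:i + chunk_size])

    results = []
    threads = [threading.Thread(target=worker, args=(tasks, results)) for _ in range(8)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(f"{len(results)} chunks processed by {len(threads)} workers")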
Abstract:
In many areas of simulation, a crucial component for efficient numerical computations is the use of solution-driven adaptive features: locally adapted meshing or re-meshing, and dynamically changing computational tasks. The full advantages of high-performance computing (HPC) technology can thus only be exploited when efficient parallel adaptive solvers are realised. The resulting requirement for HPC software is dynamic load balancing, which for many mesh-based applications means dynamic mesh re-partitioning. The DRAMA project has been initiated to address this issue, with a particular focus on the requirements of industrial Finite Element codes, although codes using Finite Volume formulations will also be able to make use of the project results.
Abstract:
A method is outlined for optimising graph partitions which arise in mapping unstructured mesh calculations to parallel computers. The method employs a relative gain iterative technique to both evenly balance the workload and minimise the number and volume of interprocessor communications. A parallel graph reduction technique is also briefly described and can be used to give a global perspective to the optimisation. The algorithms work efficiently in parallel as well as sequentially, and when combined with a fast direct partitioning technique (such as the Greedy algorithm) to give an initial partition, the resulting two-stage process proves to be a powerful and flexible solution to the static graph-partitioning problem. Experiments indicate that the resulting parallel code can provide high-quality partitions, independent of the initial partition, within a few seconds. The algorithms can also be used for dynamic load-balancing, reusing existing partitions; in this case the procedures are much faster than static techniques, provide partitions of similar or higher quality and involve the migration of only a fraction of the data.
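The following Python sketch illustrates the flavour of gain-based refinement (it is not the paper's relative-gain algorithm): boundary vertices of a two-way partition are moved to the other side whenever the move reduces the edge cut and keeps the two sides roughly balanced. The example graph and initial partition are invented for demonstration.

    # Illustrative sketch only: a simple gain-based refinement sweep for a 2-way partition.
    from collections import defaultdict

    def refine(adj, part, max_imbalance=2):
        """One gain-based refinement over a two-way partition stored as {vertex: 0 or 1}."""
        sizes = defaultdict(int)
        for v, side in part.items():
            sizes[side] += 1
        moved = True
        while moved:
            moved = False
            for v in list(part):
                side = part[v]
                ext = sum(1 for u in adj[v] if part[u] != side)   # cut edges at v
                gain = ext - (len(adj[v]) - ext)                  # cut reduction if v moves
                new_diff = abs((sizes[side] - 1) - (sizes[1 - side] + 1))
                if gain > 0 and new_diff <= max_imbalance:
                    part[v] = 1 - side
                    sizes[side] -= 1
                    sizes[1 - side] += 1
                    moved = True
        return part

    # Tiny example: two triangles joined by a single edge, started from a poor partition
    adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
    part = {0: 0, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1}
    print(refine(adj, part))   # ends with each triangle on its own side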
Abstract:
Power system engineers face a double challenge: to operate electric power systems within narrow stability and security margins, and to maintain high reliability. There is an acute need to better understand the dynamic nature of power systems in order to be prepared for critical situations as they arise. Innovative measurement tools, such as phasor measurement units, can capture not only the slow variation of the voltages and currents but also the underlying oscillations in a power system. Such dynamic data accessibility provides strong motivation and a useful tool to explore dynamic-data-driven applications in power systems. To fulfill this goal, this dissertation focuses on the following three areas: developing accurate dynamic load models and updating variable parameters based on the measurement data, applying advanced nonlinear filtering concepts and technologies to real-time identification of power system models, and addressing computational issues by implementing the balanced truncation method. By obtaining more realistic system models, together with timely updated parameters and consideration of stochastic influences, we can have an accurate portrait of the ongoing phenomena in an electric power system. Hence we can further improve state estimation, stability analysis and real-time operation.
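As an illustration of the kind of measurement-driven parameter identification described above (not the dissertation's method), the Python sketch below uses a scalar extended Kalman filter to track the voltage exponent of a static exponential load model P = P0·(V/V0)^a from noisy power measurements; the nominal values, noise levels and synthetic data are assumptions.

    # Illustrative sketch only: EKF identification of a load-model parameter.
    import numpy as np

    rng = np.random.default_rng(0)
    P0, V0 = 100.0, 1.0          # nominal load (MW) and nominal voltage (p.u.), assumed
    a_true = 1.4                 # "true" exponent used only to generate synthetic data
    Q, R = 1e-6, 0.25            # process / measurement noise variances, assumed

    a_hat, P_cov = 1.0, 1.0      # initial estimate and covariance

    for k in range(200):
        V = 1.0 + 0.05 * np.sin(0.1 * k)                           # measured bus voltage (p.u.)
        z = P0 * (V / V0) ** a_true + rng.normal(0, np.sqrt(R))    # noisy power measurement

        # Predict: the parameter is modelled as a slow random walk
        P_cov += Q

        # Update: linearise h(a) = P0 * (V/V0)**a around the current estimate
        h = P0 * (V / V0) ** a_hat
        H = h * np.log(V / V0)                                     # dh/da
        S = H * P_cov * H + R
        K = P_cov * H / S
        a_hat += K * (z - h)
        P_cov *= (1.0 - K * H)

    print(f"estimated exponent a ≈ {a_hat:.3f} (true value {a_true})")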
Abstract:
This work addresses a series of basic concepts concerning wind action on tall buildings, beginning by establishing some fundamental considerations about wind circulation in the atmospheric boundary layer and about its interaction with structures. The methodology proposed by Eurocode 1 for quantifying this action on buildings is analysed, and a comparison is made between it and the methodology still in force in the Portuguese regulations. Three tall buildings with different geometric cross-sections in plan, which serve as case studies, were modelled computationally using an automatic structural analysis program. Both codes considered are applied to these buildings in order to determine internal forces and displacements. Since tall buildings are a type of structure that can be dynamically excited by wind action, a methodology is adopted to quantify this action dynamically in the along-wind direction. The dynamic response over time, in terms of displacements and accelerations, is thus obtained for the case study considered, and the response of the square-plan building under dynamic wind action is compared with the static response prescribed by the codes.
Abstract:
As a result of the collapse of a 140-foot high-mast lighting tower in Sioux City, Iowa, in November 2003, a thorough investigation into the behavior and design of these tall yet relatively flexible structures was undertaken. Extensive work regarding the root cause of this failure was carried out by Robert Dexter of the University of Minnesota. Furthermore, a statewide inspection of all the high-mast towers in Iowa revealed fatigue cracks and loose anchor bolts on other existing structures. The current study was proposed to examine the static and dynamic behavior of a variety of towers in the State of Iowa utilizing field testing, specifically long-term monitoring and load testing. This report presents the results and conclusions from this project. The field work for this project was divided into two phases. Phase 1 of the project was conducted in October 2004 and focused on the dynamic properties of ten different towers in Clear Lake, Ames, and Des Moines, Iowa. Of those ten, two were also instrumented to obtain stress distributions at various details and were included in a 12-month long-term monitoring study. Phase 2 of this investigation was conducted in May 2005, in Sioux City, Iowa, and focused on determining the static and dynamic behavior of a tower similar to the one that collapsed in November 2003. Identical tests were performed on a similar tower which was retrofitted with a more substantial replacement bottom section in order to assess the effect of the retrofit. A third tower with different details was dynamically load tested to determine its dynamic characteristics, similar to the Phase 1 testing. Based on the dynamic load tests, the modal frequencies of the towers fall within the same range. Also, the damping ratios are significantly lower in the higher modes than the values suggested in the AASHTO and CAN/CSA specifications. The comparatively higher damping ratios in the first mode may be due to aerodynamic damping. These low damping ratios, in combination with poor fatigue details, contribute to the accumulation of a large number of damage-causing cycles. As predicted, the stresses in the original Sioux City tower are much greater than the stresses in the retrofitted towers at Sioux City. Additionally, it was found that poor installation practices, which often lead to loose anchor bolts and out-of-level leveling nuts, can cause high localized stresses in the towers, which can accelerate fatigue damage.
Abstract:
Joint Publications from Iowa Engineering Experiment Station - Bulletin No. 188 and Iowa Highway Research Board - Bulletin No. 17. In the design of highway bridges, the static live load is multiplied by a factor to compensate for the dynamic effect of moving vehicles. This factor, commonly referred to as an impact factor, is intended to provide for the dynamic response of the bridge to moving loads and suddenly applied forces. Many investigators have published research which contradicts the current impact formula [1, 4, 17]. Some investigators feel that the problem of impact deals not only with the increase in overall static live load but that it is an integral part of a dynamic load distribution problem. The current expanded highway program, with the large number of bridge structures required, emphasizes the need for investigating some of the dynamic behavior problems which have been generally ignored by highway engineers. These problems generally result from the inability of a designer to predict the dynamic response of a bridge structure. Many different investigations have been made of particular portions of the overall dynamic problem. The results of these varied investigations are inevitably followed by a number of unanswered questions. Ironically, many of the unanswered questions are those which are of immediate concern in the design of highway bridges, and this emphasizes the need for additional research on the problem of impact.
Abstract:
AIMS: There is no standard test to determine the fatigue resistance of denture teeth. With the increasing number of patients with implant-retained dentures, the mechanical strength of denture teeth requires more attention and valid laboratory test set-ups. The purpose of the present study was to determine the fatigue resistance of various denture teeth using a dynamic load testing machine. METHODS: Four denture teeth were used: Bonartic II (Candulor), Physiodens (Vita), SR Phonares II (Ivoclar Vivadent) and Trubyte (Dentsply). For dynamic load testing, first upper molars with a similar shape and cusp inclination were selected. The molar teeth were embedded in cylindrical steel molds with denture base material (ProBase, Ivoclar Vivadent). Dynamic fatigue loading was carried out on the mesio-buccal cusp at a 45° angle using dynamic testing machines and 2,000,000 cycles at 2 Hz in water (37°C). Three specimens per group and load were submitted to decreasing load levels (at least 4) until all three specimens no longer showed any failures. All the specimens were evaluated under a stereo microscope (20× magnification). The number of cycles reached before a failure was observed, and its dependence on the load and on the material, was modeled using a parametric survival regression model with a lognormal distribution. This allowed the fatigue resistance of a given material to be estimated as the maximal load for which one would observe less than 1% failure after 2,000,000 cycles. RESULTS: The failure pattern was similar for all denture teeth, showing a large chipping of the loaded mesio-buccal cusp. In our regression model, there were statistically significant differences among the materials, with SR Phonares II and Bonartic II showing a higher resistance than Physiodens and Trubyte; the fatigue resistance was estimated at around 110 N for the former two and at about 60 N for the latter two materials. CONCLUSION: The fatigue resistance may be a useful parameter to assess and compare the clinical risk of chipping and fracture of denture tooth materials.
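The estimation step can be illustrated with a short Python sketch of a lognormal accelerated-failure-time relation, log N = b0 + b1·load + σ·ε with ε ~ N(0, 1): given fitted coefficients, the fatigue resistance is the load at which the probability of failing before 2,000,000 cycles drops below 1%. The coefficients used below are placeholders, not the study's fitted values.

    # Illustrative sketch only: back out the load giving <1% failure before 2,000,000 cycles.
    from math import log
    from scipy.optimize import brentq
    from scipy.stats import norm

    b0, b1, sigma = 22.0, -0.05, 1.2   # hypothetical regression coefficients (placeholders)
    N_target = 2_000_000               # cycles the specimen must survive
    p_fail = 0.01                      # allowed failure probability

    def failure_prob(load):
        """P(N < N_target | load) under the lognormal AFT model."""
        return norm.cdf((log(N_target) - (b0 + b1 * load)) / sigma)

    # Fatigue resistance = largest load whose failure probability stays below 1%
    load_at_1pct = brentq(lambda F: failure_prob(F) - p_fail, 0.0, 500.0)
    print(f"estimated fatigue resistance ≈ {load_at_1pct:.0f} N")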
Abstract:
We present a general Multi-Agent System framework for distributed data mining based on a Peer-to-Peer model. Agent protocols are implemented through message-based asynchronous communication. The framework adopts a dynamic load balancing policy that is particularly suitable for irregular search algorithms. A modular design allows a separation of the general-purpose system protocols and software components from the specific data mining algorithm. The experimental evaluation has been carried out on a parallel frequent subgraph mining algorithm, which has shown good scalability performance.
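A common way to realise such a policy for irregular search is receiver-initiated work sharing, sketched below in Python: an idle peer sends a work request to another peer, which donates part of its pending search tasks. This is only an illustrative sketch, not the framework's actual protocol; the peers, task lists and donor-selection rule are assumptions.

    # Illustrative sketch only: an idle peer requesting work from a randomly chosen peer.
    import random

    class Peer:
        def __init__(self, name, tasks=None):
            self.name, self.tasks = name, list(tasks or [])

        def request_work(self, peers):
            """Called when this peer runs out of local search tasks."""
            donor = random.choice([p for p in peers if p is not self])
            donated, donor.tasks = donor.tasks[::2], donor.tasks[1::2]  # split pending work
            self.tasks.extend(donated)
            return donor.name, len(donated)

    peers = [Peer("p0", range(10)), Peer("p1"), Peer("p2", range(2))]
    print(peers[1].request_work(peers))          # p1 is idle and asks another peer for work
    print({p.name: len(p.tasks) for p in peers})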
Abstract:
The determination of the mean intensity of parasitism for colony-forming sessile protozoans such as Epistylis has been a great problem in parasitological studies. Some alternatives have been proposed by researchers for laboratory and field conditions. This study describes the criteria used to establish a parasitic intensity score for epistylidid infestation in fish. Parasite distribution and the host-parasite relationship in four species of Brazilian cultured catfish and their hybrids are discussed. The highest prevalence rates were found in the hybrid jundiara, Leiarius marmoratus male × Pseudoplatystoma reticulatum female (96.4%), followed by jurupoca, Hemisorubim platyrhynchos (60%), and the hybrid surubim, Pseudoplatystoma corruscans male × P. reticulatum female (52.7%). Positive correlations between the parasitic intensity score and fish size, weight, and relative condition factor were also observed. These findings indicate that Epistylis infestation in Brazilian catfish is an emerging disease in cultured fish.
Abstract:
This thesis deals with tools for assessing the conservation state of bridges and supporting their maintenance, ranging from general Bridge Management Systems to numerical structural condition rating systems. An original tool is proposed with which to classify bridges through an Overall Evaluation Index and, on that basis, to establish intervention priorities. The tool is calibrated on the practical case of several bridges in the Province of Bologna. For one bridge in particular, a specific in-depth study is carried out on the approximate determination of the natural periods of bridge structures, comparing the results of several simplified models with detailed models and experimental results.
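Purely as an illustration of the ranking idea (the criteria, weights and scores below are invented, not the thesis's values), a weighted Overall Evaluation Index and the resulting intervention priority list could be computed as follows in Python.

    # Illustrative sketch only: weighted condition index and priority ranking.
    weights = {"deck": 0.35, "piers": 0.25, "bearings": 0.20, "joints": 0.20}   # assumed weights

    bridges = {                      # assumed condition scores per element, 1 (good) - 5 (poor)
        "Bridge A": {"deck": 2, "piers": 1, "bearings": 3, "joints": 2},
        "Bridge B": {"deck": 4, "piers": 3, "bearings": 4, "joints": 5},
        "Bridge C": {"deck": 1, "piers": 1, "bearings": 2, "joints": 1},
    }

    def overall_index(scores):
        return sum(weights[k] * scores[k] for k in weights)

    ranking = sorted(bridges, key=lambda b: overall_index(bridges[b]), reverse=True)
    for b in ranking:                # worst condition first = highest intervention priority
        print(f"{b}: index {overall_index(bridges[b]):.2f}")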