992 results for Overhead bytes


Relevance:

60.00%

Publisher:

Abstract:

In an increasingly competitive market, something new must always be brought to the table, and the SDH card is one such product, which Nokia Networks is announcing as part of the new Nokia Flexihub Node. The goal is to design a VC-12-channelized STM-1 card that connects the upper-level telecommunications network to a high-capacity radio. Before a finished product reaches the market, an enormous amount of work lies behind it: various documents have had to be produced and agreements made. For example, the requirement specifications must be clear so that it is known what is expected of the product. Before that, however, it has been necessary to understand how SDH works and how the overhead bytes are processed. The choice of circuits also demands thought, since the market offers plenty of off-the-shelf chips for SDH signal processing. Reliable data transmission is important to a telephone operator, so protection schemes and their implementation must be considered. Synchronization is also an essential part of an SDH system, and implementing it well is important. Alarms must be taken into account, and their handling must be arranged sensibly so that no part of the system becomes unreasonably congested. This Master's thesis examines the SDH system, the processing of overhead bytes, and the requirement specifications.
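
As background on the overhead-byte processing the thesis studies, here is a minimal sketch (illustrative, not taken from the thesis) of one standard SDH mechanism: the B1 byte of the section overhead carries a BIP-8 checksum, an even bit-interleaved parity computed over all bytes of the previous scrambled STM-1 frame.

/* Illustrative sketch (not from the thesis): computing the BIP-8 value that
 * the B1 section-overhead byte of an STM-1 frame should carry. BIP-8 is even
 * bit-interleaved parity, so XOR-ing every byte of the previous (scrambled)
 * frame yields the expected B1 of the current frame. */
#include <stdio.h>
#include <stdint.h>

#define STM1_FRAME_BYTES (9 * 270)   /* STM-1 frame: 9 rows x 270 columns */

uint8_t bip8(const uint8_t *frame, int len) {
    uint8_t parity = 0;
    for (int i = 0; i < len; i++)
        parity ^= frame[i];          /* XOR accumulates even parity per bit position */
    return parity;
}

int main(void) {
    /* A1/A2 framing bytes (0xF6/0x28) shown only as recognizable content;
     * the rest of the frame is zero-filled for this example. */
    uint8_t frame[STM1_FRAME_BYTES] = {0xF6, 0xF6, 0xF6, 0x28, 0x28, 0x28};
    printf("expected B1 for next frame: 0x%02X\n", bip8(frame, STM1_FRAME_BYTES));
    return 0;
}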

Relevance:

30.00%

Publisher:

Abstract:

Dynamic optimization has several key advantages, including the ability to work on binary code in the absence of sources and to perform optimization across module boundaries. However, it has a significant disadvantage vis-à-vis traditional static optimization: a substantial runtime overhead. There can be a performance gain only if this overhead can be amortized. In this paper, we quantitatively analyze the runtime overhead introduced by a dynamic optimizer, DynamoRIO. We found that the major overhead does not come from the optimizer's operation; instead, it comes from the extra code that DynamoRIO adds to the code cache. Following a detailed analysis, we propose a method of trace construction that ameliorates this cost, thereby reducing the runtime overhead of DynamoRIO. We believe that the results of the study, as well as the proposed solution, are applicable to other scenarios, such as dynamic code translation and managed execution, that utilize a framework similar to that of dynamic optimization.
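
A back-of-the-envelope sketch of the amortization condition mentioned above, with entirely hypothetical numbers (the paper measures DynamoRIO itself; this only illustrates the break-even reasoning):

/* Break-even model for dynamic-optimization overhead, using made-up figures:
 * the one-time cost of building and managing the code cache must be repaid
 * by the per-iteration savings of the optimized traces. */
#include <stdio.h>

int main(void) {
    double overhead = 2.0e9;        /* hypothetical cycles spent on cache/trace management */
    double native_per_iter = 1000;  /* hypothetical cycles per loop iteration, unoptimized */
    double speedup = 1.10;          /* assume optimized traces run 10% faster */

    double saved_per_iter = native_per_iter * (1.0 - 1.0 / speedup);
    double breakeven = overhead / saved_per_iter;   /* iterations needed to amortize */

    printf("saving %.1f cycles/iteration -> break-even after %.2e iterations\n",
           saved_per_iter, breakeven);
    return 0;
}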

Relevance:

20.00%

Publisher:

Abstract:

The performance optimisation of overhead conductors depends on the systematic investigation of the fretting fatigue mechanisms in the conductor/clamping system. As a consequence, a fretting fatigue rig was designed and a limited range of fatigue tests was carried out in the medium-to-high-cycle fatigue regime in order to obtain an exploratory S-N curve for a Grosbeak conductor mounted on a mono-articulated aluminium clamping system. Subsequent to these preliminary fatigue tests, the components of the conductor/clamping system, such as the ACSR conductor, upper and lower clamps, bolt and nuts, were subjected to a failure analysis procedure in order to investigate the metallurgical free variables interfering with the fatigue test results, aiming at optimising testing reproducibility. The results indicated that the rupture with planar fracture surfaces observed in the external Al strands of the conductor tested under the lower bending amplitude (0.9 mm) occurred by fatigue cracking (1 mm deep), followed by shear overload. The V-type fracture surfaces observed in some Al strands of the conductor tested under the higher bending amplitude (1.3 mm) were also produced by fatigue cracking (approximately 400 µm deep), followed by shear overload. Shear overload fracture (45° fracture surface) was also observed on the remaining Al wires of the conductor tested under the higher bending amplitude (1.3 mm). Additionally, the upper and lower Al-cast clamps presented microstructure-sensitive cracking, which was followed by particle detachment and the formation of abrasive debris on the clamp/conductor tribo-interface, promoting the fretting mechanism even further. The detrimental formation of abrasive debris might be inhibited by selecting a more suitable class of as-cast Al alloy for the production of clamps. Finally, the bolt/nut system showed intense degradation of the carbon steel nut (fabricated in ferritic-pearlitic carbon steel, featuring machined threads with 190 HV), with intense plastic deformation and loss of material. Proper selection of both the bolt and nut materials and of the finishing processing might prevent the loss of clamping pressure during fretting testing. It is important to control the specification of these components (clamps, bolt and nuts) prior to the start of large-scale fretting fatigue testing of overhead conductors in order to increase the reproducibility of this assessment.

Relevance:

20.00%

Publisher:

Abstract:

Lightning-induced overvoltages have a considerable impact on the power quality of overhead distribution and telecommunications systems, and various models have been developed for the computation of the electromagnetic transients caused by indirect strokes. The most adequate has been shown to be the one proposed by Agrawal et al.; the Rusck model can be visualized as a particular case, as both models are equivalent when the lightning channel is perpendicular to the ground plane. In this paper, an extension of the Rusck model that enables the calculation of lightning-induced transients considering flashes to nearby elevated structures and realistic line configurations is tested against data obtained from both natural lightning and scale-model experiments. The latter, performed under controlled conditions, can also be used to verify the validity of other coupling models and relevant codes. The so-called Extended Rusck Model, which is shown to be sufficiently accurate, is applied to the analysis of lightning-induced voltages on lines with a shield wire and/or surge arresters. The investigation conducted indicates that the ratio between the peak values of the voltages induced by typical first and subsequent strokes can be either greater or smaller than unity, depending on the line configuration.
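
For context, a commonly quoted textbook form of Rusck's result (not reproduced from this paper) for the special case the model covers exactly, a return stroke perpendicular to a perfectly conducting ground, gives the peak voltage induced on an infinitely long overhead line of height h at closest horizontal distance y from the stroke, with peak current I_0 and return-stroke velocity v:

% Classical Rusck peak-voltage estimate (textbook form, not from this paper)
\[
V_{\max} \approx \frac{Z_0\, I_0\, h}{y}
\left( 1 + \frac{v}{\sqrt{2}\,c}\,
\frac{1}{\sqrt{1 - \tfrac{1}{2}\,(v/c)^{2}}} \right),
\qquad
Z_0 = \frac{1}{4\pi}\sqrt{\frac{\mu_0}{\varepsilon_0}} \approx 30\ \Omega .
\]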

Relevance:

20.00%

Publisher:

Abstract:

Objective: To study student and staff views of the role and use of handouts, note-taking and overhead transparencies in veterinary science lectures at the University of Queensland. Methods: The Nominal Group Technique was used to help develop a questionnaire, which was completed by 351 students (a response rate of 84%) and 35 staff (76%) from the 5 years of the veterinary course. The data were analysed using the SAS statistical computer package. Results: Staff and students held different views as to the frequency with which handouts should be used, their educational value, and whether they should be complete or partial. Fewer students than staff agreed that handouts discourage further reading in a subject. Almost all staff and students saw the central functions of note-taking to be the provision of notes for subsequent revision and the encoding of information given by the lecturer. More students than staff, however, considered that note-taking in lectures interferes with understanding. Staff and students held similar views as to the uses of overheads in lectures. Interestingly, however, more staff than students agreed that overheads often contain too much information. Conclusion: Both students and staff saw the central role of note-taking as providing a set of good notes for revision. Generally, students preferred that this information be provided in the form of partial or complete handouts, while staff preferred students to take notes and to read outside lectures. Surprisingly, more staff than students felt that overhead transparencies often contained too much information. Note-taking, handouts and overhead transparencies need to be linked in a coherent educational strategy to promote effective learning.

Relevance:

20.00%

Publisher:

Abstract:

Hard real-time multiprocessor scheduling has, in recent years, seen the flourishing of semi-partitioned scheduling algorithms. This category of scheduling schemes combines elements of partitioned and global scheduling for the purposes of achieving efficient utilization of the system's processing resources with strong schedulability guarantees and low dispatching overheads. The sub-class of slot-based "task-splitting" scheduling algorithms, in particular, offers very good trade-offs between schedulability guarantees (in the form of high utilization bounds) and the number of preemptions/migrations involved. However, so far no unified schedulability theory existed for such algorithms; each one was formulated with its own accompanying analysis. This article changes this fragmented landscape by formulating a more unified schedulability theory covering the two state-of-the-art slot-based semi-partitioned algorithms, S-EKG and NPS-F (both fixed job-priority based). The new theory is based on exact schedulability tests, thereby also overcoming many sources of pessimism in existing analyses. In turn, since schedulability testing guides task assignment under the schemes in consideration, we also formulate an improved task assignment procedure. As the other main contribution of this article, and in response to the fact that many unrealistic assumptions present in the original theory tend to undermine the theoretical potential of such scheduling schemes, we identified and modelled into the new analysis all the overheads incurred by the algorithms in consideration. The outcome is a new overhead-aware schedulability analysis that permits increased efficiency and reliability. The merits of the new theory are evaluated by an extensive set of experiments.
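
As a simplified illustration of how a schedulability test guides task assignment under such schemes, here is a sketch of first-fit assignment against a plain utilization bound; the exact S-EKG/NPS-F tests in the article are far more detailed and overhead-aware, and all task parameters below are hypothetical.

/* Schedulability-guided first-fit task assignment (illustrative only).
 * A task is assigned to the first processor where the accumulated
 * utilization stays within the bound; tasks that fit nowhere would be
 * split across processors under a semi-partitioned scheme. */
#include <stdio.h>

#define NPROC 4
#define NTASK 6

int main(void) {
    double util[NTASK] = {0.60, 0.45, 0.30, 0.55, 0.20, 0.50};  /* hypothetical utilizations */
    double load[NPROC] = {0.0};
    double bound = 1.0;   /* idealized bound; real overhead-aware tests inflate demand */

    for (int t = 0; t < NTASK; t++) {
        int placed = 0;
        for (int p = 0; p < NPROC && !placed; p++) {
            if (load[p] + util[t] <= bound) {   /* the schedulability test */
                load[p] += util[t];
                printf("task %d -> processor %d\n", t, p);
                placed = 1;
            }
        }
        if (!placed)
            printf("task %d would be split across processors (semi-partitioning)\n", t);
    }
    return 0;
}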

Relevance:

20.00%

Publisher:

Abstract:

Presented at the IEEE 21st International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA 2015), 19-21 August 2015.

Relevance:

20.00%

Publisher:

Abstract:

High-performance computing is a rapidly evolving area of computer science which attempts to solve complicated computational problems by combining computational nodes connected through high-speed networks. This work concentrates on the network problems that appear in such systems, and focuses especially on the deadlock problem, which can decrease the efficiency of the communication or even destroy the balance and paralyze the network. The goal of this work is deadlock avoidance through the use of virtual channels in the switches of the network where the problem appears. Deadlock avoidance ensures that no data will be lost inside the network, at the cost of increased latency for the served packets due to the extra computation that the switches must perform to apply the policy.
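
A minimal sketch of one classic instance of the technique, the "dateline" virtual-channel scheme for rings due to Dally and Seitz (shown as a standard textbook example; the thesis may apply a different policy): packets start on VC 0 and switch to VC 1 after crossing a fixed dateline, which breaks the cyclic channel dependency that causes deadlock.

/* Dateline virtual-channel selection on an N-node ring (illustrative sketch).
 * Crossing the wrap-around link between node N-1 and node 0 moves the packet
 * to VC 1, so no cycle of channel dependencies can form. */
#include <stdio.h>

#define N 8   /* nodes on the ring */

/* Virtual channel to use on the hop from 'cur' to the next node. */
int select_vc(int cur, int vc_in) {
    int next = (cur + 1) % N;
    if (next < cur)          /* this hop wraps around: crossing the dateline */
        return 1;
    return vc_in;            /* otherwise keep the current virtual channel */
}

int main(void) {
    int src = 5, dst = 2, vc = 0;
    for (int node = src; node != dst; node = (node + 1) % N) {
        vc = select_vc(node, vc);
        printf("hop %d -> %d on VC %d\n", node, (node + 1) % N, vc);
    }
    return 0;
}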

Relevance:

20.00%

Publisher:

Abstract:

Performance analysis is the task of monitoring the behavior of a program execution. The main goal is to find out which adjustments might be made in order to improve performance. To achieve that improvement, it is necessary to find the different causes of overhead. We are already in the multicore era, but there is a gap between the levels of development of the two main divisions of multicore technology (hardware and software). When we talk about multicore we are also speaking of shared-memory systems; this master's thesis deals with the issues involved in the performance analysis and tuning of applications running specifically on a shared-memory system. We go one step further and take the performance analysis to another level by analyzing the applications' structure and patterns. We also present some tools specifically addressed to the performance analysis of multithreaded OpenMP applications. At the end, we present the results of some experiments performed with a set of OpenMP scientific applications.
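
A minimal sketch of the kind of measurement involved, assuming an artificial uneven workload: per-thread timing of an OpenMP parallel region exposes load-imbalance overhead, one of the causes of overhead in shared-memory applications.

/* Per-thread timing of an OpenMP region (illustrative; workload is made up).
 * Compile with: gcc -fopenmp imbalance.c */
#include <stdio.h>
#include <omp.h>

int main(void) {
    double region_start = omp_get_wtime();

    #pragma omp parallel
    {
        double t0 = omp_get_wtime();
        int id = omp_get_thread_num();

        /* Hypothetical uneven workload: higher thread ids do more work. */
        double sum = 0.0;
        for (long i = 0; i < 10000000L * (id + 1); i++)
            sum += i * 0.5;

        double busy = omp_get_wtime() - t0;
        printf("thread %d busy %.3f s (sum=%.0f)\n", id, busy, sum);
    }

    /* Wall time minus a thread's busy time approximates its idle overhead. */
    printf("region wall time %.3f s\n", omp_get_wtime() - region_start);
    return 0;
}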

Relevance:

20.00%

Publisher:

Abstract:

The implementation technologies of computer chips, evolving to deliver more performance, are increasing the probability of transient faults. As this probability grows, and since on-chip solutions are expensive or tend to degrade processor performance, efforts to deal with these transient faults at higher levels (such as the operating system or even the application level) are increasing. Mostly, these efforts try to avoid silent data corruption by using hardware-, software- and hybrid-based techniques to add redundancy that detects the errors generated by transient faults. This work presents our proposal to improve the robustness of applications through source-code transformations that add redundancy. Our proposal also takes into account the trade-off between the improved robustness and the overhead generated by the added redundancy.
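
A minimal sketch of one common form of such a source-code transformation, duplication with comparison (the work's actual transformation rules may differ; the computation below is a placeholder): each critical computation is executed twice on duplicated variables, and the two results are compared before use.

/* Duplication-with-comparison against silent data corruption (illustrative). */
#include <stdio.h>
#include <stdlib.h>

static void fault_detected(void) {
    fprintf(stderr, "transient fault detected: results diverge\n");
    exit(EXIT_FAILURE);   /* or trigger recovery / re-execution */
}

int main(void) {
    int a = 7, b = 5;
    int a_dup = 7, b_dup = 5;      /* duplicated inputs */

    int r = a * b + 3;             /* original computation */
    int r_dup = a_dup * b_dup + 3; /* redundant copy of the computation */

    if (r != r_dup)                /* comparison inserted by the transformation */
        fault_detected();

    printf("checked result: %d\n", r);
    return 0;
}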

Relevance:

20.00%

Publisher:

Abstract:

The optimization of the pilot overhead in single-user wireless fading channels is investigated, and the dependence of this overhead on various system parameters of interest (e.g., fading rate, signal-to-noise ratio) is quantified. The achievable pilot-based spectral efficiency is expanded with respect to the fading rate about the no-fading point, which leads to an accurate order expansion for the pilot overhead. This expansion identifies that the pilot overhead, as well as the spectral efficiency penalty with respect to a reference system with genie-aided CSI (channel state information) at the receiver, depends on the square root of the normalized Doppler frequency. It is also shown that the widely-used block fading model is a special case of more accurate continuous fading models in terms of the achievable pilot-based spectral efficiency. Furthermore, it is established that the overhead optimization for multiantenna systems is effectively the same as for single-antenna systems with the normalized Doppler frequency multiplied by the number of transmit antennas.
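
In symbols, with assumed notation (alpha for the pilot overhead, f_D T for the normalized Doppler frequency, N_t for the number of transmit antennas, and a, b for unspecified constants; none of these symbols are taken verbatim from the paper), the stated square-root expansion reads:

% Assumed notation; restates the abstract's order expansion symbolically.
\[
\alpha^{*} = a\,\sqrt{f_D T} + O(f_D T),
\qquad
\Delta C^{*} = b\,\sqrt{f_D T} + O(f_D T),
\]
with the multiantenna case obtained by replacing \(f_D T\) with \(N_t\, f_D T\).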

Relevance:

20.00%

Publisher:

Abstract:

The optimization of the pilot overhead in wireless fading channels is investigated, and the dependence of this overhead on various system parameters of interest (e.g., fading rate, signal-to-noise ratio) is quantified. The achievable pilot-based spectral efficiency is expanded with respect to the fading rate about the no-fading point, which leads to an accurate order expansion for the pilot overhead. This expansion identifies that the pilot overhead, as well as the spectral efficiency penalty with respect to a reference system with genie-aided CSI (channel state information) at the receiver, depends on the square root of the normalized Doppler frequency. It is also shown that the widely-used block fading model is a special case of more accurate continuous fading models in terms of the achievable pilot-based spectral efficiency. Furthermore, it is established that the overhead optimization for multiantenna systems is effectively the same as for single-antenna systems with the normalized Doppler frequency multiplied by the number of transmit antennas.

Relevance:

20.00%

Publisher:

Abstract:

Nowadays a typical embedded system (e.g., a mobile phone) requires high performance to carry out tasks such as real-time encoding/decoding; it must consume little energy so that it can run for hours or days on light batteries; it must be flexible enough to integrate multiple applications and standards in a single device; and it must be designed and verified in a short period of time despite the increase in complexity. Designers fight against these adversities, which demand new innovations in architectures and design methodologies. Coarse-grained reconfigurable architectures (CGRAs) are emerging as potential candidates to overcome all these difficulties. Different types of architectures have been presented in recent years. The coarse granularity greatly reduces delay, area, power consumption and configuration time compared with FPGAs. On the other hand, compared with traditional coarse-grained programmable processors, their large computational resources allow them to achieve a high level of parallelism and efficiency. However, existing CGRAs are not being widely applied, mainly because of the great difficulty of programming such complex architectures. ADRES is a new CGRA designed by the Interuniversity Micro-Electronics Center (IMEC). It combines a very-long-instruction-word (VLIW) processor and a coarse-grained array to offer two different options in a single physical device. Among its advantages are high performance, little communication redundancy and ease of programming. Finally, ADRES is a template rather than a concrete architecture: with the help of the DRESC compiler (Dynamically Reconfigurable Embedded System Compiler), it is possible to find better architectures or application-specific architectures. This work presents the implementation of an MPEG-4 encoder for ADRES. It shows the evolution of the code towards a good implementation for a given architecture. The main features of ADRES and its compiler (DRESC) are also presented. The objectives are to reduce as much as possible the number of cycles (time) needed to run the MPEG-4 encoder and to examine the various difficulties of working in the ADRES environment. The results show that the cycle count is reduced by 67% between the initial and final code in VLIW mode, and by 84% between the initial code in VLIW mode and the final code in CGA mode.

Relevance:

20.00%

Publisher:

Abstract:

The main objective of this Master's thesis is to develop a cost allocation model for a leading food industry company in Finland. The goal is to develop an allocation method for the fixed overhead expenses incurred in a specific production unit and to create a plausible tracking system for product costs. The second objective is to construct the allocation model so that it can be modified to suit other units as well. Costs, activities, drivers and appropriate allocation methods are studied. The thesis starts with a literature review of existing activity-based costing (ABC) theory and an inspection of the cost information, followed by interviews with officials to get a general view of the requirements for the model to be constructed. Familiarization with the company began with the existing cost accounting methods. The main proposals for a new allocation model emerged through the interviews and were used to set the targets for developing the new allocation method. As a result of this thesis, an Excel-based model was created on the basis of the theoretical and empirical data. The new system is able to handle overhead costs in more detail, improving cost awareness, increasing the transparency of cost allocations and refining the products' cost structure. The improved cost awareness is achieved by selecting the most suitable cost drivers for this situation. Capacity changes are also taken into consideration: the use of practical or normal capacity instead of theoretical capacity is suggested. Finally, some recommendations for further development are made regarding capacity handling and cost collection.
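
A minimal sketch of the driver-rate computation that the practical-capacity recommendation implies (all figures and product lines hypothetical; the thesis's Excel model is naturally far more detailed): dividing the overhead pool by practical rather than theoretical driver capacity keeps the cost of unused capacity visible instead of loading it onto products.

/* Driver-based overhead allocation with a practical-capacity rate (sketch). */
#include <stdio.h>

int main(void) {
    double pool_cost = 120000.0;        /* fixed overhead of the production unit, EUR */
    double practical_capacity = 8000.0; /* driver volume at practical capacity, machine hours */
    double rate = pool_cost / practical_capacity;   /* EUR per machine hour */

    double usage[] = {3000.0, 2500.0};  /* driver usage per product line */
    double used = 0.0;
    for (int i = 0; i < 2; i++) {
        printf("product %d allocated %.2f EUR\n", i, usage[i] * rate);
        used += usage[i];
    }
    /* Idle-capacity cost is reported separately, not hidden in product costs. */
    printf("unused capacity cost: %.2f EUR\n", (practical_capacity - used) * rate);
    return 0;
}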