959 results for Implementation level


Relevance: 30.00%

Abstract:

In a market where companies of similar size and resources compete, it is challenging to gain an advantage over others. To stay afloat, a company needs the capability to perform with fewer resources while providing better service; hence the development of efficient processes that cut costs and improve performance is crucial. As a business expands, processes become complicated and large amounts of data must be managed and available on request. Companies use different tools to store and manage data, which facilitates better production and transactions. In the modern business world, the most widely used tool for this purpose is the ERP (Enterprise Resource Planning) system. The focus of this research is to study how competitive advantage can be achieved by implementing a proprietary ERP system: one created in-house and tailor-made to match and align with business needs and processes. The market is full of ERP software, but choosing the right one is a big challenge. Identifying the key features that need improvement in processes and data management, choosing the right ERP, implementing it, and following up is a long and expensive journey that companies undergo. Some companies prefer to invest in a ready-made package bought from a vendor and adjust it to their own business needs, while others focus on creating their own system with in-house IT capabilities. In this research a case company is used, and the author tries to identify and analyze why the organization in question decided to pursue the development of a proprietary ERP system, how it has been implemented, and whether it has been successful. The main conclusion and recommendation of this research is that companies should know their core capabilities and constraints before choosing and implementing an ERP system. Knowledge of the factors that affect the outcome of a system change is important in order to make the right decisions at the strategic level and implement them at the operational level.
The duration of the project in the case company has been longer than anticipated. It has been reported that projects involving ready-made products bought from a vendor are also delayed and completed over budget. Overall, the case company's implementation of a proprietary ERP has been successful, both in terms of business performance figures and the usability of the system by employees. For future research, a study that statistically calculates the ROI of both approaches, buying a ready-made product versus creating one's own ERP, would be beneficial.

Relevance: 30.00%

Abstract:

Due to various advantages such as flexibility, scalability and updatability, software-intensive systems are increasingly embedded in everyday life. The constantly growing number of functions executed by these systems requires a high level of performance from the underlying platform. The main approach to increasing performance has been raising the operating frequency of the chip. However, this has led to the problem of power dissipation, which has shifted the focus of research to parallel and distributed computing. Parallel many-core platforms can provide the required level of computational power along with low power consumption. On the one hand, this enables parallel execution of highly intensive applications; with their computational power, these platforms are likely to be used in various application domains, from home electronics (e.g., video processing) to complex critical control systems. On the other hand, the resources have to be utilized efficiently in terms of performance and power consumption. However, the high level of on-chip integration increases the probability of various faults and the creation of hotspots leading to thermal problems. Additionally, radiation, which is frequent in space but becomes an issue at ground level as well, can cause transient faults. These can eventually induce faulty execution of applications. Therefore, it is crucial to develop methods that enable efficient as well as resilient execution of applications. The main objective of the thesis is to propose an approach to designing agent-based systems for many-core platforms in a rigorous manner. When designing such a system, we explore and integrate various dynamic reconfiguration mechanisms into the agents' functionality. The use of these mechanisms enhances the resilience of the underlying platform whilst maintaining performance at an acceptable level.
The design of the system proceeds according to a formal refinement approach which allows us to ensure correct behaviour of the system with respect to postulated properties. To enable analysis of the proposed system in terms of area overhead as well as performance, we explore an approach where the developed rigorous models are transformed into a high-level implementation language. Specifically, we investigate methods for deriving fault-free implementations from these models into, e.g., a hardware description language, namely VHDL.

Relevance: 30.00%

Abstract:

Nowadays, when most businesses are moving toward sustainability by providing or obtaining services from different vendors, the Service Level Agreement (SLA) has become very important both for providers/vendors and for users/customers. There are many ways to inform users about various services, their execution functionalities and even their non-functional/Quality of Service (QoS) aspects, through negotiating, evaluating or monitoring SLAs. However, these traditional SLAs do not actually cover eco-efficiency (green) issues or IT ethics issues for sustainability. That is why the green SLA (GSLA) should come into play. A GSLA is a formal agreement incorporating all the traditional commitments as well as green and ethics issues in the IT business sector. This GSLA research surveys traditional SLA parameters for various services, such as network, compute, storage and multimedia, in IT business areas. At the same time, the survey focuses on finding the gaps and on integrating these traditional SLA parameters with green issues for all the mentioned services. The research mainly addresses the integration of green parameters into existing SLAs, defining the GSLA with new green performance indicators and their measurable units. Finally, a GSLA template is defined compiling all the green indicators, such as recycling, radio waves, toxic material usage, obsolescence indication, ICT product life cycles and energy cost, for sustainable development. Moreover, human-interaction and IT ethics issues, such as security and privacy, user satisfaction, intellectual property rights, user reliability and confidentiality, also need to be added to the proposed GSLA. However, integrating new and existing performance indicators in the proposed GSLA for sustainable development could be difficult for ICT engineers.
Therefore, this research also addresses the management complexity of the proposed GSLA by designing a general informational model and analysing all the relationships, dependencies and effects between the various newly identified services under the sustainability pillars. Sustainability, however, can only be achieved through proper implementation of the newly proposed GSLA, which largely depends on monitoring the performance of the green indicators. Therefore, this research focuses on the monitoring and evaluation phase of GSLA indicators through their interactions with traditional basic SLA indicators, which would help achieve proper implementation of a future GSLA. Finally, the newly proposed GSLA informational model and monitoring aspects could help service providers/vendors design their future business strategy in this new, transitional sustainable society.
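The thesis's informational model is not reproduced here; as a rough illustration only, a GSLA template that groups traditional QoS indicators with green and ethics indicators might be sketched as follows (all class names, indicator names, units and thresholds are hypothetical assumptions, not taken from the thesis):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Indicator:
    """One measurable GSLA performance indicator.
    Names, units and targets are illustrative assumptions."""
    name: str
    unit: str
    target: float                    # upper bound the provider commits to
    measured: Optional[float] = None # latest monitored value, if any

    def met(self) -> bool:
        # An indicator is met when a measurement exists and is within target.
        return self.measured is not None and self.measured <= self.target

@dataclass
class GreenSLA:
    """A GSLA groups traditional QoS terms with green and ethics terms."""
    service: str
    qos: List[Indicator] = field(default_factory=list)
    green: List[Indicator] = field(default_factory=list)
    ethics: List[Indicator] = field(default_factory=list)

    def compliant(self) -> bool:
        # The agreement holds only if every indicator, of every kind, is met.
        return all(i.met() for i in self.qos + self.green + self.ethics)
```

In such a sketch, monitoring reduces to updating `measured` values and re-checking `compliant()`, mirroring the thesis's point that GSLA implementation depends on monitoring green indicators alongside the basic SLA indicators.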

Relevance: 30.00%

Abstract:

The main objective of this work was to study the possibilities of implementing laser cutting in a paper making machine. Laser cutting technology was considered as a replacement for the conventional methods used in paper making machines for longitudinal cutting, such as edge trimming at different stages of the paper making process and tambour roll slitting. Laser cutting of paper was first tested in the 1970s. Since then, laser cutting and processing have been applied to paper materials in industry with varying levels of success. Laser cutting can be employed for longitudinal cutting of the paper web in the machine direction. The most common conventional cutting methods in paper making machines are water jet cutting and rotating slitting blades. Cutting with a CO2 laser fulfils the basic requirements for cutting quality, applicability to the material and cutting speed in all locations where longitudinal cutting is needed. A literature review described the advantages, disadvantages and challenges of laser technology applied to cutting paper material, with particular attention to cutting a moving paper web. Based on the studied laser cutting capabilities and the problems of conventional cutting technologies, a preliminary selection of the most promising application area was carried out. Laser cutting (trimming) of the paper web edges in the wet end was estimated to be the most promising area for implementation. This assumption was based on the rate of web breaks: up to 64 % of the total number of web breaks occurred in the wet end, particularly at so-called open draws where the paper web is transferred unsupported by wire or felt. The distribution of web breaks in the machine cross direction revealed that defects of the paper web edge were the main cause of tearing initiation and consequent web breaks.
It was assumed that laser cutting could improve the tensile strength of the cut edge due to high cutting quality and the sealing effect of the edge after cutting; studies of laser ablation of cellulose support this claim. The linear energy needed for cutting was calculated with regard to the paper web properties in the intended cutting location, and the calculated value was verified with a series of laser cutting trials. The laser energy obtained in practice deviated from the calculated values, which can be explained by differences in heat transfer via radiation during laser cutting and by the different absorption characteristics of dry and moist paper. Laser-cut samples, both dry and moist (dry matter content about 25-40 %), were tested for strength properties. Tensile strength and strain at break of laser-cut samples were similar to the corresponding values of non-laser-cut samples. The chosen method, however, did not address the tensile strength of the laser-cut edge in particular, so the assumption of improved strength properties was not fully proved. The effect of laser cutting on possible contamination of mill broke (recycling of the trimmed edge) was also examined: laser-cut samples, both dry and moist, were tested for dirt particle content. The tests revealed that dust particles can accumulate on the surface of moist samples, which has to be taken into account to prevent contamination of the pulp suspension when trim waste is recycled. Material loss due to evaporation during laser cutting and the amount of solid residues after cutting were evaluated: edge trimming with a laser would produce 0.25 kg/h of solid residues and 2.5 kg/h of material lost to evaporation. Schemes for implementing laser cutting and the required laser equipment were discussed. In general, a laser cutting system would require two laser sources (one for each cutting zone), a set of beam transfer and focusing optics, and cutting heads.
To increase the reliability of the system, it was suggested that each laser source have double capacity, which would allow cutting to be performed with one laser source working at full capacity for both cutting zones. Laser technology is currently at the required level and does not need additional development. Moreover, the potential for speed increase is high owing to the availability of high-power laser sources, which can support the tendency of paper making machines toward higher speeds. The laser cutting system would require a special roll to support cutting; a scheme for such a roll was proposed, together with its integration into the paper making machine. Laser cutting can be performed at the central roll in the press section, before the so-called open draw where many web breaks occur, where it has the potential to improve the runnability of the machine. The economic performance of laser cutting was assessed by comparing a laser cutting system with water jet cutting working under the same conditions. Laser cutting would still be about two times more expensive than water jet cutting, mainly because of the high investment cost of laser equipment and the poor energy efficiency of CO2 lasers. Another factor is that laser cutting causes material loss due to evaporation, whereas water jet cutting causes almost none. Despite the difficulties of implementing laser cutting in a paper making machine, implementation can be beneficial; the crucial point is the possibility of improving the strength properties of the cut edge and consequently reducing the number of web breaks. The capacity of laser cutting to sustain cutting speeds exceeding the current speeds of paper making machines is another argument for considering laser cutting technology in the design of new high-speed machines.
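The linear-energy sizing step mentioned above follows from the simple relation between laser power and cutting speed. A back-of-the-envelope sketch, where all numeric values are illustrative assumptions rather than results from the study:

```python
def linear_cutting_energy(laser_power_w: float, cutting_speed_m_s: float) -> float:
    """Linear energy delivered to the cut, in J/m: beam power divided by
    the speed at which the cut advances along the web."""
    return laser_power_w / cutting_speed_m_s

def required_power(linear_energy_j_m: float, web_speed_m_s: float) -> float:
    """Laser power (W) needed to sustain a given linear energy at a given
    web speed, since the cut must keep pace with the moving web."""
    return linear_energy_j_m * web_speed_m_s

# Illustrative only: if cutting a moist web were to require 20 J/m and the
# web ran at 25 m/s (1500 m/min), each cutting zone would need 500 W; the
# doubled capacity suggested for redundancy would then mean 1 kW sources.
power = required_power(20.0, 25.0)  # 500.0 W
```

The same relation explains why rising machine speeds push the power requirement up linearly, and why high-power laser sources matter for the high-speed machines discussed above.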

Relevance: 30.00%

Abstract:

Finnish design and consulting companies deliver robust and cost-efficient steel structure solutions to a large number of manufacturing companies worldwide. The recently introduced EN 1090-2 standard obliges these companies to specify the execution class of steel structures for their customers. This, however, requires clarifying, understanding and interpreting the sophisticated procedure of execution class assignment. The objective of this research is to provide a clear explanation of, and guidance through, the process of execution class assignment for a given steel structure, and to support the implementation of the EN 1090-2 standard at Rejlers Oy, a Finnish design and consulting company. This objective is accomplished by creating a guideline for designers that elaborates the four-step process of execution class assignment for a steel structure or its part. Steps one to three define the consequence class (the projected consequences of structure failure), the service category (the hazards associated with the service use of the steel structure) and the production category (the peculiarities of the manufacturing process), based on the ductility class (the capacity of the structure to withstand deformations) and the behaviour factor (corresponding to the structure's seismic behaviour). The final step is the execution class assignment, taking into account the results of the previous steps. The main research method is an in-depth literature review of the European family of standards for steel structures. A further approach is a series of interviews with representatives of Rejlers Oy and its clients, the results of which were used to evaluate the level of EN 1090-2 awareness. Rejlers Oy will use the developed guideline to improve its services and achieve greater customer satisfaction.

Relevance: 30.00%

Abstract:

For the past decades, large-scale educational reforms have been elaborated and implemented in many countries and have often resulted in partial or complete failure. These results led researchers to study policy processes in order to address this particular challenge. Studies of implementation processes brought to light a causal relationship between the implementation process and the effectiveness of a reform. This study aims to describe the implementation process of educational change in Finland, which has produced effective educational reforms over the last 50 years. The case used for the purpose of this study is the national reform of undivided basic education (yhtenäinen peruskoulu) implemented at the end of the 1990s. This research therefore aims to describe how the Finnish undivided basic education reform was implemented. The research was carried out using a pluralist and structuralist approach to the policy process and was analyzed according to the hybrid model of the implementation process. The data were collected using a triangulation of methods, i.e. documentary research, interviews and questionnaires; they were qualitative and were analyzed using content analysis methods. This study concludes that the undivided basic education reform was applied in a very decentralized manner, which reflects the decentralized system present in Finland. Central authorities provided a clear vision of the purpose of the reform but did not control the implementation process; rather, they provided extensive support in the form of transmission of information and the development of collaborative networks. Local authorities had complete autonomy in terms of decision-making and the implementation process. Discussions, debates and decisions regarding implementation took place at the local level and included the participation of all actors present in the field.
Implementation methods differ from one region to another, which is a consequence of variation in the level of commitment of local actors as well as the diversity of local realities. The reform was implemented according to existing structures and values, which means that it was in cohesion with the context in which it was implemented. These results cannot be generalized to all implementation processes of educational change in Finland, but they give a good insight into what the model used in Finland might be. Future studies could attempt to confirm the model described here by studying other reforms that took place in Finland.

Relevance: 30.00%

Abstract:

This qualitative study explored secondary teachers' perceptions of scheduling in relation to pedagogy, curriculum, and observation of student learning. Its objective was to determine the best way to organize scheduling for the delivery of Ontario's new 4-year curriculum. Six participants were chosen: two were teaching in a semestered timetable, one in a traditional timetable, and three had experience in both schedules. Participants related a pressure-cooker "lived experience," with weaker students in the semester system experiencing a particularly harsh environment. The inadequate amount of time for review in content-heavy courses, gap-scheduling problems, catch-up difficulties for students missing classes, and the fast pace of semestering were identified as factors negatively impacting these students. Government testing adds to the pressure by shifting teachers' time and attention in the classroom from deeper learning to superficial coverage of material, from curriculum as lived to curriculum as text to be covered. Scheduling choice should be available in public education to accommodate the needs of all students. Curriculum guidelines need to be revamped to reflect the content that teachers believe is necessary for successful course delivery. Applied-level courses need to be developed for students who are not academically inferior but learn differently.

Relevance: 30.00%

Abstract:

This study had three purposes related to the effective implementation and practice of computer-mediated online distance education (C-MODE) at the elementary level: (a) to identify a preliminary framework of criteria or guidelines for effective implementation and practice, (b) to identify areas of C-MODE for which criteria or guidelines of effectiveness have not yet been developed, and (c) to develop an implementation and practice criteria questionnaire based on a review of the distance education literature, and to use the questionnaire in an exploratory survey of elementary C-MODE practitioners. Using the survey instrument, the beliefs and attitudes of 16 elementary C-MODE practitioners about what constitutes effective implementation and practice principles were investigated. Respondents, who included both administrators and instructors, provided information about themselves and the program in which they worked. They rated 101 individual criteria statements on a 5-point Likert scale with the values: 1 (Strongly Disagree), 2 (Disagree), 3 (Neutral or Undecided), 4 (Agree), 5 (Strongly Agree). Respondents also provided qualitative data by commenting on the individual statements or suggesting other statements they considered important. Eighty-two different statements or guidelines related to the successful implementation and practice of computer-mediated online education at the elementary level were endorsed. Responses to a small number of statements differed significantly by gender and years of experience. A new area for investigation, namely the role of parents, which has received little attention in the online distance education literature, emerged from the findings. The study also identified a number of other areas within an elementary context where additional research is necessary.
These included: (a) differences in the factors that determine learning in a distance education setting versus traditional settings, (b) elementary students' ability to function in an online setting, (c) the role and workload of instructors, (d) the importance of effective, timely communication with students and parents, and (e) the use of a variety of media.

Relevance: 30.00%

Abstract:

This qualitative study examines teachers' experiences implementing new standardized curricula in Ontario schools. These new curricula contained several policy changes and an expectations-based format which directed what knowledge and skills students were to demonstrate in each subject. This level of specificity of subject content served to control teachers in relation to curricula; however, the data suggested that, at the same time, teachers had enormous flexibility in terms of pedagogy. Four secondary teachers who were implementing a Grade 10 course in the 2000-2001 school year participated in the study. The qualitative framework supported the researcher's emphasis on examining the participants' perspectives on the implementation of expectations-based curricula. The data collected included transcripts of interviews conducted with the teacher participants and a representative of the Ontario Ministry of Education and Training, field notes, and a research journal. Many of the factors often cited in the literature as influencing implementation practices were found to have affected the participants' experiences of curriculum implementation: time, professional development, and teachers' beliefs, particularly concerning students. In addition, the format of the policy documents proved to both control and free teachers during the implementation process. Participants believed that the number of specific expectations did not provide them an opportunity to add content to the curriculum; at the same time, teachers also noted that the general format of the policy document allowed them to direct instruction to match students' needs and their own teaching preferences. Alignment between teachers' beliefs about education and their understanding of the new curriculum affected the ways in which many participants adapted during the implementation process.

Relevance: 30.00%

Abstract:

In 2004, the Ontario Ministry of Health Promotion and Sport (MHPS) established Active2010: Ontario’s Sport and Physical Activity Strategy. Active2010 demonstrates a strong provincial government policy emphasis regarding sport participation and physical activity (PA), and identifies the school system as a primary vehicle for enhancing PA levels. This study examines the sport and PA initiatives MHPS is undertaking within the school system. Theoretical context regarding neo-liberalism in Canada and Canadian sport frames this study, while a revised version of Van Meter and Van Horn’s (1975) top-down model of policy implementation guides the research process. A case study of the school-based PA system is conducted which relies on the analysis of 11 semi-structured interviews and 47 official organizational documents. Four emergent categories of Jurisdictional Funding, Coercive Policy, Sector Silos, and Community Champions are identified. Additional insight is provided regarding neo-liberalism, provincial level government, interministerial collaboration, and government/non-profit sector partnership.

Relevance: 30.00%

Abstract:

The purpose of my research was to develop and refine pedagogic approaches, and establish fitness baselines to adapt fitness and conditioning programs for Moderate-functioning ASD individuals. I conducted a seven-week study with two teens and two trainers. The trainers implemented individualized fitness and conditioning programs that I developed. I conducted pre and post fitness baselines for each teen, a pre and post study interview with the trainers, and recorded semi-structured observations during each session. I used multi-level, within-case and across case analyses, working inductively and deductively. My findings indicated that fundamental movement concepts can be used to establish fitness baselines and develop individualized fitness programs. I tracked and evaluated progressions and improvements using conventional measurements applied to unconventional movements. This process contributed to understanding and making relevant modifications to activities as effective pedagogic strategies for my trainers. Further research should investigate fitness and conditioning programs with lower functioning ASD individuals.

Relevance: 30.00%

Abstract:

Introduction: This article presents an alternative intervention for the prevention and control of back pain among workers at a plant producing geotextiles for construction, who are exposed to manual handling and awkward postures, through the implementation of a Back School using the CORE technique. This technique trains the stabilizing musculature of the spine; its benefit is to give the muscular complex of the back stability, avoid musculoskeletal injuries and improve posture. Objective: To present the results of implementing the Back School using the CORE technique for the prevention of back pain in a population of forty-eight male workers. Materials and methods: The Back School began with awareness talks by the occupational health physician explaining its objectives and benefits to all participants. Once this activity was done, all plant employees were evaluated to establish their health status through the PAR-Q questionnaire; they were surveyed on their perception of pain using the visual analogue scale (VAS), and spinal stability was determined through the CORE assessment in order to define the training plan. Re-evaluations were then carried out every six months, along with a perception survey of the participants to identify the impact of the Back School on the two variables of interest (pain perception and spinal stability). Results: According to the VAS, the number of asymptomatic workers increased by 12 %; based on the satisfaction survey, 94 % of the population reported that the technique decreased muscle fatigue at the lumbar level, and 96 % reported an improvement in the performance of their work activities.
Discussion: Following analysis of all the results, it can be concluded that Back School practice using the CORE technique contributes to the prevention and/or control of symptoms at the lumbar level in a production-sector population exposed to risks derived from physical load, provided that its continuous development is ensured and it is supervised by a competent professional.

Relevance: 30.00%

Abstract:

This thesis presents the experience gained in developing an intelligent supervisory system to improve the management of wastewater treatment plants, implementing it in a real plant (EDAR Granollers) and evaluating its day-to-day operation in typical plant situations. The supervisory system combines and integrates classical control tools for treatment plants (an automatic controller of the dissolved oxygen level in the biological reactor, the use of descriptive process models, etc.) with tools from the field of artificial intelligence (knowledge-based systems, specifically expert systems and case-based reasoning systems, and neural networks). The document is structured in 9 chapters. A first, introductory part reviews the current state of WWTP control and explains why the management of these processes is so complex (chapter 1). This introductory chapter, together with chapter 2, which presents the background of this thesis, serves to establish the objectives of this work (chapter 3). Chapter 4 then describes the peculiarities and specifics of the plant chosen for implementing the supervisory system. The following chapters present the work done to develop the rule-based system, or expert system (chapter 6), and the case-based system (chapter 7). Chapter 8 describes the integration of these two reasoning tools into a distributed multi-level architecture. Finally, a last chapter covers the evaluation (verification and validation), first of each tool separately and then of the overall system against real situations arising at the treatment plant.

Relevance: 30.00%

Abstract:

The service independence and flexibility that characterize ATM networks make the control problems of such networks very critical. One of the main challenges in ATM networks is to design traffic control mechanisms that enable both economically efficient use of network resources and the desired quality of service for higher-layer applications. The window flow control mechanisms of traditional packet-switched networks are not well suited to real-time services at the speeds envisaged for future networks. In this work, the use of the Probability of Congestion (PC) as a bandwidth decision parameter is presented. The validity of using the PC is compared with QoS parameters in bufferless environments, where only the cell loss ratio (CLR) parameter is relevant. The convolution algorithm is a good solution for connection admission control (CAC) in ATM networks with small buffers. If the source characteristics are known, the actual CLR can be estimated very well; furthermore, this estimation is always conservative, allowing the network performance guarantees to be retained. Several experiments were carried out and investigated to explain the deviation between the proposed method and simulation, considering time parameters for burst length and different buffer sizes. Experiments to establish the limits of burst length with respect to buffer size conclude that a minimum buffer size is necessary to achieve adequate cell contention. Note that propagation delay is a hard limit for long-distance and interactive communications, so small buffers must be used in order to minimise delay. Under these premises, the convolution approach is the most accurate method for bandwidth allocation, giving sufficient accuracy in both homogeneous and heterogeneous networks. However, the convolution approach has a considerable computational cost and a high number of accumulated calculations.
To overcome these drawbacks, a new evaluation method is analysed: the Enhanced Convolution Approach (ECA). In the ECA, traffic is grouped into classes of identical parameters. By using the multinomial distribution function instead of the formula-based convolution, a partial state corresponding to each traffic class is obtained. Finally, the global state probabilities are evaluated by multi-convolution of the partial results. This method avoids accumulated calculations and saves storage, especially in complex scenarios. Sorting is the dominant cost factor for the formula-based convolution, whereas evaluation is the dominant cost factor for the enhanced convolution. A set of cut-off mechanisms is introduced to reduce the complexity of the ECA evaluation. The ECA also computes the CLR for each class j of traffic (CLRj), and an expression for evaluating CLRj is presented. We conclude that, by combining the ECA method with cut-off mechanisms, using the ECA in real-time CAC environments as a single-level scheme is always possible.
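The grouping-and-convolution idea behind the ECA can be illustrated with a small sketch: each class of n identical on/off sources yields a binomial (two-outcome multinomial) partial state, and the partial states are then multi-convolved into the global demand distribution, from which the probability of congestion is the mass above link capacity. This is a simplified illustration of the approach under those assumptions, not the thesis's actual algorithm (it omits the cut-off mechanisms and the per-class CLRj evaluation, and all parameter values are made up):

```python
from math import comb

def class_pmf(n_sources, p_active, peak_rate):
    """Partial state for one class of n identical on/off sources:
    k active sources demand k * peak_rate bandwidth units, with
    binomially distributed k."""
    pmf = {}
    for k in range(n_sources + 1):
        prob = comb(n_sources, k) * p_active**k * (1 - p_active)**(n_sources - k)
        pmf[k * peak_rate] = prob
    return pmf

def convolve(pmf_a, pmf_b):
    """Convolution of two independent demand distributions."""
    out = {}
    for ra, pa in pmf_a.items():
        for rb, pb in pmf_b.items():
            out[ra + rb] = out.get(ra + rb, 0.0) + pa * pb
    return out

def congestion_probability(classes, capacity):
    """Multi-convolve the per-class partial states, then sum the
    probability mass of aggregate demand exceeding link capacity."""
    total = {0: 1.0}
    for n_sources, p_active, peak_rate in classes:
        total = convolve(total, class_pmf(n_sources, p_active, peak_rate))
    return sum(prob for rate, prob in total.items() if rate > capacity)

# Hypothetical mix: 10 sources at peak rate 2, 5 sources at peak rate 4.
pc = congestion_probability([(10, 0.3, 2), (5, 0.5, 4)], capacity=20)
```

Grouping identical sources into one binomial partial state is what saves the accumulated per-source convolutions of the formula-based approach; a CAC scheme would admit a new connection only while the resulting PC stays below the committed threshold.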