905 results for Performance evaluation
Abstract:
Since the liberalization of the electricity markets, the energy sector has seen ever greater demand for advanced information systems specialized in energy data management. New legislation and future comprehensive data collection systems, such as smart meters and smart grids, bring an ever larger stream of data to be processed. A modern energy information system must be able to meet this challenge and serve customer requirements efficiently without degrading process performance. The system's processes must also be scalable, so that future increases in processing needs can be managed. This thesis describes the central components of a modern energy information system related to energy data management and storage. It also presents the basic principle of smart meters and the benefits they bring to the energy sector, and describes visions for possible implementations of future smart grids. The thesis introduces the central concepts related to performance, together with the key performance metrics and performance requirements. There are various methods for carrying out a system performance evaluation; one of these is described here along with its central principles. Various techniques are used for performance analysis, of which system measurement is presented in more detail. The thesis also includes a case study analyzing two development versions of a process used for importing measurement data and their performance characteristics. The comparison shows that the new version is clearly faster than the previous one. The case study also determines the optimal number of parallel processes from a performance standpoint and examines the scalability of the process. The study finds that the new development version scales linearly.
Abstract:
Airlift reactors are pneumatically agitated reactors that have been widely used in the chemical, petrochemical, and bioprocess industries, for example in fermentation and wastewater treatment. Computational Fluid Dynamics (CFD) has become a popular approach for the design, scale-up and performance evaluation of such reactors. In the present work, numerical simulations of internal-loop airlift reactors were performed using the transient Eulerian model in the CFD package ANSYS Fluent 12.1. The turbulence in the liquid phase is described using the κ-ε model. Global hydrodynamic parameters such as gas holdup, gas velocity and liquid velocity have been investigated over a range of superficial gas velocities, in both 2D and 3D simulations. Moreover, the influence of geometry and scale on the reactor has been considered. The results suggest that both geometry and scale have significant effects on the hydrodynamic parameters, which may in turn have substantial effects on reactor performance. Grid refinement and the effect of time-step size are also discussed. Numerical calculations with a gas-liquid-solid three-phase flow system have been carried out to investigate the effect of solids loading, solid particle size and solid density on the hydrodynamic characteristics of an internal-loop airlift reactor at different superficial gas velocities. It was observed that the averaged gas holdup decreases significantly with increasing slurry concentration. The simulations show that the riser gas holdup decreases with increasing solid particle diameter. In addition, it was found that the averaged solid holdup in the riser section increases with increasing solid density. These results reveal that CFD has excellent potential for simulating two-phase and three-phase flow systems.
Abstract:
Hydrological models are important tools used in water resource planning and management. The aim of this work was therefore to calibrate and validate, on a daily time scale, the SWAT model (Soil and Water Assessment Tool) for the watershed of Galo Creek, located in Espírito Santo State. The study used georeferenced maps of relief, soil type and land use, in addition to historical daily time series of basin climate and streamflow. Time series covering the periods Jan 1, 1995 to Dec 31, 2000 and Jan 1, 2001 to Dec 20, 2003 were used for calibration and validation, respectively. Model performance was evaluated using the Nash-Sutcliffe coefficient (ENS) and the percentage of bias (PBIAS). SWAT was also evaluated, based on mean absolute error, in the simulation of the following hydrological variables: maximum and minimum annual daily flows and the minimum reference flows Q90 and Q95. ENS and PBIAS were, respectively, 0.65 and 7.2% for calibration and 0.70 and 14.1% for validation, indicating satisfactory model performance. SWAT adequately simulated the minimum annual daily flows and the reference flows Q90 and Q95; it was not suitable for simulating maximum annual daily flows.
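The two goodness-of-fit statistics named above have simple closed forms. A minimal sketch follows, using hypothetical observed/simulated flow values rather than the thesis data:

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / total variance of observations.
    1.0 is a perfect fit; values above roughly 0.5 are often judged satisfactory."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / ss_tot


def pbias(obs, sim):
    """Percent bias: positive values mean the model underestimates on average
    (convention: PBIAS = 100 * sum(obs - sim) / sum(obs))."""
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)


# Hypothetical daily flows (m3/s): observed vs. simulated
obs = [1.0, 2.0, 3.0, 4.0]
sim = [1.1, 1.9, 3.2, 3.8]
print(nash_sutcliffe(obs, sim))  # ≈ 0.98
print(pbias(obs, sim))           # ≈ 0: over- and under-predictions cancel here
```

With these conventions, an ENS of 0.65-0.70 and a PBIAS within ±15%, as reported above, fall inside commonly cited "satisfactory" ranges for daily streamflow simulation.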
Abstract:
Few people see both the opportunities and the threats that IT legacy presents today. On one hand, effective legacy management can bring substantial hard savings and a smooth transition to the desired future state. On the other hand, its mismanagement contributes to serious operational business risks, as old systems are not as reliable as business users require. This thesis offers one perspective on dealing with IT legacy: through effective contract management, as a component of achieving Procurement Excellence in IT, thus bridging IT delivery departments, IT procurement, business units, and suppliers. It develops a model for assessing the impact of improvements on the contract management process, together with a set of tools and advice regarding analysis and improvement actions. The thesis conducted a case study to present and justify the implementation of Lean Six Sigma in an IT legacy contract management environment. Lean Six Sigma proved successful, and this thesis presents and discusses all the steps necessary, and the pitfalls to avoid, to achieve breakthrough improvement in IT contract management process performance. For the IT legacy contract management process, two improvements require special attention and can easily be copied to any organization. The first is the issue of diluted contract ownership, which stops all improvements because people do not know who is responsible for performing the actions. The second is the contract management performance evaluation tool, which can be used for monitoring and for identifying outlying contracts and opportunities for improvement in the process. The study resulted in valuable insight into the benefits of applying Lean Six Sigma to improve IT legacy contract management, as well as into how Lean Six Sigma can be applied in an IT environment. Managerial implications are discussed.
It is concluded that the use of the data-driven Lean Six Sigma methodology for improving existing IT contract management processes is a significant addition to established best practices in contract management.
Abstract:
Lignocellulosic biomasses (e.g., wood and straws) are a potential renewable source for the production of a wide variety of chemicals that could replace those currently produced by the petrochemical industry. This would lead to lower greenhouse gas emissions and waste amounts, and to economic savings. Many pathways are available for manufacturing chemicals from lignocellulosic biomasses. One option is to hydrolyze the cellulose and hemicelluloses of these biomasses into monosaccharides using concentrated sulfuric acid as catalyst. This process is an efficient method for producing monosaccharides, which are valuable platform chemicals; other valuable products are also formed in the hydrolysis. Unfortunately, concentrated acid hydrolysis has been deemed unfeasible, mainly due to the high chemical consumption resulting from the need to remove sulfuric acid from the obtained hydrolysates prior to downstream processing of the monosaccharides. Traditionally, this has been done by neutralization with lime, which results in high chemical consumption. In addition, the by-products formed in the hydrolysis are not removed and may thus hinder monosaccharide processing. To improve the feasibility of concentrated acid hydrolysis, the chemical consumption should be decreased by recycling the sulfuric acid without neutralization. Furthermore, the monosaccharides and the other products formed in the hydrolysis should be recovered selectively for efficient downstream processing. Selective recovery of the hydrolysis by-products would bring additional economic benefits to the process due to their high value. In this work, the use of chromatographic fractionation for recycling sulfuric acid and for the selective recovery of the main components from the hydrolysates formed in concentrated acid hydrolysis was investigated.
Chromatographic fractionation based on electrolyte exclusion, with gel-type strong acid cation exchange resins in acid (H+) form as the stationary phase, was studied. A systematic experimental and model-based study of the separation task at hand was conducted. The phenomena affecting the separation were determined and their effects elucidated. Mathematical models that accurately take these phenomena into account were derived and used in the simulation of the fractionation process. The main components of the concentrated acid hydrolysates (sulfuric acid, monosaccharides, and acetic acid) were included in this model. The performance of the fractionation process was investigated experimentally and by simulations, and the use of different process options was also studied. Sulfuric acid was found to have a significant co-operative effect on the sorption of the other components. This brings about interesting and beneficial effects in the column operations; it is especially beneficial for the separation of sulfuric acid and the monosaccharides. Two different approaches to modelling the sorption equilibria were investigated in this work: a simple empirical approach and a thermodynamically consistent approach (the Adsorbed Solution theory). Accurate modelling of the phenomena observed in this work was found to be possible using the simple empirical models; the use of the Adsorbed Solution theory is complicated by the nature of the theory and the complexity of the studied system. In addition to the sorption models, a dynamic column model that accounts for the volume changes of the gel-type resins as changes in resin bed porosity was also derived. Using chromatography, all the main components of the hydrolysates can be recovered selectively, and the sulfuric acid consumption of the hydrolysis process can be lowered considerably.
Investigation of the performance of the chromatographic fractionation showed that the highest separation efficiency in this separation task is obtained with a gel-type resin with a high crosslinking degree (8 wt. %), especially when the hydrolysates contain high amounts of acetic acid. In addition, the concentrated acid hydrolysis should be done with as low a sulfuric acid concentration as possible to obtain good separation performance. The column loading and flow rate also have large effects on the performance. In this work, it was demonstrated that when the fractions obtained in the chromatographic fractionation are recycled to preceding unit operations, these unit operations should be included in the performance evaluation of the fractionation. When this was done, the separation performance and the feasibility of the concentrated acid hydrolysis process were found to improve considerably. The use of multi-column chromatographic fractionation processes, namely the Japan Organo process and the Multi-Column Recycling Chromatography process, was also investigated. In the studied case, neither of these processes could compete with the single-column batch process in productivity. However, due to its internal recycling steps, Multi-Column Recycling Chromatography was found to be superior to the batch process when the product yield and the eluent consumption were taken into account.
Abstract:
The purpose of this paper is to examine the stability and predictive abilities of the beta coefficients of individual equities in the Finnish stock market. As beta is widely used in several areas of finance, including risk management, asset pricing and performance evaluation among others, it is important to understand its characteristics and find out whether its estimates can be trusted and utilized.
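Beta is conventionally estimated as the slope of an OLS regression of the stock's returns on the market's returns. A minimal sketch follows, with hypothetical return series (illustrative only, not the paper's Finnish market data):

```python
def estimate_beta(stock_returns, market_returns):
    """Market-model beta: OLS slope = Cov(r_stock, r_market) / Var(r_market)."""
    n = len(market_returns)
    mean_m = sum(market_returns) / n
    mean_s = sum(stock_returns) / n
    cov = sum((m - mean_m) * (s - mean_s)
              for m, s in zip(market_returns, stock_returns))
    var = sum((m - mean_m) ** 2 for m in market_returns)
    return cov / var


# Hypothetical monthly returns; the stock moves 1.5x the market with no noise,
# so its estimated beta is 1.5 by construction.
market = [0.01, -0.02, 0.03, 0.00, 0.015]
stock = [1.5 * m for m in market]
print(estimate_beta(stock, market))  # ≈ 1.5
```

Stability questions like those the paper raises amount to asking whether this slope, re-estimated over successive rolling windows, stays close to its earlier values.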
Abstract:
In the field of assessing student performance at the end of compulsory schooling, alongside traditional school assessments, the Programme for International Student Assessment (PISA), coordinated by the Organisation for Economic Co-operation and Development (OECD), has developed. This programme has achieved great international renown and seeks to establish itself as the programme that assesses students' competencies. This thesis explores the extent to which PISA assessments can predict students' academic performance during the transition from the end of secondary school to college-level studies in Quebec. We constructed a variable measuring the evolution of academic performance between secondary school and Cégep. Our results tend to confirm that PISA assessments can partly predict the continuity of good academic performance after controlling for life-course contextual variables. However, prior school assessments explain this continuity of good academic performance in the first year of postsecondary education better than the PISA assessments do. Nevertheless, again after controlling for contextual variables, prior school assessments are unable to predict the difference between weak and strong academic performance across the secondary-to-college transition; only the PISA assessments retain a small explanatory share for these differences.
Abstract:
The appropriation of feedback has been the subject of several theoretical models in the context of performance evaluation, notably by Ilgen, Fisher and Taylor (1979), who proposed a model explaining how feedback comes to elicit behavioural change. This model has been taken up in various research fields without being adapted to the specific context in which the feedback was delivered. This thesis proposes a model of feedback appropriation inspired by the work of Ilgen et al. (1979) but reflecting the specificities of potential assessment. The model comprises three stages: cognitive appropriation (composed of acceptance and awareness), the intention to act on the feedback, and behavioural appropriation. The thesis consists of three articles pursuing the following objectives: (1) propose a theoretical model of feedback appropriation adapted to the context of potential assessment; (2) validate a measure of cognitive appropriation combining acceptance and awareness; (3) empirically test the feedback appropriation model in a potential assessment context. The first article begins by delineating the foundations of potential assessment and defining feedback appropriation. On this basis, the model of Ilgen et al. (1979) is then revised and modified. The links between the different stages of the model are subsequently supported by theories and empirical studies. The article concludes with a reflection on the theoretical and practical implications of the revised model. The objective of the second article is to develop and validate a measure of cognitive appropriation comprising two dimensions, namely acceptance and awareness. To this end, two studies were conducted with candidates who had received feedback following a potential assessment.
Exploratory factor analyses (N = 111) and then confirmatory factor analyses (N = 178) were carried out for this purpose. Each dimension of cognitive appropriation was also related to criterion variables in order to gather evidence supporting the validity of the instrument. Most of the indices obtained confirm the presence of the two anticipated dimensions, and conclusions are drawn on the basis of these results. The third article aims to empirically verify the anticipated links between the components of the feedback appropriation model detailed in the first article. The first two stages of the model, cognitive appropriation and the intention to act, were measured via a questionnaire completed immediately after the feedback by 178 candidates. These candidates were contacted three months later to complete a second questionnaire on the final stage, behavioural appropriation, and 97 of them responded. The results of structural equation modelling analyses support the model, and a discussion of the implications of these results follows.
Abstract:
This research documents how French teachers at the Lycée level practise formative assessment in the context of overcrowded classes in Senegal. The country's recent adoption of the competency-based approach calls for prioritizing this function of assessment, given its potential for advancing student learning (Allal & Mottier Lopez, 2005; Black & Wiliam, 2009; Morrissette, 2010). However, the ministerial guidelines concerning its implementation are very general, and research to date has left its application in overcrowded classes in the shadows. Drawing on the field of practical knowledge (Schön, 1983) and on an interactive and situated view of formative assessment (Mottier Lopez, 2007; Morrissette, 2010), I conducted a collaborative research process with 14 French teachers working in the same Lycée, punctuated by 6 group interviews. A first level of analysis described practices attached to three dimensions of formative assessment: analysis of the practice context, negotiated construction of knowledge, and class-size management. A second level of analysis of their practices in a context of "cultural strangeness" (Douville, 2002) made it possible to conceptualize their assessment know-how in relation to how they interpret the problems students face, their conception of error, and the ways they reinvent the traditional modes of assessment rooted in the school culture.
Abstract:
Queueing systems in which arriving customers who find all servers and waiting positions (if any) occupied may retry for service after a period of time are called retrial queues, or queues with repeated attempts. This study has two objectives. The first is to introduce orbital search in retrial queueing models, which makes it possible to minimize the idle time of the server; if holding costs and the cost of using customer search are introduced, the results obtained can be used for optimal tuning of the parameters of the search mechanism. The second is to provide insight into the link between the corresponding retrial queue and the classical queue. We observe that when the search probability Pj = 1 for all j, the model reduces to the classical queue, and when Pj = 0 for all j, the model becomes the retrial queue. The study discusses the performance evaluation of the single-server retrial queue, determined using a Poisson process. It then discusses the structure of the busy period and its analysis in terms of Laplace transforms, and provides a direct method of evaluating the first and second moments of the busy period. It further discusses the M/PH/1 retrial queue with disasters to the unit in service and orbital search, and a multi-server retrial queueing model (MAP/M/c) with search of customers from the orbit; the MAP is a convenient tool for modelling both renewal and non-renewal arrivals. Finally, the present model deals with back-and-forth movement between the classical queue and the retrial queue: in this model, as the orbit size increases, the retrial rate increases correspondingly, thereby reducing the idle time of the server between services.
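The limiting behaviour described above (search probability 1 recovering classical-queue behaviour, 0 the pure retrial queue) can be illustrated with a small event-driven simulation. The parameter values and the single constant search probability p_search below are hypothetical simplifications, not the thesis's models:

```python
import random


def simulate_retrial_queue(lam=0.7, mu=1.0, theta=0.5, p_search=0.0,
                           horizon=50_000.0, seed=42):
    """Continuous-time M/M/1 retrial queue with orbital search.

    On a service completion, with probability p_search the server immediately
    fetches a customer from the orbit (if any); otherwise orbiting customers
    retry individually at rate theta.  p_search = 1 behaves like the classical
    queue; p_search = 0 is the pure retrial queue.
    Returns the time-averaged number of customers in the system."""
    rng = random.Random(seed)
    t, busy, orbit, area = 0.0, False, 0, 0.0
    while t < horizon:
        rates = [("arrival", lam)]
        if busy:
            rates.append(("departure", mu))
        elif orbit > 0:
            rates.append(("retrial", orbit * theta))
        total = sum(r for _, r in rates)
        dt = rng.expovariate(total)
        n = orbit + (1 if busy else 0)
        area += n * min(dt, horizon - t)   # integrate number-in-system
        t += dt
        if t >= horizon:
            break
        u, event = rng.uniform(0.0, total), rates[-1][0]
        for name, r in rates:              # pick which event fired
            if u < r:
                event = name
                break
            u -= r
        if event == "arrival":
            if busy:
                orbit += 1                 # blocked arrival joins the orbit
            else:
                busy = True
        elif event == "departure":
            if orbit > 0 and rng.random() < p_search:
                orbit -= 1                 # search: next customer starts at once
            else:
                busy = False
        else:                              # successful retrial seizes the server
            orbit -= 1
            busy = True
    return area / horizon


# With search the server grabs orbit customers immediately, so fewer
# customers accumulate than in the pure retrial case.
print(simulate_retrial_queue(p_search=1.0) < simulate_retrial_queue(p_search=0.0))
```

For p_search = 1 the long-run mean number in system should approach the classical M/M/1 value ρ/(1-ρ) ≈ 2.33 for these parameters.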
Abstract:
Extensive use of the Internet, coupled with the tremendous growth of e-commerce and m-commerce, has created a huge demand for information security. The Secure Socket Layer (SSL) protocol is the most widely used security protocol on the Internet that meets this demand. It provides protection against eavesdropping, tampering and forgery. The cryptographic algorithms RC4 and HMAC have been used to achieve security services such as confidentiality and authentication in SSL, but recent attacks against RC4 and HMAC have raised questions about the confidence placed in these algorithms. Hence two novel cryptographic algorithms, MAJE4 and MACJER-320, have been proposed as substitutes for them. The focus of this work is to demonstrate the performance of these new algorithms and suggest them as dependable alternatives to satisfy the need for security services in SSL. The performance evaluation has been done by practical implementation.
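As background, the HMAC construction that SSL relies on for authentication can be exercised with a few lines of standard-library code. The key and message below are placeholders, and this illustrates generic HMAC-SHA256, not the proposed MACJER-320:

```python
import hmac
import hashlib

# Placeholder key and record; illustrates the generic HMAC construction.
key = b"shared-secret-key"
record = b"GET /index.html HTTP/1.1"

# Sender computes an authentication tag over the record.
tag = hmac.new(key, record, hashlib.sha256).hexdigest()

# The verifier recomputes the tag and compares in constant time, so that
# timing differences do not leak how many leading characters matched.
expected = hmac.new(key, record, hashlib.sha256).hexdigest()
ok = hmac.compare_digest(tag, expected)
print(ok)  # True
```

An attacker without the key cannot forge a valid tag, which is the authentication service the abstract refers to.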
Abstract:
One of the fastest expanding areas of computer exploitation is embedded systems, whose prime function is not computing, but which nevertheless require information processing in order to carry out that prime function. Advances in hardware technology have made multi-microprocessor systems a viable alternative to uniprocessor systems in many embedded application areas. This thesis reports the results of investigations carried out on multi-microprocessors oriented towards embedded applications, with a view to enhancing throughput and reliability. An ideal controller for multiprocessor operation is developed which smoothens the sharing of routines and enables more powerful and efficient code/data interchange; results of its performance evaluation are appended. A typical application scenario is presented, which calls for classifying tasks based on characteristic features that were identified. The different classes are introduced along with a partitioned storage scheme, and a theoretical analysis is given. A review of schemes available for reducing disc access time is carried out and a new scheme presented, which is found to speed up database transactions in embedded systems. The significance of software maintenance and adaptation in such applications is highlighted, and a novel scheme for providing a maintenance folio to system firmware is presented, along with experimental results. Processing reliability can be enhanced if a facility exists to check whether a particular instruction in a stream is appropriate; estimating the likelihood of occurrence of a particular instruction is more tractable when the instruction set is small. A new organisation is derived to form the basis for further work, and some early results that help steer its course are presented.
Abstract:
In the present scenario of energy demand overtaking energy supply, top priority is given to energy conservation programs and policies. Most process plants are operated on a continuous basis and consume large quantities of energy. Efficient management of a process system can lead to energy savings, improved process efficiency, lower operating and maintenance costs, and greater environmental safety. Reliability and maintainability of the system are usually considered at the design stage and depend on the system configuration. However, with the growing need for energy conservation, most existing process systems are either modified or in a state of modification with a view to improving energy efficiency. Often these modifications result in a change in system configuration, thereby affecting system reliability. It is important that system modifications for improving energy efficiency should not come at the cost of reliability. Any new proposal for improving the energy efficiency of a process or its equipment must prove economically feasible to gain acceptance for implementation. To establish the economic feasibility of a new proposal, the general approach is to compare the benefits that can be derived over the lifetime, as well as the operating and maintenance costs, with the investment to be made. Quite often the reliability aspects (or the loss due to unavailability) are not taken into consideration, yet plant availability is a critical factor in the economic performance evaluation of any process plant. The focus of the present work is to study the effect of system modifications for improving energy efficiency on system reliability. A generalized model for the valuation of a process system incorporating reliability is developed and used as a tool for the analysis.
It can provide awareness of the potential performance improvements of the process system and can be used to arrive at the change in process system value resulting from a system modification. The model also arrives at the payback of the modified system by taking reliability aspects into consideration, and is used to study the effect of various operating parameters on system value. The concept of breakeven availability is introduced, and an algorithm for allocating component reliabilities of the modified process system, based on the breakeven system availability, is also developed. The model was applied to various industrial situations.
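The core idea, that a modification's payback should account for the plant's availability, can be sketched in a few lines. This is an illustrative simplification, not the generalized valuation model developed in the thesis, and all figures are hypothetical:

```python
def payback_years(investment, annual_benefit, availability, annual_om_cost=0.0):
    """Simple payback period when the annual benefit accrues only while the
    plant is available.  Illustrative only: it shows why ignoring
    availability understates the payback period."""
    net_annual = annual_benefit * availability - annual_om_cost
    if net_annual <= 0:
        return float("inf")  # the modification never pays back
    return investment / net_annual


# Hypothetical figures: a 1000-unit investment returning 500/yr at full
# availability.  At 80% availability the payback stretches from 2.0 to 2.5 yr.
print(payback_years(1000.0, 500.0, 1.0))  # 2.0
print(payback_years(1000.0, 500.0, 0.8))  # ≈ 2.5
```

In this simplified picture, the breakeven availability is the value at which the net annual benefit just recovers the investment within the required payback period.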
Abstract:
Learning Disability (LD) is a general term that describes specific kinds of learning problems. It is a neurological condition that affects a child's brain and impairs the ability to carry out one or many specific tasks. Children with learning disabilities are neither slow nor intellectually impaired; the disorder can make it difficult for a child to learn as quickly, or in the same way, as a child who is not affected by it. An affected child can have normal or above-average intelligence, yet may have difficulty paying attention, with reading or letter recognition, or with mathematics. It does not mean that children who have learning disabilities are less intelligent; in fact, many are more intelligent than the average child. Learning disabilities vary from child to child: one child with LD may not have the same kind of learning problems as another. There is no cure for learning disabilities and they are life-long; however, children with LD can be high achievers and can be taught ways to work around the disability. In this research work, data mining using machine learning techniques is used to analyze the symptoms of LD, establish interrelationships between them, and evaluate the relative importance of these symptoms. To increase the diagnostic accuracy of learning disability prediction, a knowledge-based tool built on statistical machine learning (data mining) techniques, achieving high accuracy on the knowledge obtained from clinical information, is proposed. The basic idea of the developed knowledge-based tool is to increase the accuracy of learning disability assessment and to reduce the time required for it. Different statistical machine learning techniques in data mining are used in the study.
Identifying the important parameters of LD prediction using data mining techniques, identifying the hidden relationships between the symptoms of LD, and estimating the relative significance of each symptom are also objectives of this research work. The developed tool has many advantages over traditional checklist-based methods of determining learning disabilities. To improve the performance of various classifiers, we developed several preprocessing methods for the LD prediction system. A new system based on fuzzy and rough set models is also developed for LD prediction, and the importance of preprocessing is studied here as well. A Graphical User Interface (GUI) is designed to provide an integrated knowledge-based tool for predicting LD as well as its degree. The tool stores the details of the children in a student database and retrieves their LD reports as and when required. The present study demonstrates the effectiveness of the tool developed, based on various machine learning techniques; it identifies the important parameters of LD and accurately predicts learning disability in school-age children. This thesis makes several major contributions in technical, general and social areas. The results are very beneficial to parents, teachers and institutions, who can diagnose a child's problem at an early stage and seek the proper treatment or counselling at the right time, so as to avoid academic and social losses.
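As an illustration of the family of classifiers such a tool might draw on, here is a minimal nearest-neighbour sketch over made-up symptom vectors. The features, labels and data are hypothetical, not the study's clinical data or its actual model:

```python
def knn_predict(train, query, k=3):
    """Majority vote among the k nearest training examples
    (squared Euclidean distance).  train is a list of
    (feature_vector, label) pairs."""
    ranked = sorted(train,
                    key=lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], query)))
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)


# Hypothetical symptom scores (e.g., reading difficulty, attention score)
# paired with made-up labels.
training = [
    ((1, 1), "no-LD"), ((0, 2), "no-LD"), ((2, 0), "no-LD"),
    ((7, 8), "LD"), ((8, 7), "LD"), ((9, 9), "LD"),
]
print(knn_predict(training, (8, 8)))  # LD
print(knn_predict(training, (1, 0)))  # no-LD
```

Real diagnostic tools of the kind described above would be trained and validated on clinical checklists rather than toy coordinates, but the prediction step has this general shape.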
Abstract:
A new fast stream cipher, MAJE4, is designed and developed with a variable key size of 128 or 256 bits. The randomness properties of the stream cipher are analysed using statistical tests. The performance of the stream cipher is evaluated in comparison with another fast stream cipher called JEROBOAM. The focus is to generate a long, unpredictable key stream with better performance, which can be used for cryptographic applications.
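Statistical randomness testing of a keystream typically starts with a frequency (monobit) test in the style of the NIST statistical test suite. A minimal generic sketch follows; it is not the thesis's actual test battery:

```python
import math


def monobit_test(bits):
    """Frequency (monobit) test: returns a p-value for the hypothesis that
    0s and 1s are equally likely in the keystream.  Small p-values
    (conventionally below 0.01) indicate non-randomness."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)      # +1 per one-bit, -1 per zero-bit
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))


balanced = [0, 1] * 500    # perfectly balanced toy keystream
biased = [1] * 1000        # obviously non-random stream
print(monobit_test(balanced))  # 1.0: no evidence against randomness
print(monobit_test(biased))    # ≈ 0: fails the test decisively
```

A real cipher evaluation would run many such tests (runs, serial, entropy-based, and so on) over long keystreams generated from many keys.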