893 results for performance evaluation tool
Abstract:
A conceptual framework for crop production efficiency was derived from the thermodynamic concept of efficiency, in order to provide a tool for evaluating the performance of agricultural systems and to quantify the influence of determining factors on that performance. In thermodynamics, efficiency is the ratio between energy output and energy input. To establish this relationship in agricultural systems, it was assumed that the input energy is represented by the attainable crop yield, as predicted by simulation models based on environmental variables. FAO's agroecological zones method was applied to estimate the attainable sugarcane yield, while Instituto Brasileiro de Geografia e Estatística (IBGE) data were used as the observed yield. Sugarcane production efficiency in São Paulo state was evaluated in two growing seasons, and its correlation with some of the physical factors that regulate production was calculated. A strong relationship was identified between crop production efficiency and soil aptitude, which allowed the effect of agribusiness factors on crop production efficiency to be inferred. The relationships between production efficiency and climatic variables were also quantified, indicating that solar radiation, annual rainfall, water deficiency, and maximum air temperature are the main factors affecting sugarcane production efficiency.
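Stated compactly, the efficiency analogy described above is the ratio of observed to attainable yield; the notation below is ours for illustration, not taken from the paper.

```latex
% Crop production efficiency by analogy with thermodynamic efficiency.
% Symbols (our notation): E_p = production efficiency,
% Y_obs = observed yield (IBGE data),
% Y_att = attainable yield (FAO agroecological zones simulation).
\[
  E_p \;=\; \frac{Y_{\mathrm{obs}}}{Y_{\mathrm{att}}}, \qquad 0 \le E_p \le 1 .
\]
```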
Abstract:
Molecular monitoring of BCR/ABL transcripts by real-time quantitative reverse transcription PCR (qRT-PCR) is an essential technique for the clinical management of patients with BCR/ABL-positive CML and ALL. Although quantitative BCR/ABL assays are performed in hundreds of laboratories worldwide, results among these laboratories cannot be reliably compared because of heterogeneity in test methods, data analysis, reporting, and the lack of quantitative standards. Recent efforts towards standardization have been limited in scope. Aliquots of RNA were sent to clinical test centers worldwide in order to evaluate methods and reporting for e1a2, b2a2, and b3a2 transcript levels using their own qRT-PCR assays. Total RNA was isolated from tissue culture cells that expressed each of the different BCR/ABL transcripts. Serial log dilutions, ranging from 10^0 to 10^-5, were prepared in RNA isolated from HL60 cells. Laboratories performed 5 independent qRT-PCR reactions for each sample type at each dilution. In addition, 15 qRT-PCR reactions of the 10^-3 b3a2 RNA dilution were run to assess reproducibility within and between laboratories. Participants were asked to run the samples following their standard protocols and to report cycle threshold (Ct) values, quantitative values for BCR/ABL and housekeeping genes, and ratios of BCR/ABL to housekeeping genes for each sample RNA. Thirty-seven (n=37) participants submitted qRT-PCR results for analysis (36, 37, and 34 labs generated data for b2a2, b3a2, and e1a2, respectively). The limit of detection for this study was defined as the lowest dilution at which a Ct value could be detected for all 5 replicates. For b2a2, 15, 16, 4, and 1 lab(s) showed a limit of detection at the 10^-5, 10^-4, 10^-3, and 10^-2 dilutions, respectively. For b3a2, 20, 13, and 4 labs showed a limit of detection at the 10^-5, 10^-4, and 10^-3 dilutions, respectively. For e1a2, 10, 21, 2, and 1 lab(s) showed a limit of detection at the 10^-5, 10^-4, 10^-3, and 10^-2 dilutions, respectively. Log %BCR/ABL ratio values provided a method for comparing results between the different laboratories for each BCR/ABL dilution series. Linear regression analysis revealed concordance among the majority of participant data over the 10^-1 to 10^-4 dilutions. The overall slope values showed comparable results among the majority of b2a2 (mean = 0.939; median = 0.9627; range 0.399 to 1.1872), b3a2 (mean = 0.925; median = 0.922; range 0.625 to 1.140), and e1a2 (mean = 0.897; median = 0.909; range 0.5174 to 1.138) laboratory results (Fig. 1-3). Thirty-four (n=34) of the 37 laboratories reported Ct values for all 15 replicates, and only those with a complete data set were included in the inter-laboratory calculations. Eleven laboratories either did not report their copy number data or used other reporting units such as nanograms or cell numbers; therefore, only 26 laboratories were included in the overall analysis of copy numbers. The median copy number was 348.4, with a range from 15.6 to 547,000 copies (approximately a 4.5 log difference); the median intra-lab %CV was 19.2%, with a range from 4.2% to 82.6%. While our international performance evaluation using serially diluted RNA samples has reinforced the fact that heterogeneity exists among clinical laboratories, it has also demonstrated that performance within a laboratory is, overall, very consistent. Accordingly, the availability of defined BCR/ABL RNAs may facilitate the validation of all phases of quantitative BCR/ABL analysis and may be extremely useful as a tool for monitoring assay performance.
Ongoing analyses of these materials, along with the development of additional control materials, may solidify consensus around their application in routine laboratory testing and their possible integration into worldwide efforts to standardize quantitative BCR/ABL testing.
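As a rough sketch of the two summary statistics reported above (limit of detection across replicate Ct values, and intra-laboratory %CV of copy numbers), the following Python fragment uses an assumed data layout and invented replicate values; it is not part of the study protocol.

```python
import statistics

def limit_of_detection(ct_by_dilution, n_replicates=5):
    """Lowest (most dilute) dilution at which a Ct value was obtained for all replicates.

    ct_by_dilution: dict mapping dilution (e.g. 1e-4) to a list of Ct values,
    with None for reactions where no Ct was detected.
    """
    detected = [d for d, cts in ct_by_dilution.items()
                if len(cts) == n_replicates and all(ct is not None for ct in cts)]
    return min(detected) if detected else None  # smallest fraction = most dilute sample

def intra_lab_cv(copy_numbers):
    """Coefficient of variation (%) of replicate copy-number measurements."""
    return 100.0 * statistics.stdev(copy_numbers) / statistics.mean(copy_numbers)

# Illustrative use with made-up replicate data for one laboratory:
ct = {1e-2: [28.1, 28.3, 28.0, 28.2, 28.4],
      1e-3: [31.5, 31.7, 31.6, 31.4, 31.8],
      1e-4: [35.0, 34.8, None, 35.2, 35.1]}
print(limit_of_detection(ct))                      # -> 0.001, i.e. the 10^-3 dilution
print(round(intra_lab_cv([340, 355, 362, 348, 330]), 1))
```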
Abstract:
The aim of the study was to examine the stages of building a company's web operations and the measurement of their success. The building process was examined using a five-step model, whose steps are assessment, strategy formulation, planning, blueprint, and implementation. To complement the assessment and implementation phases, and in particular to support the measurement of the success of internet operations, the benefits of internet operations (CRM, communication, sales, and distribution channel benefits from a marketing perspective) were discussed. To support the evaluation of success, a staircase model for internet operations was also introduced; it defines storefront, dynamic, transaction, and e-business steps. The study identified success factors for internet operations: high-quality content, attractiveness, entertainment value, informativeness, timeliness, personalization, trust, interactivity, usability, convenience, loyalty, performance, responsiveness, and the collection of user data. The metrics were divided into activity, behavior, and conversion metrics, and additional metrics and success indicators were also presented. These elements of success and metrics were brought together in a new model for evaluating the success of internet operations. In the empirical part of the thesis, the presented theories were mirrored against the web operations of ABB (within ABB, in particular ABB Stotz-Kontakt), using document analysis and interviews. The empirical part illustrated the theories in practice and revealed opportunities to extend them. The model for building internet operations can also be used to develop web operations, and the staircase model is also suitable for evaluating existing internet operations. Applying the metrics in practice, however, revealed a need for their further development and for further research on the topic; they should also be tied more closely to the measurement of overall business success.
Abstract:
The aim of the study is to determine whether equity funds investing in Finland exhibit performance persistence. The data consist of all Finnish equity funds that operated during the period 15 January 1998 to 13 January 2005 and are free of survivorship bias. The CAPM alpha and three- and four-factor alphas are used as performance measures. In the empirical part, the persistence of fund performance is tested with the Spearman rank correlation test. The evidence of performance persistence remained weak, although it did appear sporadically with all performance measures for some combinations of ranking and holding periods. Measured with the CAPM alpha, statistically significant persistence appeared clearly more often than with the other performance measures. The results support recent international studies according to which performance persistence often depends on how it is measured. Significance tests of the regression models used as performance measures show that the multifactor models explain equity fund returns better than the CAPM: the added variables significantly improve the explanatory power of the CAPM.
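The persistence test described can be illustrated with a short Python sketch; the alpha values below are invented, and scipy's spearmanr is simply one convenient implementation of the Spearman rank correlation.

```python
from scipy.stats import spearmanr

# Hypothetical CAPM alphas for the same funds in a ranking period and the
# subsequent holding period (values are illustrative only).
ranking_alphas = [0.021, -0.004, 0.010, 0.015, -0.012, 0.002]
holding_alphas = [0.013, -0.001, 0.006, 0.011, -0.009, -0.003]

rho, p_value = spearmanr(ranking_alphas, holding_alphas)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A significantly positive rho would indicate performance persistence.
```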
Abstract:
PURPOSE: We investigated the association of hematological variables with specific fitness performance in elite team-sport players. METHODS: Hemoglobin mass (Hbmass) was measured in 25 elite field hockey players using the optimized (2 min) CO-rebreathing method. Hemoglobin concentration ([Hb]), hematocrit, and mean corpuscular hemoglobin concentration (MCHC) were analyzed in venous blood. Fitness performance evaluation included a repeated-sprint ability (RSA) test (8 x 20 m sprints, 20 s of rest) and the Yo-Yo intermittent recovery level 2 test (YYIR2). RESULTS: Hbmass was largely correlated (r = 0.62, P<0.01) with YYIR2 total distance covered (YYIR2TD) but not with any RSA-derived parameter (r ranging from -0.06 to -0.32; all P>0.05). [Hb] and MCHC displayed moderate correlations with both YYIR2TD (r = 0.44 and 0.41; both P<0.01) and the RSA sprint decrement score (r = -0.41 and -0.44; both P<0.05). YYIR2TD correlated with RSA best and total sprint times (r = -0.46, P<0.05 and r = -0.60, P<0.01, respectively), but not with the RSA sprint decrement score (r = -0.19, P>0.05). CONCLUSION: Hbmass is positively correlated with specific aerobic fitness, but not with RSA, in elite team-sport players. Additionally, the negative relationships between YYIR2 and RSA test performance imply that different hematological mechanisms may be at play; these two fitness tests should therefore not be used interchangeably.
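For readers unfamiliar with the RSA sprint decrement score, one commonly used formulation expresses total sprint time as a percentage above the ideal time (best sprint multiplied by the number of sprints); the sketch below uses that conventional definition and invented times, not data from the paper.

```python
def sprint_decrement(times_s):
    """Percentage decrement score for a repeated-sprint test (common formulation)."""
    ideal = min(times_s) * len(times_s)      # every sprint as fast as the best one
    return 100.0 * (sum(times_s) / ideal - 1.0)

# Hypothetical 8 x 20 m sprint times (seconds):
print(round(sprint_decrement([3.05, 3.08, 3.12, 3.15, 3.20, 3.22, 3.25, 3.30]), 2))
```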
Abstract:
The objective of this paper was to show the potential additional insight that results from adding greenhouse gas (GHG) emissions to plant performance evaluation criteria, such as the effluent quality (EQI) and operational cost (OCI) indices, when evaluating (plant-wide) control/operational strategies in wastewater treatment plants (WWTPs). The proposed GHG evaluation is based on a set of comprehensive dynamic models that estimate the most significant potential on-site and off-site sources of CO2, CH4, and N2O. The study calculates and discusses the changes in EQI, OCI, and GHG emissions resulting from varying the following four process variables: (i) the set point of the aeration control in the activated sludge section; (ii) the removal efficiency of total suspended solids (TSS) in the primary clarifier; (iii) the temperature in the anaerobic digester (AD); and (iv) the control of the flow of anaerobic digester supernatants coming from sludge treatment. Based upon the assumptions built into the model structures, simulation results highlight the potential undesirable effect of increased GHG production when carrying out local energy optimization of the aeration system in the activated sludge section and of energy recovery from the AD. Although off-site CO2 emissions may decrease, the effect is counterbalanced by increased N2O emissions, especially since N2O has an approximately 300-fold stronger greenhouse effect than CO2. The reported results emphasize the importance and usefulness of using multiple evaluation criteria to compare and evaluate (plant-wide) control strategies in a WWTP for more informed operational decision making.
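A minimal sketch of how on-site and off-site emissions of the three gases can be aggregated into CO2-equivalents for this kind of comparison; the global-warming-potential factors and variable names are assumptions of roughly the magnitude cited above, not the paper's model.

```python
# Approximate 100-year global warming potentials (kg CO2e per kg of gas);
# the ~300 factor for N2O matches the order of magnitude cited in the abstract.
GWP = {"CO2": 1.0, "CH4": 28.0, "N2O": 300.0}

def co2_equivalents(emissions_kg):
    """Aggregate emissions (kg of each gas) into kg CO2-equivalents."""
    return sum(GWP[gas] * kg for gas, kg in emissions_kg.items())

# Illustrative plant-wide totals per day (made-up numbers):
print(co2_equivalents({"CO2": 12_000.0, "CH4": 40.0, "N2O": 15.0}))
```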
Abstract:
Hydrological models are important tools in water resource planning and management. The aim of this work was therefore to calibrate and validate, on a daily time scale, the SWAT model (Soil and Water Assessment Tool) for the watershed of the Galo creek, located in Espírito Santo State. The study used georeferenced maps of relief, soil type, and land use, in addition to historical daily series of basin climate and streamflow. Time series covering the periods from Jan 1, 1995 to Dec 31, 2000 and from Jan 1, 2001 to Dec 20, 2003 were used for calibration and validation, respectively. Model performance was evaluated using the Nash-Sutcliffe coefficient (ENS) and the percentage of bias (PBIAS). SWAT was also evaluated in the simulation of the following hydrological variables: maximum and minimum annual daily flows and the minimum reference flows Q90 and Q95, based on the mean absolute error. ENS and PBIAS were, respectively, 0.65 and 7.2% for calibration and 0.70 and 14.1% for validation, indicating a satisfactory performance of the model. SWAT adequately simulated the minimum annual daily flow and the reference flows Q90 and Q95; it was not suitable for simulating maximum annual daily flows.
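For reference, the two goodness-of-fit statistics used here can be computed as in the sketch below, assuming paired daily observed and simulated flow series; the sign convention for PBIAS is one common choice and may differ from the one adopted in the study.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; values <= 0 mean the
    model is no better than the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias under the common convention PBIAS = 100 * sum(obs - sim) / sum(obs)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

# Illustrative daily flows (m^3/s):
obs = [5.2, 4.8, 6.1, 7.0, 5.5]
sim = [5.0, 5.1, 5.9, 6.5, 5.8]
print(round(nash_sutcliffe(obs, sim), 2), round(pbias(obs, sim), 1))
```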
Abstract:
Corporate events, as an effective part of the marketing communications strategy, seem to be underestimated in Finnish companies. In the rest of Europe and in the USA, investments in events are increasing, and their share of the marketing budget is significant. The growth of the industry may be explained by the numerous advantages and opportunities that events provide for attendees, such as face-to-face marketing, enhancing corporate image, building relationships, increasing sales, and gathering information. In order to maximize these benefits and the return on investment, specific measurement strategies are required, yet there seems to be a lack of understanding of how event performance should be perceived or evaluated. To address this research gap, this research attempts to describe the perceptions of and strategies for evaluating corporate event performance in the Finnish events industry. First, corporate events are discussed in terms of definitions and characteristics, typologies, and their role in marketing communications. Second, different theories on evaluating corporate event performance are presented and analyzed. Third, a conceptual model is presented based on the literature review, which serves as the basis for the empirical research conducted as an online questionnaire. The empirical findings are to a great extent in line with the existing literature, suggesting that there remains a lack of understanding of corporate event performance evaluation and that challenges arise in determining appropriate measurement procedures for it. Setting clear objectives for events is a significant aspect of the evaluation process, since the outcomes of events are usually evaluated against the preset objectives. The respondent companies utilize many of the individual techniques recognized in the theory, such as counting the number of sales leads and delegates. However, some of the measurement tools may require further investments and resources, thus restricting their application, especially in smaller companies. In addition, there seems to be a lack of knowledge of the most appropriate methods for different contexts, taking into account the characteristics of the organizing party as well as the size and nature of the event. The lack of in-house expertise increases the need for third-party service providers in solving problems of corporate event measurement.
Abstract:
In Québec, networks have been implemented to counter the lack of integration of the services offered to people living with traumatic brain injury (TBI). However, the evaluation of their performance is currently limited by the absence of a description and conceptualization of that performance. The aim of this thesis is to lay the preliminary groundwork for a process to evaluate the performance of TBI networks. Our objectives are to 1) describe the organizations, the nature and quality of the ties, and the configuration of a TBI network; 2) identify the perceptions of the network's constituents regarding the strengths, weaknesses, opportunities, and threats specific to this organizational form; 3) document and compare the perceptions of respondents from various types of organizations regarding the importance of 16 dimensions of the performance concept for evaluating TBI networks; and 4) reconcile differing perceptions in order to propose a consensus ranking of the performance dimensions. Using social network analysis, we described a small, moderately dense network organized essentially around four highly centralized organizations. The constituents described their network as having as many strengths as weaknesses. Most of the reported issues concerned the network's Adaptation to its environment and the Maintenance of Values. Furthermore, representatives of the 46 organizations belonging to a TBI network perceived the performance dimensions related to Goal Attainment as more important than those related to Processes. The capacity to attract clientele, continuity, and the capacity to adapt to meet clients' needs were the three most important dimensions, while the capacity to adapt to requirements and trends and the volume of care and services were the least important. TRIAGE groups allowed the constituents to agree on the importance assigned to each dimension and to harmonize their differing perspectives. Although several steps remain before the performance evaluation process for Québec's TBI networks can be implemented, our work lays a solid scientific foundation that optimizes the relevance and uptake of the results for the subsequent stages.
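The network measures mentioned above (size, density, centralization) are standard social-network-analysis quantities; the sketch below illustrates them on an invented adjacency list using networkx, not on the actual TBI network data.

```python
import networkx as nx

# Hypothetical inter-organizational links (names are placeholders).
links = [("Rehab A", "Hospital B"), ("Rehab A", "Community C"),
         ("Hospital B", "Community C"), ("Rehab A", "Agency D")]
G = nx.Graph(links)

print(f"Density: {nx.density(G):.2f}")        # proportion of possible ties that exist
print(sorted(nx.degree_centrality(G).items(),
             key=lambda kv: -kv[1]))          # most central organizations first
```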
Abstract:
Directed project presented to the Faculty of Nursing in partial fulfillment of the requirements for the degree of Master of Science (M.Sc.) in Nursing, Nursing Administration option.
Abstract:
Queueing systems in which arriving customers who find all servers and waiting positions (if any) occupied may retry for service after a period of time are called retrial queues, or queues with repeated attempts. This study has two objectives. The first is to introduce orbital search into retrial queueing models, which makes it possible to minimize the idle time of the server; if holding costs and the cost of using the search for customers are introduced, the results obtained can be used for optimal tuning of the parameters of the search mechanism. The second is to provide insight into the link between the corresponding retrial queue and the classical queue: when the search probability Pj = 1 for all j, the model reduces to the classical queue, and when Pj = 0 for all j, it becomes the retrial queue. The study discusses the performance evaluation of the single-server retrial queue with Poisson arrivals. It then examines the structure of the busy period and its analysis in terms of Laplace transforms, and provides a direct method of evaluating the first and second moments of the busy period. It further discusses the M/PH/1 retrial queue with disasters to the unit in service and orbital search, and a multi-server retrial queueing model (MAP/M/c) with search for customers from the orbit; the MAP is a convenient tool for modelling both renewal and non-renewal arrivals. Finally, the model deals with back-and-forth movement between the classical queue and the retrial queue: as the orbit size increases, the retrial rate increases correspondingly, thereby reducing the idle time of the server between services.
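A toy discrete-event simulation can make the role of the orbital-search probability concrete: with p = 1 the server always pulls the next customer from the orbit (classical-queue-like behaviour), while with p = 0 customers return only through retrials. The parameter names and structure below are illustrative assumptions, not the models analysed in the thesis.

```python
import random

def simulate_retrial_queue(lam, mu, theta, p, horizon=100_000, seed=1):
    """Toy single-server retrial queue with orbital search.

    lam   - Poisson arrival rate
    mu    - exponential service rate
    theta - retrial rate per orbiting customer
    p     - probability the server searches the orbit at a service completion
    Returns the time-average orbit size.
    """
    rng = random.Random(seed)
    t, busy, orbit, area = 0.0, False, 0, 0.0
    while t < horizon:
        rates = [lam, mu if busy else 0.0, orbit * theta if not busy else 0.0]
        dt = rng.expovariate(sum(rates))
        area += orbit * dt
        t += dt
        event = rng.choices(("arrival", "departure", "retrial"), weights=rates)[0]
        if event == "arrival":
            if busy:
                orbit += 1           # blocked customer joins the orbit
            else:
                busy = True
        elif event == "departure":
            busy = False
            if orbit > 0 and rng.random() < p:
                orbit -= 1           # server searches the orbit for the next customer
                busy = True
        else:                        # successful retrial from the orbit
            orbit -= 1
            busy = True
    return area / t

print(round(simulate_retrial_queue(lam=0.6, mu=1.0, theta=0.5, p=0.5), 2))
```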
Abstract:
In the present scenario of energy demand overtaking energy supply, top priority is given to energy conservation programs and policies. Most process plants are operated on a continuous basis and consume large quantities of energy. Efficient management of the process system can lead to energy savings, improved process efficiency, lower operating and maintenance costs, and greater environmental safety. Reliability and maintainability of the system are usually considered at the design stage and depend on the system configuration. However, with the growing need for energy conservation, most existing process systems are either modified or in a state of modification with a view to improving energy efficiency. Often these modifications result in a change in system configuration, thereby affecting the system reliability. It is important that system modifications for improving energy efficiency should not come at the cost of reliability. Any new proposal for improving the energy efficiency of a process or equipment must prove itself economically feasible to gain acceptance for implementation. In order to arrive at the economic feasibility of a new proposal, the general trend is to compare the benefits that can be derived over the lifetime, as well as the operating and maintenance costs, with the investment to be made. Quite often the reliability aspects (or the loss due to unavailability) are not taken into consideration, even though plant availability is a critical factor in the economic performance evaluation of any process plant. The focus of the present work is to study the effect of system modification for improving energy efficiency on system reliability. A generalized model for the valuation of a process system incorporating reliability is developed and used as a tool for the analysis. It can provide an awareness of the potential performance improvements of the process system and can be used to arrive at the change in process system value resulting from system modification. The model also arrives at the payback of the modified system by taking reliability aspects into consideration, and is used to study the effect of various operating parameters on system value. The concept of breakeven availability is introduced, and an algorithm for allocating component reliabilities of the modified process system based on the breakeven system availability is also developed. The model was applied to various industrial situations.
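The valuation idea described above, where benefits accrue only while the system is available, can be sketched as a simple undiscounted payback calculation; the formulas and figures below are illustrative assumptions, not the thesis's generalized model.

```python
def payback_period(investment, gross_annual_benefit, annual_om_cost, availability):
    """Years to recover the investment when benefits accrue only while the
    system is available (simple, undiscounted payback)."""
    net_annual = gross_annual_benefit * availability - annual_om_cost
    return investment / net_annual if net_annual > 0 else float("inf")

def breakeven_availability(investment, gross_annual_benefit, annual_om_cost, target_years):
    """Availability at which the modification just pays back within target_years."""
    return (investment / target_years + annual_om_cost) / gross_annual_benefit

# Illustrative numbers for an energy-efficiency retrofit:
print(round(payback_period(500_000, 220_000, 30_000, 0.92), 2))
print(round(breakeven_availability(500_000, 220_000, 30_000, 3.0), 3))
```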
Abstract:
Learning Disability (LD) is a general term that describes specific kinds of learning problems. It is a neurological condition that affects a child's brain and impairs the ability to carry out one or many specific tasks. Learning disabled children are neither slow nor mentally retarded; the disorder can make it difficult for a child to learn as quickly, or in the same way, as a child who is not affected by a learning disability. An affected child can have normal or above-average intelligence, but may have difficulty paying attention, with reading or letter recognition, or with mathematics. This does not mean that children who have learning disabilities are less intelligent; in fact, many are more intelligent than the average child. Learning disabilities vary from child to child, and one child with LD may not have the same kind of learning problems as another. There is no cure for learning disabilities and they are life-long; however, children with LD can be high achievers and can be taught ways to work around the learning disability. In this research work, data mining using machine learning techniques is used to analyze the symptoms of LD, establish interrelationships between them, and evaluate the relative importance of these symptoms. To increase the diagnostic accuracy of learning disability prediction, a knowledge-based tool built on statistical machine learning (data mining) techniques, with high accuracy according to the knowledge obtained from the clinical information, is proposed. The basic idea of the developed knowledge-based tool is to increase the accuracy of the learning disability assessment and to reduce the time required for it. Different statistical machine learning techniques in data mining are used in the study. Identifying the important parameters of LD prediction using the data mining techniques, identifying the hidden relationships between the symptoms of LD, and estimating the relative significance of each symptom of LD are also objectives of this research work. The developed tool has many advantages over the traditional method of using checklists to determine learning disabilities. To improve the performance of the various classifiers, preprocessing methods were developed for the LD prediction system. A new system based on fuzzy and rough set models is also developed for LD prediction, and here too the importance of preprocessing is studied. A Graphical User Interface (GUI) is designed to provide an integrated knowledge-based tool for predicting LD as well as its degree. The tool stores the details of the children in a student database and retrieves their LD reports as and when required. The present study demonstrates the effectiveness of the tool developed using various machine learning techniques; it also identifies the important parameters of LD and accurately predicts learning disability in school-age children. This thesis makes several major contributions in technical, general, and social areas. The results are very beneficial to parents, teachers, and institutions, who are able to diagnose a child's problem at an early stage and seek the proper treatment or counselling at the right time, so as to avoid academic and social losses.
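As a hedged illustration of the statistical-machine-learning step described (training a classifier on symptom indicators and ranking their importance), the sketch below uses scikit-learn with placeholder features and toy data rather than the study's clinical dataset.

```python
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Each row: binary symptom indicators, e.g. [attention, reading, letters, math];
# label 1 = learning disability suspected (toy data only).
X = [[1, 1, 0, 1], [0, 0, 0, 0], [1, 0, 1, 1], [0, 1, 0, 0],
     [1, 1, 1, 0], [0, 0, 1, 0], [1, 0, 0, 1], [0, 0, 0, 1]]
y = [1, 0, 1, 0, 1, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, clf.predict(X_test)))
print(clf.feature_importances_)   # rough ranking of symptom relevance
```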
Abstract:
The country has witnessed a tremendous increase in the vehicle population and in axle loading patterns during the last decade, leaving its road network overstressed and prone to premature failure. The type of deterioration present in a pavement should be considered when determining whether it has a functional or structural deficiency, so that an appropriate overlay type and design can be developed. Structural failure arises from conditions that adversely affect the load-carrying capability of the pavement structure; inadequate thickness, cracking, distortion, and disintegration cause structural deficiency. Functional deficiency arises when the pavement does not provide a smooth riding surface and comfort to the user. This can be due to poor surface friction and texture, hydroplaning and splash from the wheel path, rutting, and excess surface distortion such as potholes, corrugation, faulting, blow-ups, settlement, and heaves. Functional condition determines the level of service provided by the facility to its users at a particular time as well as the Vehicle Operating Costs (VOC), thus influencing the national economy. Prediction of pavement deterioration helps assess the remaining effective service life (RSL) of the pavement structure on the basis of the reduction in performance levels, and supports alternative designs and rehabilitation strategies with long-range funding requirements for pavement preservation. In addition, such models can predict the impact of treatments on the condition of the sections. Infrastructure prediction models can thus be classified into four groups, namely primary response models, structural performance models, functional performance models, and damage models. The factors affecting the deterioration of roads are very complex in nature and vary from place to place. Hence, there is a need for a thorough study of the deterioration mechanism under varied climatic zones and soil conditions before arriving at a definite strategy for road improvement. Recognizing the need for a detailed study involving all types of roads in the state, with varying traffic and soil conditions, the present study was undertaken. This study attempts to identify the parameters that affect the performance of roads and to develop performance models suited to Kerala conditions. A critical review of the various factors that contribute to pavement performance is presented, based on data collected from selected road stretches and from five corporations of Kerala. These roads represent urban conditions as well as National Highways, State Highways, and Major District Roads in suburban and rural conditions. This research work is a pursuit towards a study of the road condition of Kerala with respect to varying soil, traffic, and climatic conditions, periodic performance evaluation of selected roads of representative types, and the development of distress prediction models for the roads of Kerala. In order to achieve this aim, the study is divided into two parts. The first part deals with the study of the pavement condition and subgrade soil properties of urban roads distributed across five Corporations of Kerala, namely Thiruvananthapuram, Kollam, Kochi, Thrissur, and Kozhikode. From the 44 selected roads, 68 homogeneous sections were studied. The data collected on the functional and structural condition of the surface include pavement distress in terms of cracks, potholes, rutting, raveling, and pothole patching.
The structural strength of the pavement was measured as rebound deflection using Benkelman Beam deflection studies. To collect the details of the pavement layers and determine the subgrade soil properties, trial pits were dug and the in-situ field density was found using the sand replacement method. Laboratory investigations were carried out to determine the subgrade soil properties: soil classification, Atterberg limits, Optimum Moisture Content, Field Moisture Content, and the 4-day soaked CBR. The relative compaction in the field was also determined. Traffic details were collected by conducting a traffic volume count survey and an axle load survey. From the data thus collected, the strength of the pavement was calculated as a function of the layer coefficients and thicknesses and is represented as the Structural Number (SN). This was further related to the CBR value of the soil to obtain the Modified Structural Number (MSN). The condition of the pavement was represented in terms of the Pavement Condition Index (PCI), which is a function of the surface distress at the time of the investigation and was calculated in the present study using the deduct value method developed by the US Army Corps of Engineers. The influence of subgrade soil type and pavement condition on the relationship between MSN and rebound deflection was studied using appropriate plots for the predominant soil types and for classified values of the Pavement Condition Index. This relationship will help practicing engineers design the required overlay thickness for a pavement without conducting the BBD test. Regression analysis using SPSS was carried out with various trials to find the best-fit relationship between rebound deflection and CBR and other soil properties for the gravel, sand, silt, and clay fractions. The second part of the study deals with the periodic performance evaluation of selected road stretches representing National Highways (NH), State Highways (SH), and Major District Roads (MDR), located in different geographical conditions and carrying varying traffic. Eight road sections, divided into 15 homogeneous sections, were selected for the study, and six sets of continuous periodic data were collected. The periodic data include the functional and structural condition in terms of distress (potholes, pothole patches, cracks, rutting, and raveling), skid resistance using a portable skid resistance pendulum, surface unevenness using a Bump Integrator, texture depth using the sand patch method, and rebound deflection using the Benkelman Beam. Baseline data for the study stretches were collected as one-time data, and pavement history was obtained as secondary data. Pavement drainage characteristics were collected in terms of camber or cross slope, measured using a camber board (slope meter) for the carriageway and shoulders, availability of a longitudinal side drain, presence of a valley, terrain condition, soil moisture content, water table data, High Flood Level, rainfall data, land use, and the cross slope of the adjoining land. These data were used to determine the drainage condition of the study stretches. Traffic studies were conducted, including classified volume counts and axle load studies. From the field data thus collected, the progression of each parameter was plotted for all the study roads and validated for accuracy. The Structural Number (SN) and Modified Structural Number (MSN) were calculated for the study stretches.
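For orientation, the Structural Number is the sum of layer coefficient times layer thickness, and one widely cited form of the subgrade correction gives the Modified Structural Number; both expressions below are assumptions offered for illustration, not necessarily the exact equations used in the thesis.

```python
import math

def structural_number(layers):
    """SN as the sum of layer coefficient x thickness (thickness in inches)."""
    return sum(a * d for a, d in layers)

def modified_structural_number(sn, cbr):
    """A commonly cited subgrade correction (assumed form, not quoted from the thesis):
    MSN = SN + 3.51*log10(CBR) - 0.85*(log10(CBR))^2 - 1.43"""
    lc = math.log10(cbr)
    return sn + 3.51 * lc - 0.85 * lc ** 2 - 1.43

# Illustrative pavement: 40 mm (1.6 in) surfacing, 250 mm (9.8 in) granular base.
sn = structural_number([(0.40, 1.6), (0.14, 9.8)])
print(round(sn, 2), round(modified_structural_number(sn, cbr=6.0), 2))
```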
The progression of deflection, distress, unevenness, skid resistance, and macro texture of the study roads was evaluated. Since pavement deterioration is a complex phenomenon to which all the above factors contribute, pavement deterioration models were developed as nonlinear regression models using SPSS, with the periodic data collected for all the above road stretches. General models were developed for cracking progression, raveling progression, pothole progression, and roughness progression using SPSS; a model for construction quality was also developed. Calibration of the HDM-4 pavement deterioration models for local conditions was done using the data for cracking, raveling, potholes, and roughness, and validation was done using the data collected in 2013. The application of HDM-4 to compare different maintenance and rehabilitation options was studied, considering deterioration parameters such as cracking, potholes, and raveling. The alternatives considered for the analysis were a base alternative with crack sealing and patching, an overlay of 40 mm BC using ordinary bitumen, an overlay of 40 mm BC using Natural Rubber Modified Bitumen, and an overlay of Ultra Thin White Topping. Economic analysis of these options was carried out considering the Life Cycle Cost (LCC), and the average speeds obtainable with these options were also compared. The results were in favour of Ultra Thin White Topping over flexible pavements. Hence, design charts were also plotted for the estimation of maximum wheel load stresses for different slab thicknesses under different soil conditions. The design charts show the maximum stress for a particular slab thickness and different soil conditions incorporating different k values, and can be handy for a design engineer. Fuzzy rule based models developed for site-specific conditions were compared with the regression models developed using SPSS. The Riding Comfort Index (RCI) was calculated and correlated with unevenness to develop a relationship, and relationships were developed between the Skid Number and the macro texture of the pavement. The effort made through this research work will be helpful to highway engineers in understanding the behaviour of flexible pavements under Kerala conditions and in arriving at suitable maintenance and rehabilitation strategies. Key Words: Flexible Pavements – Performance Evaluation – Urban Roads – NH – SH and other roads – Performance Models – Deflection – Riding Comfort Index – Skid Resistance – Texture Depth – Unevenness – Ultra Thin White Topping
Abstract:
Organizations today must find different ways to survive in a time of rapid transformation. One of the mechanisms companies use to adapt to organizational change is management control systems, which in turn allow organizations to monitor their processes so that adaptation is effective. Another important variable for adaptation is organizational learning, the process through which organizations adapt to changes in their environment, both internal and external to the company. Given the above, this project is based on extracting valid supporting documentation that makes it possible to explore the interactions between these two fields, management control systems and organizational learning, and to analyze the impact of these interactions on organizational long-term viability.