970 results for "applying performance"
Abstract:
The country has witnessed a tremendous increase in vehicle population and axle loading during the last decade, leaving its road network overstressed and prone to premature failure. The type of deterioration present in a pavement should be considered when determining whether it has a functional or a structural deficiency, so that an appropriate overlay type and design can be developed. Structural failure arises from conditions that adversely affect the load-carrying capability of the pavement structure; inadequate thickness, cracking, distortion and disintegration cause structural deficiency. Functional deficiency arises when the pavement does not provide a smooth riding surface and comfort to the user. This can be due to poor surface friction and texture, hydroplaning and splash from the wheel path, rutting, and excess surface distortion such as potholes, corrugation, faulting, blow-ups, settlement and heaves. Functional condition determines the level of service provided by the facility to its users at a particular time, as well as the Vehicle Operating Costs (VOC), thus influencing the national economy. Prediction of pavement deterioration helps to assess the remaining effective service life (RSL) of the pavement structure on the basis of the reduction in performance levels, and to evaluate alternative designs and rehabilitation strategies together with the long-range funding required for pavement preservation. In addition, prediction models can estimate the impact of a treatment on the condition of a section. Infrastructure prediction models can thus be classified into four groups: primary response models, structural performance models, functional performance models and damage models. The factors affecting road deterioration are complex in nature and vary from place to place. Hence a thorough study of the deterioration mechanism under varied climatic zones and soil conditions is needed before arriving at a definite strategy for road improvement. Realizing the need for a detailed study involving all types of roads in the state with varying traffic and soil conditions, the present study was undertaken. It seeks to identify the parameters that affect the performance of roads and to develop performance models suited to Kerala conditions. A critical review of the various factors that contribute to pavement performance is presented, based on data collected from selected road stretches and from five Corporations of Kerala. These roads represent urban conditions, as well as National Highways, State Highways and Major District Roads in suburban and rural conditions. This research work pursues a study of the road condition of Kerala with respect to varying soil, traffic and climatic conditions, periodic performance evaluation of selected roads of representative types, and the development of distress prediction models for roads of Kerala. To achieve this aim, the study is divided into two parts. The first part deals with the pavement condition and subgrade soil properties of urban roads distributed across five Corporations of Kerala, namely Thiruvananthapuram, Kollam, Kochi, Thrissur and Kozhikode. From the 44 selected roads, 68 homogeneous sections were studied. The data collected on the functional and structural condition of the surface include pavement distress in terms of cracks, potholes, rutting, raveling and pothole patching.
The structural strength of the pavement was measured as rebound deflection using Benkelman Beam deflection (BBD) studies. To record the pavement layer details and determine the subgrade soil properties, trial pits were dug and the in-situ field density was found using the Sand Replacement Method. Laboratory investigations covered the subgrade soil properties: soil classification, Atterberg limits, Optimum Moisture Content, Field Moisture Content and four-day soaked CBR. The relative compaction in the field was also determined. Traffic details were collected through traffic volume count and axle load surveys. From the data thus collected, the strength of the pavement, a function of the layer coefficients and thicknesses, was calculated and represented as the Structural Number (SN). This was further related to the CBR value of the soil to obtain the Modified Structural Number (MSN). The condition of the pavement was represented in terms of the Pavement Condition Index (PCI), a function of the surface distress at the time of the investigation, calculated in the present study using the deduct value method developed by the U.S. Army Corps of Engineers. The influence of subgrade soil type and pavement condition on the relationship between MSN and rebound deflection was studied using appropriate plots for the predominant soil types and for classified values of the Pavement Condition Index. This relationship will help practicing engineers design the overlay thickness required for a pavement without conducting the BBD test. Regression analysis using SPSS was carried out with various trials to find the best-fit relationship between rebound deflection and CBR, and other soil properties for the gravel, sand, silt and clay fractions. The second part of the study deals with the periodic performance evaluation of selected road stretches representing National Highways (NH), State Highways (SH) and Major District Roads (MDR), located in different geographical conditions and carrying varying traffic. Eight road sections, divided into 15 homogeneous sections, were selected for the study, and six sets of continuous periodic data were collected. The periodic data include the functional and structural condition in terms of distress (potholes, pothole patches, cracks, rutting and raveling), skid resistance using a portable skid resistance pendulum, surface unevenness using a Bump Integrator, texture depth using the sand patch method, and rebound deflection using the Benkelman Beam. Baseline data of the study stretches were collected as one-time data, and pavement history was obtained as secondary data. Pavement drainage characteristics were recorded in terms of camber or cross slope, measured with a camber board (slope meter) for the carriageway and shoulders, along with the availability of longitudinal side drains, presence of valleys, terrain condition, soil moisture content, water table data, High Flood Level, rainfall data, land use and the cross slope of the adjoining land. These data were used to determine the drainage condition of the study stretches. Traffic studies, including classified volume counts and axle load studies, were conducted. From the field data, the progression of each parameter was plotted for all the study roads and validated for accuracy. The Structural Number (SN) and Modified Structural Number (MSN) were calculated for the study stretches.
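Where the abstract refers to computing the Structural Number and relating it to the subgrade CBR, a minimal sketch of the usual relationships may clarify the quantities involved. The layer coefficients, thicknesses and the MSN subgrade-correction term below follow the commonly cited AASHTO/HDM forms, not necessarily the exact coefficients used in the thesis:

```python
import math

def structural_number(layers):
    """SN = sum of (layer coefficient a_i * layer thickness D_i [inches])."""
    return sum(a * d for a, d in layers)

def modified_structural_number(sn, cbr):
    """MSN (often written SNC) adds the widely used subgrade
    contribution term based on the soaked CBR of the subgrade soil."""
    log_cbr = math.log10(cbr)
    return sn + 3.51 * log_cbr - 0.85 * log_cbr ** 2 - 1.43

# Hypothetical pavement: 40 mm BC (a=0.44), 150 mm base (a=0.14),
# 225 mm sub-base (a=0.11); thicknesses converted from mm to inches.
layers = [(0.44, 40 / 25.4), (0.14, 150 / 25.4), (0.11, 225 / 25.4)]
sn = structural_number(layers)
print(f"SN  = {sn:.2f}")
print(f"MSN = {modified_structural_number(sn, cbr=5.0):.2f}")
```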
The progression of deflection, distress, unevenness, skid resistance and macrotexture of the study roads was evaluated. Since pavement deterioration is a complex phenomenon to which all the above factors contribute, pavement deterioration models were developed as non-linear regression models in SPSS, using the periodic data collected for all the study stretches. General models were developed for cracking, raveling, pothole and roughness progression, and a model for construction quality was also developed. The HDM-4 pavement deterioration models were calibrated for local conditions using the cracking, raveling, pothole and roughness data, and validated against the data collected in 2013. The application of HDM-4 to compare different maintenance and rehabilitation options was studied, considering deterioration parameters such as cracking, potholes and raveling. The alternatives analyzed were a base alternative with crack sealing and patching, an overlay of 40 mm BC using ordinary bitumen, an overlay of 40 mm BC using Natural Rubber Modified Bitumen, and an overlay of Ultra Thin White Topping. An economic analysis of these options was carried out considering the Life Cycle Cost (LCC), and the average speeds attainable under each option were also compared. The results favoured Ultra Thin White Topping over flexible pavements. Hence, design charts were plotted for estimating maximum wheel load stresses for different slab thicknesses under different soil conditions; the charts show the maximum stress for a particular slab thickness and soil condition, incorporating different k values, and can be handy for a design engineer. Fuzzy rule based models developed for site-specific conditions were compared with the regression models developed using SPSS. The Riding Comfort Index (RCI) was calculated and correlated with unevenness to develop a relationship, and relationships were also developed between Skid Number and the macrotexture of the pavement. The effort made through this research work will help highway engineers understand the behaviour of flexible pavements under Kerala conditions and arrive at suitable maintenance and rehabilitation strategies. Key Words: Flexible Pavements – Performance Evaluation – Urban Roads – NH – SH and other roads – Performance Models – Deflection – Riding Comfort Index – Skid Resistance – Texture Depth – Unevenness – Ultra Thin White Topping
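For design charts of maximum wheel load stress versus slab thickness and modulus of subgrade reaction k, one common basis is Westergaard's interior-load stress equation; the sketch below uses that classic form with illustrative inputs, hedged in that the thesis may instead use an edge-load or IRC variant:

```python
import math

def westergaard_interior_stress(P, h, k, E=4_000_000.0, mu=0.15, a=6.0):
    """Westergaard interior-load stress (psi) in a concrete slab.
    P: wheel load [lb], h: slab thickness [in], k: modulus of subgrade
    reaction [pci], E: concrete modulus [psi], mu: Poisson's ratio,
    a: radius of the loaded area [in]. All values here are illustrative."""
    # Radius of relative stiffness of the slab on the subgrade.
    l = (E * h ** 3 / (12.0 * (1.0 - mu ** 2) * k)) ** 0.25
    # Equivalent radius of the resisting section.
    b = math.sqrt(1.6 * a ** 2 + h ** 2) - 0.675 * h if a < 1.724 * h else a
    return 0.316 * P / h ** 2 * (4.0 * math.log10(l / b) + 1.069)

# Chart-style data: stress vs slab thickness for a weak and a stiff subgrade.
for k in (100.0, 300.0):          # pci
    for h in (6.0, 8.0, 10.0):    # inches
        s = westergaard_interior_stress(P=9000.0, h=h, k=k)
        print(f"k={k:5.0f} pci  h={h:4.1f} in  stress={s:7.1f} psi")
```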
Abstract:
The results from applying a sensor fusion process to an adaptive controller used to balance an inverted pendulum are presented. The goal of the sensor fusion process was to replace some of the four mechanical measurements, which are known to be sufficient inputs for a linear state feedback controller to balance the system, with optic flow variables. Results from research into the psychology of the sense of balance in humans motivated the investigation of this new type of controller input. The simulated model of the inverted pendulum and the virtual reality environments used to provide the optical input are described. The successful introduction of optical information is found to require the preservation of at least two of the traditional input types, and to entail increased training time for the adaptive controller and reduced performance (measured as the time the pendulum remains upright).
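As context for the four mechanical measurements being sufficient for linear state feedback, here is a minimal sketch of such a controller on a linearized cart-pole model; the masses, lengths and pole locations are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.signal import place_poles

# Linearized cart-pole about the upright equilibrium.
# State x = [cart position, cart velocity, pole angle, pole angular rate].
M, m, l, g = 1.0, 0.1, 0.5, 9.81   # illustrative parameters
A = np.array([[0, 1, 0, 0],
              [0, 0, -m * g / M, 0],
              [0, 0, 0, 1],
              [0, 0, (M + m) * g / (M * l), 0]])
B = np.array([[0.0], [1.0 / M], [0.0], [-1.0 / (M * l)]])

# Linear state feedback u = -K x; gains chosen by pole placement.
K = place_poles(A, B, [-1.5, -2.0, -2.5, -3.0]).gain_matrix

# Simulate the closed loop from a small initial tilt (Euler steps).
x, dt = np.array([0.0, 0.0, 0.1, 0.0]), 0.01
for step in range(500):
    u = -(K @ x).item()
    x = x + dt * (A @ x + B.flatten() * u)
print("final pole angle (rad):", round(x[2], 4))  # ~0: pendulum upright
```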
Abstract:
This study presents the findings of applying a Discrete Demand Side Control (DDSC) approach to the space heating of two case study buildings. High and low tolerance scenarios are implemented on the space heating controller to assess the impact of DDSC upon buildings with different thermal capacitances, light-weight and heavy-weight construction. Space heating is provided by an electric heat pump powered from a wind turbine, with a back-up electrical network connection in the event of insufficient wind being available when a demand occurs. Findings highlight that thermal comfort is maintained within an acceptable range while the DDSC controller maintains the demand/supply balance. Whilst it is noted that energy demand increases slightly, as this is mostly supplied from the wind turbine, this is of little significance and hence a reduction in operating costs and carbon emissions is still attained.
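The paper's controller rules are not given in this abstract, but the idea of discrete demand-side control with a comfort tolerance band can be sketched as follows; the setpoint, band width, heating power and thermal model are assumptions for illustration only:

```python
import random

def ddsc_step(T_in, wind_ok, setpoint=20.0, tol=1.5):
    """Discrete demand-side control of a heat pump: heat freely when
    renewable supply is available; when it is not, defer the demand and
    let the room drift within the comfort tolerance band, falling back
    to the grid only when the lower comfort limit is reached."""
    if T_in >= setpoint + tol:
        return "off"
    if wind_ok and T_in < setpoint:
        return "heat_from_wind"
    if T_in < setpoint - tol:
        return "heat_from_grid"   # comfort floor reached: back-up supply
    return "defer"                # ride on heat stored in the fabric

# Toy simulation: first-order thermal model of a room, 5-minute steps.
random.seed(1)
T, T_out, log = 20.0, 5.0, []
for step in range(288):                    # one day
    wind_ok = random.random() < 0.6        # wind available 60% of steps
    mode = ddsc_step(T, wind_ok)
    heat = 2.0 if mode.startswith("heat") else 0.0   # kW, illustrative
    T += 0.02 * (T_out - T) + 0.2 * heat   # fabric loss + heat input
    log.append(mode)
print({m: log.count(m) for m in set(log)})
```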
Abstract:
Automatically extracting interesting objects from videos is a very challenging task, applicable to many research areas such as robotics, medical imaging, content-based indexing and visual surveillance. Automated visual surveillance is a major research area in computational vision, and a commonly applied technique for extracting objects of interest is motion segmentation. Motion segmentation relies on the temporal changes that occur in video sequences to detect objects, but as a technique it presents many challenges that researchers have yet to surmount. Changes in real-time video sequences include not only interesting objects: environmental conditions such as wind, cloud cover, rain and snow may be present, in addition to rapid lighting changes, poor footage quality, moving shadows and reflections, to name only a sample of the challenges. This thesis explores the use of motion segmentation as part of a computational vision system and provides solutions for a practical, generic approach with robust performance, drawing on current neuro-biological, physiological and psychological research in primate vision for inspiration.
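The thesis's own pipeline is not detailed in this abstract; as a baseline illustration of motion segmentation, the sketch below implements simple running-average background subtraction in NumPy (the learning rate and threshold are illustrative choices):

```python
import numpy as np

def motion_segment(frames, alpha=0.05, threshold=25.0):
    """Yield a binary foreground mask per frame using a running-average
    background model: pixels far from the background estimate are
    labelled as moving objects. Slow environmental changes (clouds,
    gradual lighting) are absorbed into the background; abrupt ones are
    exactly the failure cases motion segmentation struggles with."""
    background = frames[0].astype(np.float64)
    for frame in frames[1:]:
        frame = frame.astype(np.float64)
        mask = np.abs(frame - background) > threshold
        background = (1 - alpha) * background + alpha * frame
        yield mask

# Synthetic test: a static noisy scene with a bright square moving across it.
rng = np.random.default_rng(0)
scene = rng.integers(0, 60, size=(64, 64))
frames = []
for t in range(20):
    f = scene.copy()
    f[30:38, 3 * t:3 * t + 8] = 255      # the moving "object"
    frames.append(f)
for i, mask in enumerate(motion_segment(frames)):
    if i == 10:
        print("foreground pixels at frame 10:", int(mask.sum()))
```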
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
A thorough study of the thermal performance of multipass parallel cross-flow and counter-cross-flow heat exchangers has been carried out by applying a new numerical procedure. In this procedure, the heat exchanger is discretized into small elements following the tube-side fluid circuits, each element being itself a one-pass mixed-unmixed cross-flow heat exchanger. Simulated results have been validated through comparison with analytical solutions for one- to four-pass parallel cross-flow and counter-cross-flow arrangements, and very accurate results have been obtained over wide ranges of NTU (number of transfer units) and C* (heat capacity rate ratio) values. New effectiveness data for the aforementioned configurations and higher numbers of tube passes are presented, along with data for a complex flow configuration proposed elsewhere. The proposed procedure constitutes a useful research tool for both theoretical and experimental studies of the thermal performance of cross-flow heat exchangers.
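For the one-pass mixed-unmixed cross-flow element used in the discretization, the standard effectiveness-NTU relations serve as a reference point; a sketch follows, assuming the textbook forms (e.g., Incropera) rather than anything specific to this paper:

```python
import math

def eff_cmax_mixed(ntu, cr):
    """Cross-flow effectiveness, C_max mixed and C_min unmixed."""
    return (1.0 / cr) * (1.0 - math.exp(-cr * (1.0 - math.exp(-ntu))))

def eff_cmin_mixed(ntu, cr):
    """Cross-flow effectiveness, C_min mixed and C_max unmixed."""
    return 1.0 - math.exp(-(1.0 - math.exp(-cr * ntu)) / cr)

# Effectiveness over a range of NTU for C* = 0.5, both mixing cases.
for ntu in (0.5, 1.0, 2.0, 4.0):
    print(f"NTU={ntu:3.1f}  eps(Cmax mixed)={eff_cmax_mixed(ntu, 0.5):.3f}"
          f"  eps(Cmin mixed)={eff_cmin_mixed(ntu, 0.5):.3f}")
```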
Abstract:
OBJECTIVE: This study aimed to assess the practices of pharmacists in hospital care. METHOD: we interviewed 20 pharmacists from the Pharmacy Division by applying a structured instrument in September 2005. This instrument addressed aspects related to the main activities of the Hospital Pharmacy, which were assessed according to indicators organized into five areas: sector management, hospital pharmacotechniques, committee activities, information and pharmacotherapeutic follow-up, and teaching and research activities. RESULTS: the Pharmacy Division considered all structural aspects under analysis as essential for the good development and delivery of its services. We found that some essential services, such as the Medication Information Service and pharmacotherapeutic follow-up, were absent. The pharmacists were dissatisfied with the dimensioning of human resources and physical structure, and were not very active in terms of Pharmaceutical Care. CONCLUSION: the results indicate that care is still centered on the drug, with few clinical activities. We suggest reformulations in service management, particularly in the management of pharmacists.
Abstract:
An accurate estimate of machining time is very important for predicting delivery times and manufacturing costs, and for helping production process planning. Most commercial CAM software systems estimate the machining time in milling operations simply by dividing the entire tool path length by the programmed feed rate. This estimate differs drastically from the real process time because the feed rate is not always constant, due to machine and computer numerical control (CNC) limitations. This study presents a practical mechanistic method for estimating milling time when machining free-form geometries. The method considers a variable called machine response time (MRT), which characterizes the real CNC machine's capacity to move at high feed rates in free-form geometries. MRT is a global performance feature that can be obtained for any type of CNC machine configuration by carrying out a simple test. To validate the methodology, a workpiece was used to generate NC programs for five different types of CNC machines, and a practical industrial case study was also carried out. The results indicated that MRT, and consequently the real machining time, depends on the CNC machine's capabilities; furthermore, the greater the MRT, the larger the difference between predicted and real milling times. The proposed method achieved an error range from 0.3% to 12% of the real machining time, whereas the CAM estimates erred by 211% to 1244%. The MRT-based process is also suggested as an instrument to help in machine tool benchmarking.
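The exact MRT formulation is not given in this abstract; purely as an illustration of why the naive CAM estimate fails, the sketch below compares path-length-over-feed with a toy per-segment model in which no segment can be traversed faster than an assumed machine response time (this per-segment rule is my assumption, not the paper's method):

```python
def naive_cam_time(segment_lengths_mm, feed_mm_per_min):
    """The common CAM estimate: total path length / programmed feed."""
    return sum(segment_lengths_mm) / feed_mm_per_min

def mrt_adjusted_time(segment_lengths_mm, feed_mm_per_min, mrt_min):
    """Toy model: the CNC cannot execute any short free-form segment
    faster than its response time, so each segment costs at least
    `mrt_min` minutes. Illustrative only."""
    return sum(max(L / feed_mm_per_min, mrt_min) for L in segment_lengths_mm)

# Free-form toolpaths consist of thousands of very short segments.
segments = [0.2] * 50_000            # 50k segments of 0.2 mm = 10 m path
feed = 5_000.0                       # mm/min programmed feed
mrt = 0.001                          # min (~60 ms response), illustrative
print(f"naive CAM estimate : {naive_cam_time(segments, feed):6.2f} min")
print(f"MRT-adjusted sketch: {mrt_adjusted_time(segments, feed, mrt):6.2f} min")
```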
Abstract:
Motivated by rising drilling operation costs, the oil industry has shown a trend toward real-time measurement and control. In this scenario, drilling control becomes a challenging problem for the industry, especially because of the difficulty of modeling its parameters. One of the drill-bit performance evaluators, the Rate of Penetration (ROP), has been used as a drilling control parameter; however, the relationships between the operational variables affecting the ROP are complex and not easily modeled. This work presents a neuro-genetic adaptive controller to treat this problem, based on an auto-regressive with extra input (ARX) model and on a Genetic Algorithm (GA) to control the ROP.
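The controller itself is not specified in this abstract; as a sketch of its ARX building block, the code below identifies a second-order ARX model of a scalar output (standing in for ROP) from input-output data by least squares. The data and model orders are synthetic; in the paper's scheme a GA would then search control inputs over such a model:

```python
import numpy as np

def fit_arx(y, u, na=2, nb=2):
    """Least-squares fit of y[t] = a1*y[t-1]+a2*y[t-2]+b1*u[t-1]+b2*u[t-2]."""
    n = max(na, nb)
    rows, targets = [], []
    for t in range(n, len(y)):
        rows.append(np.r_[y[t - na:t][::-1], u[t - nb:t][::-1]])
        targets.append(y[t])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta  # [a1, a2, b1, b2]

# Synthetic plant: generate data from known coefficients plus noise.
rng = np.random.default_rng(42)
u = rng.normal(size=500)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.6 * y[t-1] - 0.2 * y[t-2] + 0.8 * u[t-1] + 0.3 * u[t-2] \
           + 0.01 * rng.normal()
print("estimated [a1, a2, b1, b2]:", np.round(fit_arx(y, u), 3))
```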
Abstract:
The objective of this article is to apply the Design of Experiments (DOE) technique together with Discrete Event Simulation in an automotive process. The benefits of design of experiments in simulation include the possibility of improving the performance of the simulation process, avoiding trial-and-error searches for solutions. The methodology of the joint use of Design of Experiments and computer simulation is presented to assess the effects of the variables involved in the process and of their interactions. In this paper, the efficacy of using process mapping and design of experiments in the conception and analysis phases is confirmed.
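The article's own experiment is not reproduced here; to make the DOE-plus-simulation idea concrete, the sketch below runs a 2^3 full factorial design over a toy discrete-event-style line simulation and estimates the main effect of each factor (the factor names and the simulation are invented for illustration):

```python
import itertools, random

def simulate(n_servers, buffer_size, fast_conveyor, seed=0):
    """Toy stand-in for a discrete event simulation of an automotive
    line: returns parts completed over one shift."""
    rng = random.Random(seed)
    done, queue = 0, 0
    for minute in range(480):                    # one shift
        queue = min(queue + rng.randint(0, 3), buffer_size)
        rate = n_servers * (2 if fast_conveyor else 1)
        served = min(queue, rate)
        queue -= served
        done += served
    return done

# 2^3 full factorial: each factor at a low (index 0) and high (index 1) level.
levels = {"n_servers": (1, 2), "buffer_size": (5, 20), "fast_conveyor": (0, 1)}
runs = []
for combo in itertools.product((0, 1), repeat=3):
    setting = {k: levels[k][c] for k, c in zip(levels, combo)}
    runs.append((combo, simulate(**setting)))

# Main effect of a factor: mean response at high level minus at low level.
for i, name in enumerate(levels):
    hi = [y for c, y in runs if c[i] == 1]
    lo = [y for c, y in runs if c[i] == 0]
    print(f"main effect of {name}: {sum(hi)/4 - sum(lo)/4:+.1f} parts/shift")
```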
Abstract:
This work encompasses the direct electrodeposition of polypyrrole nanowires onto Au substrates using two different electrochemical techniques, normal pulse voltammetry (NPV) and the constant potential method, with the aim of applying these films for the first time to ammonia sensing in solution. The performance of these nanowire-based sensors is compared and evaluated in terms of film morphology (analyzed with scanning electron microscopy), sensitivity towards ammonia, and electrochemical and contact angle measurements. For nanowires prepared by NPV, the sensitivity towards ammonia increases with the amount of electrodeposited polypyrrole, as expected given the role of polypyrrole as the electrochemical transducer for ammonia oxidation. On the other hand, nanowires prepared potentiostatically displayed the unexpected opposite behavior, attributed to the lower conductivity of the longer polypyrrole nanowires obtained through this technique. These results show that the analytical and physico-chemical features of nanostructured sensors can differ greatly from those of their conventional bulk analogues.
Abstract:
BECTS represents the vast majority of childhood focal epilepsy. Owing to the age peculiarity of children who suffer from this disease, i.e., the school-going age of between 6 and 9 years, the condition is often referred to as a school disorder by parents and teachers. Objective: The aim of this study was to evaluate the academic performance of children with BECTS, according to the clinical and electroencephalographic ILAE criteria, and to compare the results of neuropsychological tests of language and attention with the frequency of epileptic discharges. Methods: The performance of 40 school children with BECTS was evaluated by applying a school performance test (SPT), neuropsychological tests (WISC and Trail-Making), and language tests (Illinois Test of Psycholinguistic Abilities, ITPA, and Staggered Spondaic Word, SSW). The same tests were applied in the control group. Results: Children with BECTS, when compared to those in the control group, showed lower scores in academic performance (SPT), the digits and similarities subtests of the WISC, the auditory processing subtest of the SSW, and the ITPA at the representational and automatic levels. The study showed that epileptic discharges did not influence the results. Conclusion: Children with BECTS scored significantly lower in tests of academic performance, when compared with those in the control group, probably due to executive dysfunction.
Abstract:
Construction of a continuous, multidimensional high-performance liquid chromatography system for the separation of proteins and peptides with integrated size-selective sample fractionation. A multidimensional HPLC separation method was developed for proteins and peptides with a molecular weight of <15 kDa. In the first step, the target analytes are separated from higher-molecular-weight and non-ionic components using Restricted Access Materials (RAM) with ion-exchange functionality. The proteins are then separated on an analytical ion-exchange column and on reversed-phase (RP) columns. To avoid sample losses, a continuously operating, fully automated system was built, based on different separation speeds and four parallel RP columns. Two RP columns are eluted simultaneously, with staggered start times, so that shallow gradients still provide sufficient separation performance. While the third column is regenerated, the fourth column is loaded by enriching the proteins and peptides at the column head. Over the total analysis time of 96 minutes, fractions from the first dimension are transferred to the RP columns at 4-minute intervals and separated within 8 minutes, yielding 24 RP chromatograms. Test substances included standard proteins as well as proteins and peptides from human hemofiltrate and from lung fibroblast cell culture supernatants. Fractions were also collected and analyzed by MALDI-TOF mass spectrometry. From a single injection, more than 1000 peaks were resolved across the 24 RP chromatograms; the theoretical peak capacity is approximately 3000.
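The theoretical figure of roughly 3000 is consistent with the usual product rule for two-dimensional peak capacity; the sketch below back-calculates it from the numbers in the abstract (the per-gradient capacity of 125 is an assumed illustrative value, not stated in the thesis):

```python
# 2D-LC peak capacity, product rule: n_2D ≈ n_1 (first-dimension
# fractions) times n_2 (peaks resolvable per second-dimension gradient).
n_fractions = 96 // 4    # 96 min total / 4-min transfer interval = 24
n_rp_per_run = 125       # assumed peaks resolvable per 8-min RP gradient

n_2d = n_fractions * n_rp_per_run
print(f"first-dimension fractions   : {n_fractions}")
print(f"theoretical 2D peak capacity: {n_2d}")   # ≈ 3000
```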
Abstract:
The technology of partial virtualization is a revolutionary approach to the world of virtualization. It lies directly in between full system virtual machines (like QEMU or Xen) and application-level virtual machines (like the JVM or the CLR). The ViewOS project is the flagship of this technique. Developed by the Virtual Square laboratory, it was created to provide an abstract view of the underlying system resources on a per-process basis and to work against the principle of the Global View Assumption. Virtual Square provides several different methods to achieve partial virtualization within the ViewOS system, both at user and kernel levels, each with its own advantages and shortcomings. This paper provides an analysis of the different virtualization methods and of problems related to both the generic and partial virtualization worlds. It is the result of an in-depth study and search for a new technology for providing partial virtualization based on ELF dynamic binaries. It starts with a brief analysis of currently available virtualization alternatives, then describes the ViewOS system, highlighting its current shortcomings. The vloader project is then proposed as a possible solution to some of these inconveniences, with a working proof of concept and examples that outline the potential of this new virtualization technique. By injecting specific code and libraries into the middle of the binary loading mechanism provided by the ELF standard, the vloader project can offer a streamlined and simplified approach to tracing system calls. With the advantages outlined in the paper, this method presents better performance and portability compared to the currently available ViewOS implementations. Some of its disadvantages are also discussed, along with possible solutions.
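vloader's ELF-level injection cannot be reproduced in a few lines, but the underlying idea of interposing on calls by replacing the binding that resolves them can be shown by analogy; the sketch below wraps os.write the way an injected library would wrap a syscall stub (a conceptual analogy at the Python level only, not how vloader actually operates on ELF binaries):

```python
import os, sys

def trace_calls(original):
    """Return a wrapper that logs each call before delegating to the
    original function: the same interposition idea vloader applies to
    syscall stubs at binary-loading time, here done at Python level."""
    def wrapper(*args):
        sys.stderr.write(f"TRACE {original.__name__}{args[:2]}...\n")
        return original(*args)
    return wrapper

# Interpose: rebind the name so subsequent lookups hit the wrapper.
os.write = trace_calls(os.write)

fd = os.open("/tmp/vloader_demo.txt", os.O_CREAT | os.O_WRONLY, 0o600)
os.write(fd, b"hello through the tracer\n")   # logged, then executed
os.close(fd)
```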
Abstract:
Almost 10 years after “The Free Lunch Is Over” article, where the need to parallelize programs started to become a real and mainstream issue, a lot has happened:
• Processor manufacturers are reaching the physical limits of most of their approaches to boosting CPU performance, and are instead turning to hyper-threading and multicore architectures;
• Applications increasingly need to support concurrency;
• Programming languages and systems are increasingly forced to deal well with concurrency.
This thesis attempts an overview of a paradigm that aims to properly abstract the problem of propagating data changes: Reactive Programming (RP). This paradigm proposes an asynchronous, non-blocking approach to concurrency and computation, abstracting away from the low-level concurrency mechanisms.
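To ground the phrase "propagating data changes", here is a minimal sketch of a push-based reactive cell in Python (a toy illustration of the paradigm, not any specific RP library):

```python
class Signal:
    """A value whose changes propagate automatically to derived signals:
    the core of reactive programming's change-propagation abstraction."""
    def __init__(self, value):
        self._value, self._subscribers = value, []

    def get(self):
        return self._value

    def set(self, value):
        self._value = value
        for fn in self._subscribers:     # push the change downstream
            fn(value)

    def map(self, fn):
        """Derive a new signal that reacts whenever this one changes."""
        derived = Signal(fn(self._value))
        self._subscribers.append(lambda v: derived.set(fn(v)))
        return derived

# a := source, b := a * 2; updating a re-evaluates b automatically,
# instead of b holding a stale snapshot as in imperative assignment.
a = Signal(1)
b = a.map(lambda v: v * 2)
a.set(21)
print(b.get())   # 42
```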