907 results for Metallurgical Engineer


Relevance:

10.00%

Publisher:

Abstract:

Proteins are the end products of the genetic machinery. They play essential roles in defining the structure, integrity and dynamics of the cell, carrying out the various chemical transformations required by metabolism and by biochemical signal transmission. We know that the central dogma of molecular biology, one gene = one messenger RNA = one protein, is a gross simplification of the biological system. Several messenger RNAs can arise from a single gene through alternative splicing. Moreover, a protein can adopt several functions over its lifetime depending on its post-translational modification state, its conformation and its interactions with other proteins. The formation of protein complexes can itself be determined by the modification state of the proteins, influenced by genetic context, subcellular compartments and environmental conditions, or be intrinsic to cell growth and division. Protein complexes involved in cell-cycle regulation are particularly difficult to dissect because they form only during specific phases of the cell cycle, are tightly regulated by post-translational modifications and can occur in any subcellular compartment. To date, no general method has been developed that allows a fine dissection of these macromolecular complexes. The objective of this thesis is to establish and demonstrate a new strategy for dissecting the protein complexes formed during the cell cycle of the yeast Saccharomyces cerevisiae (S. cerevisiae). In this thesis, I describe the development and optimization of a simple selection strategy based on a protein-fragment complementation assay using yeast cytosine deaminase as the reporter (OyCD PCA). I also describe a series of validation studies of the OyCD PCA aimed at using it to dissect the activation mechanisms of transcription factors and the protein-protein interactions (PPIs) among cell-cycle regulators. A key feature of the OyCD PCA is that it can detect both the formation and the dissociation of PPIs and produces a detectable signal (cell growth) for both types of selection. I applied the OyCD PCA to dissect the interactions between SBF and MBF, two key transcription factors regulating the G1-to-S phase transition. SBF and MBF are heterodimeric transcription factors composed of two subunits: a protein that binds DNA directly (Swi4 or Mbp1, respectively) and a common protein containing a transcriptional activation domain, Swi6. I applied the OyCD PCA to generate a Swi6 mutant that restricts its transcriptional activity to SBF, abolishing MBF activity. We isolated strains carrying mutations in the C-terminal domain of Swi6, previously identified as responsible for the interaction with Swi4 and Mbp1 and also important for SBF and MBF activity. Our results support a model in which Swi6 undergoes a conformational change upon binding Swi4 or Mbp1. In addition, this Swi6 mutant was used to dissect the mechanism regulating the cell's entry into a new round of cell division, known as START.
We found that the SBF and MBF repressor Whi5 binds directly to the C-terminal domain of Swi6. Finally, I applied the OyCD PCA to dissect the protein complexes of the yeast cyclin-dependent kinase Cdk1. Cdk1 is the essential kinase that regulates cell-cycle progression and can phosphorylate a large number of different substrates by associating with one of nine regulatory cyclin proteins (Cln1-3, Clb1-6). I describe a high-throughput strategy, scalable to the genomic level, to identify Cdk1 interaction partners and to determine which cyclin(s) are required for each interaction, using the OyCD PCA and strains deleted for each of the cyclins. My results allow us to identify the cell-cycle phase(s) in which Cdk1 can phosphorylate a particular substrate and the potential or known function of Cdk1 during that phase. For example, we found that the interaction between Cdk1 and γ-tubulin (Tub4) depends on Clb3, consistent with the role of Tub4 in the nucleation and growth of mitotic spindles emanating from the centromeres. This strategy can also be applied to the study of other PPIs that are controlled by regulatory subunits.

Relevance:

10.00%

Publisher:

Abstract:

The subject of this study is the development, application and diffusion of the technology associated with various types of copper alloys, in particular leaded bronze, in ancient Greece and its colonies, as well as in Etruria. Leaded bronze is a mixture of tin, copper and lead in varying proportions. The general consensus among archaeometallurgists is that leaded bronze was not commonly used in Greece before the Hellenistic period; consequently, this alloy has received very little attention in the archaeological literature. However, metallographic analyses have shown that objects made of bronze with added lead were widely distributed. These analyses have also made it possible to differentiate the composition of the alloys used to produce various types of bronzes, tangible evidence that metalworkers distinguished between the properties of tin bronze and those of leaded bronze. Knowledge of their different working characteristics allowed bronze workers, in many cases, to choose the appropriate alloy for a particular use. The influence of Near Eastern metallurgical practices produced variations both in artistic forms and in the compositions of Greek bronze alloys during the Late Geometric and Orientalizing periods. The use of leaded bronze in particular types of cast objects shows an upward trend from the Orientalizing period onward, culminating in the late Hellenistic period, when high-lead bronze became a common alloy. This study analyses the metallographic data for the category of objects cast in bronze and in leaded bronze. It shows that, although the use of leaded bronze was not as common as that of tin bronze, it was nevertheless an important component of ancient metallurgical practice. The periods covered span the Geometric through the Hellenistic.

Relevance:

10.00%

Publisher:

Abstract:

This research examines the process of intra-organizational knowledge transfer within multinational enterprises (MNEs). Starting from a threefold observation, namely that organizational knowledge constitutes a strategic advantage (Barney, 1991; Bartlett and Ghoshal, 1998), that intra-organizational transfers are the raison d'être of MNEs (Gupta and Govindarajan, 2000), which have access to a vast pool of knowledge spread across the world through their subsidiaries, and that internal organizational mechanisms are more efficient than market mechanisms (Williamson, 1987; Casson, 1976) for transferring knowledge between organizational units, we focus on the factors that can affect the effectiveness of this transfer process. Having identified, in our review of the theoretical literature, a multitude of approaches to this phenomenon, we propose a theoretical model integrating the three stages of the transfer process: determining the knowledge to be transferred, selecting the appropriate transfer mechanisms and, finally, evaluating both the effectiveness of the transfers and the set of contextual factors affecting the effectiveness of the process. On the theoretical level, this research contrasts the two currents that dominate the field. The strategic approach, expressed by the resource-based view, emphasizes the preponderant importance of internal organizational factors for the effectiveness of any organizational action (Bartlett and Ghoshal, 1998; Barney, 1991). This approach is opposed to the institutional current, which instead holds that organizational choices and actions are conditioned mainly by the constraints of the external environment (Ferner, 1997; Kostova, 1999; Scott, 1991). Our results show that, despite the existence of institutional and cultural constraints, the effectiveness of the transfer of knowledge related to human resource management depends primarily on internal organizational conditions and, more particularly, on the involvement of top management, the role granted to the HR function and the alignment between corporate strategy, HR strategy and organizational culture. Methodologically, this is an exploratory qualitative study of three MNEs (two Canadian and one French) operating in the metallurgy and telecommunications sectors. The empirical data come from 17 in-depth interviews conducted in Canada, France, Germany and Switzerland with executives responsible for human resource management, based at the head offices of the MNEs in question or working in their subsidiaries, and from secondary documentary sources.

Relevance:

10.00%

Publisher:

Abstract:

Fuchs endothelial corneal dystrophy (FECD) is a disease of the corneal endothelium. Its pathogenesis is poorly understood and no medical treatment is effective. The only existing treatment is surgical and consists of replacing the pathological endothelium with a healthy endothelium from eye-bank corneas. Surgical treatment, however, carries a 10% rate of immunological rejection. Experimental models are therefore needed to better understand this disease and to develop alternative treatments. The overall goal of this thesis is to develop an experimental model of FECD using tissue engineering. This was achieved in three steps. 1) First, the corneal endothelium was tissue-engineered using cultured endothelial cells from patients with FECD, and this model was characterized in vitro. Briefly, FECD corneal endothelial cells were isolated from Descemet membranes collected during corneal transplantation. Cells at the second or third passage were then seeded onto a previously decellularized human cornea. After 2 weeks of culture, the tissue-engineered FECD corneal endothelia (n = 6) were evaluated by histology, transmission electron microscopy and immunostaining of various proteins. The tissue-engineered FECD corneal endothelia formed a monolayer of polygonal cells well adhered to the Descemet membrane. Immunostaining demonstrated the presence of proteins important for corneal endothelial function, such as Na+-K+/ATPase α1 and Na+/HCO3-, as well as weak and uniform expression of clusterin. 2) Two surgical techniques (DSAEK, for Descemet stripping automated endothelial keratoplasty, and penetrating keratoplasty) were compared for corneal transplantation in the feline animal model. The parameters compared included surgical challenges and clinical outcomes. DSAEK proved difficult to perform in the feline model, and rapid fibrin formation was observed in all DSAEK cases (n = 5). 3) Finally, the in vivo functionality of the tissue-engineered FECD corneal endothelia was evaluated (n = 7). In vivo assessments included transparency, pachymetry and optical coherence tomography. Post-mortem assessments included endothelial cell morphometry, transmission electron microscopy and immunostaining of function-related proteins. After transplantation, pachymetry progressively decreased and transparency progressively increased. Seven days after transplantation, 6 of the 7 grafts were clear. Transmission electron microscopy showed sub-endothelial fibrillar material in all tissue-engineered FECD grafts. The tissue-engineered endothelia also expressed the Na+-K+/ATPase and Na+/HCO3- proteins. In summary, this thesis demonstrates that corneal endothelial cells from late-stage FECD can be used to tissue-engineer a corneal endothelium. Penetrating keratoplasty was shown to be the most appropriate procedure for transplanting these engineered tissues into the eye of the feline animal model.
The restoration of corneal thickness and transparency demonstrates that the tissue-engineered FECD grafts are functional in vivo. These new FECD models demonstrate a rehabilitation of FECD cells, making it possible to use tissue engineering to reconstruct functional endothelia from dystrophic cells. Potential applications are numerous, including pathophysiological and pharmacological studies.

Relevance:

10.00%

Publisher:

Abstract:

Born in Constantine in 1905 CE and deceased in Algiers in 1973 CE, Malek Bennabi, a little-known Algerian thinker and Muslim reformer, devoted his life to studying and analysing the problems linked to the civilization of the Arab-Muslim world. An engineer trained at the École Polytechnique in Paris, Malek Bennabi combined two different cultures, Islamic and Western, and for this reason his analyses are informed by expertise and experience, innovation and emancipation. His thinking is lively; he wrote more than twenty works dealing with varied themes: civilization, civilizational dialogue, culture, ideology, social problems, orientalism, democracy, the colonial system, as well as subjects relating to the Qur'anic phenomenon. Through his writings, he sought to study and analyse the reasons for the stagnation of Arab-Muslim society and the conditions for a new renaissance. Malek Bennabi strove to awaken consciences toward a renaissance of that society. Having lived through the colonial and post-colonial experience in his country, he remained preoccupied by the obstacles to development. For him, the attainment of independence and the construction of a modern state were not enough to lift society out of economic, social and cultural underdevelopment. By rereading the Islamic heritage, as did two recently deceased thinkers, Al Djâbiri and Mohamed Arkoun, Malek Bennabi sought to offer a social energy capable of lifting Arab-Muslim societies out of their underdevelopment and decline. It is from this angle that this thesis develops its reflection, organized around the central question running through Malek Bennabi's thought: the renewal of an Islamic society marked by great diversity. We attempt to answer several questions, the main one being: did Malek Bennabi, through his ideas, offer anything new for changing the Arab-Muslim reality?

Relevance:

10.00%

Publisher:

Abstract:

Indian marine engineers are renowned for employment globally due to their knowledge, skill and reliability. This praiseworthy status has been achieved mainly through the systematic training imparted to marine engineering cadets. However, in an era of advancing technology, marine engineering training has to remain dynamic, absorbing the latest technology and meeting the demands of the shipping industry. New subjects of study have to be included in the curriculum in a timely manner, taking into consideration industry requirements and best practices in shipping. The technical competence of marine engineers also has to evolve with the needs of the ever-growing and heavily regulated shipping industry. Besides, certain soft skills are to be developed and improved among marine engineers in order to alter or amend the personality traits leading to their career success. If timely corrective action is taken, Indian marine engineers can be in still greater demand for employment in the global maritime field. In order to enhance the employability of our marine engineers by improving their quality, a study of marine engineers in general and class IV marine engineers in particular was conducted, based on three distinct surveys: a survey among senior marine engineers, a survey among employers of marine engineers and a survey of class IV marine engineers themselves. The surveys were planned and the questionnaires designed to focus the study of the marine engineer officer class IV from the point of view of these three distinct groups of maritime personnel. As a result, the strengths and weaknesses of class IV marine engineers are identified with regard to their performance on board ships, acquisition of necessary technical skills, employability and career success. The criteria of essential qualities of a marine engineer are classified as academic, technical, social, psychological, physical, mental, emergency responsive, communicative and leadership, and have been assessed for a practicing marine engineer by statistical analysis of the data collected from the surveys. These are assessed for class IV marine engineers from the point of view of senior marine engineers and of employers separately. The findings are delineated and graphically depicted in this thesis. Besides, six pertinent personality traits of a marine engineer, viz. self-esteem, learning style, decision making, motivation, team work and listening self-inventory, have been subjected to study and their correlation with career success has been established wherever possible. This is carried out to develop a theoretical framework for understanding what leads a marine engineer to his career attainment. This enables the author to estimate the personality strengths and weaknesses of a serving marine engineer and eventually to deduce possible corrective measures or modifications in marine engineering training in India. Maritime training is largely based on the International Convention on Standards of Training, Certification and Watchkeeping for Seafarers 1995, its associated Code, and the Merchant Shipping (STCW for Seafarers) Rules 1998. Further, the Maritime Education, Training and Assessment (META) Manual was subjected to critical scrutiny and the relevant findings of the surveys are superimposed on the existing rule requirements and curriculum. Views of senior marine engineers and executives of various shipping companies are taken into account before arriving at the revised syllabus of the marine engineering courses.
Modifications in the pattern of workshop and sea service training for graduate mechanical engineering trainees are recommended. Desirable age brackets for junior engineers and chief engineers, use of the Training and Assessment Record book (TAR Book) during training, etc. have also been evaluated. As a result of this pedagogic introspection of the existing system of marine engineering training in India, this thesis arrives at a revised pattern of workshop training of six months' duration for graduate mechanical engineers, a revised pattern of sea service training of one year's duration, and a modified flow diagram incorporating the above. The effects of various personality traits on career success have been established, along with certain findings for improving the desirable personality traits of marine engineers.
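A minimal stand-in for the trait-versus-career-success analysis described above is sketched below. The respondent data, trait scale and success measure are invented for illustration, and the thesis's actual statistical treatment may differ.

```python
# Pearson correlation between a personality-trait score and a career-success
# indicator, as a minimal stand-in for the survey analysis (data invented).
from scipy.stats import pearsonr

# Hypothetical per-respondent scores: (self-esteem score, years taken to reach chief engineer)
self_esteem = [62, 71, 55, 80, 68, 74, 59, 83, 65, 77]
years_to_chief = [14, 11, 16, 9, 12, 10, 15, 8, 13, 10]

r, p = pearsonr(self_esteem, years_to_chief)
print(f"r = {r:.2f}, p = {p:.3f}")  # negative r: higher self-esteem, faster progression
```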

Relevance:

10.00%

Publisher:

Abstract:

Chapter 1 presents a brief note on the present state of the construction industry, bringing into focus the significance of the critical study. The relevance of the study, the area of investigation and the objectives of the study are outlined in this chapter. The 2nd chapter presents a review of the literature on the relevant areas. In the third chapter, an analysis of time and cost overruns in construction has been carried out, highlighting the major factors responsible for them. A couple of case studies to estimate the loss to the nation on account of delays in construction have been presented in the chapter. The need for an appropriate estimate and a competent contractor has been emphasised for improving effectiveness in project implementation. Certain useful equations and guidelines that can be followed in the State PWD and other Govt. organisations have been formulated in this chapter. Case studies on the implementation of major projects undertaken by Government sponsored/supported organisations in Kerala have been dealt with in Chapter 4. A detailed description of the Kerala Legislature Complex project, with a critical analysis, has been given in this chapter. A detailed account of the investigations carried out on the construction of the International Stadium, a sports project of the Greater Cochin Development Authority, is included here. The project details of Cochin International Airport at Nedumbassery, its promoters and contractors are also discussed in Chapter 4. The various aspects of implementation which made the above projects successful have been discussed in Chapter 5. The data collected were analysed through discussion and perceptions to arrive at certain conclusions. The emergence of the front-loaded contract and its impact on the economics of project execution are dealt with in this chapter. An analysis of delays in respect of the various projects narrated in Chapter 3 has been done here. The root causes of project time and cost overruns and their remedial measures are also listed in this chapter. The study of cost and time overruns of any construction project is a part of construction management. Under the present environment of heavy investment in construction activities in India, the consequences of mismanagement many a time lead to excessive expenditure that could have been avoided. Cost consciousness, therefore, has to be keener than ever before. Optimization in investment can be achieved by improved dynamism in construction management. The successful completion of construction projects within the specified programme, optimizing the three major attributes of the process - quality, schedule and costs - has become the most valuable and challenging task for engineer-managers to perform. So, the various aspects of construction management such as cost control, schedule control, quality assurance, management techniques etc. have also been discussed in this fifth chapter. Chapter 6 summarises the conclusions drawn from the above critical study of major construction projects in Kerala.

Relevance:

10.00%

Publisher:

Abstract:

The demand for new telecommunication services requiring higher capacities, data rates and different operating modes has motivated the development of new-generation multi-standard wireless transceivers. A multi-standard design often involves extensive system-level analysis and architectural partitioning, typically requiring extensive calculations. In this research, a decimation filter design tool for the wireless communication standards GSM, WCDMA, WLANa, WLANb, WLANg and WiMAX is developed in MATLAB® using the GUIDE environment for visual analysis. The user can select the required wireless communication standard and obtain the corresponding multistage decimation filter implementation using this toolbox. The toolbox helps the user or design engineer perform a quick design and analysis of decimation filters for multiple standards without carrying out the extensive calculations of the underlying methods.
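To indicate the kind of per-standard computation such a toolbox automates, the sketch below (written in Python rather than the MATLAB/GUIDE tool itself) splits an assumed overall decimation factor into stages and designs a lowpass FIR filter for each stage with scipy.signal. The input rate, stage factors and band edges are illustrative assumptions, loosely modeled on a WCDMA-like channel.

```python
# Minimal sketch of multistage decimation filter design (illustrative only;
# the thesis tool is a MATLAB/GUIDE toolbox -- the factors and specs below are assumed).
import numpy as np
from scipy import signal

fs_in = 64 * 3.84e6   # assumed oversampled input rate (e.g. sigma-delta ADC output)
stages = [8, 4, 2]    # assumed factorisation of the overall decimation factor (64)
passband = 1.92e6     # assumed signal bandwidth at the final output rate

fs = fs_in
filters = []
for m in stages:
    fs_out = fs / m
    # Keep the passband intact and suppress everything that would alias into it.
    cutoff = min(passband * 1.2, 0.45 * fs_out)
    numtaps = 2 * int(4 * fs / (fs_out - passband)) + 1   # crude length estimate
    taps = signal.firwin(numtaps, cutoff, fs=fs)
    filters.append((m, taps))
    fs = fs_out

for i, (m, taps) in enumerate(filters, 1):
    print(f"stage {i}: decimate by {m}, {len(taps)} taps")

def decimate_chain(x):
    """Apply the designed stages to a signal sampled at fs_in."""
    for m, taps in filters:
        x = signal.upfirdn(taps, x, up=1, down=m)
    return x
```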

Relevance:

10.00%

Publisher:

Abstract:

The country has witnessed a tremendous increase in vehicle population and in axle loads during the last decade, leaving its road network overstressed and leading to premature failure. The type of deterioration present in the pavement should be considered in determining whether it has a functional or structural deficiency, so that an appropriate overlay type and design can be developed. Structural failure arises from conditions that adversely affect the load-carrying capability of the pavement structure; inadequate thickness, cracking, distortion and disintegration cause structural deficiency. Functional deficiency arises when the pavement does not provide a smooth riding surface and comfort to the user. This can be due to poor surface friction and texture, hydroplaning and splash from the wheel path, rutting, and excess surface distortion such as potholes, corrugation, faulting, blow-ups, settlement, heaves, etc. Functional condition determines the level of service provided by the facility to its users at a particular time and also the Vehicle Operating Costs (VOC), thus influencing the national economy. Prediction of pavement deterioration is helpful to assess the remaining effective service life (RSL) of the pavement structure on the basis of reduction in performance levels, and to apply alternative designs and rehabilitation strategies with a long-range funding requirement for pavement preservation. In addition, such predictions indicate the impact of treatments on the condition of the sections. Infrastructure prediction models can thus be classified into four groups, namely primary response models, structural performance models, functional performance models and damage models. The factors affecting the deterioration of roads are very complex in nature and vary from place to place. Hence there is a need for a thorough study of the deterioration mechanism under varied climatic zones and soil conditions before arriving at a definite strategy of road improvement. Realizing the need for a detailed study involving all types of roads in the state with varying traffic and soil conditions, the present study has been attempted. This study attempts to identify the parameters that affect the performance of roads and to develop performance models suitable to Kerala conditions. A critical review of the various factors that contribute to pavement performance has been presented, based on the data collected from selected road stretches and also from five corporations of Kerala. These roads represent urban conditions as well as National Highways, State Highways and Major District Roads in suburban and rural conditions. This research work studies the road condition of Kerala with respect to varying soil, traffic and climatic conditions, carries out periodic performance evaluation of selected roads of representative types, and develops distress prediction models for the roads of Kerala. In order to achieve this aim, the study is divided into two parts. The first part deals with the study of the pavement condition and subgrade soil properties of urban roads distributed in the five Corporations of Kerala, namely Thiruvananthapuram, Kollam, Kochi, Thrissur and Kozhikode. From the 44 selected roads, 68 homogeneous sections were studied. The data collected on the functional and structural condition of the surface include pavement distress in terms of cracks, potholes, rutting, raveling and pothole patching.
The structural strength of the pavement was measured as rebound deflection using Benkelman Beam deflection studies. In order to collect the details of the pavement layers and determine the subgrade soil properties, trial pits were dug and the in-situ field density was found using the Sand Replacement Method. Laboratory investigations were carried out to determine the subgrade soil properties: soil classification, Atterberg limits, Optimum Moisture Content, Field Moisture Content and 4-day soaked CBR. The relative compaction in the field was also determined. Traffic details were collected by conducting traffic volume count and axle load surveys. From the data thus collected, the strength of the pavement, a function of the layer coefficients and thicknesses, was calculated and represented as the Structural Number (SN). This was further related to the CBR value of the soil to obtain the Modified Structural Number (MSN). The condition of the pavement was represented in terms of the Pavement Condition Index (PCI), which is a function of the surface distress at the time of the investigation and was calculated in the present study using the deduct-value method developed by the US Army Corps of Engineers. The influence of subgrade soil type and pavement condition on the relationship between MSN and rebound deflection was studied using appropriate plots for the predominant soil types and for classified values of the Pavement Condition Index. The relationship will help practicing engineers design the overlay thickness required for a pavement without conducting the BBD test. Regression analysis using SPSS was done with various trials to find the best-fit relationship between the rebound deflection and CBR, and other soil properties for Gravel, Sand, Silt and Clay fractions. The second part of the study deals with periodic performance evaluation of selected road stretches representing National Highways (NH), State Highways (SH) and Major District Roads (MDR), located in different geographical conditions and carrying varying traffic. 8 road sections divided into 15 homogeneous sections were selected for the study and 6 sets of continuous periodic data were collected. The periodic data collected include the functional and structural condition in terms of distress (potholes, pothole patches, cracks, rutting and raveling), skid resistance using a portable skid resistance pendulum, surface unevenness using a Bump Integrator, texture depth using the sand patch method and rebound deflection using the Benkelman Beam. Baseline data of the study stretches were collected as one-time data. Pavement history was obtained as secondary data. Pavement drainage characteristics were collected in terms of camber or cross slope, measured with a camber board (slope meter) for the carriageway and shoulders, availability of longitudinal side drains, presence of valleys, terrain condition, soil moisture content, water table data, High Flood Level, rainfall data, land use and cross slope of the adjoining land. These data were used to determine the drainage condition of the study stretches. Traffic studies were conducted, including classified volume counts and axle load studies. From the field data thus collected, the progression of each parameter was plotted for all the study roads and validated for accuracy. The Structural Number (SN) and Modified Structural Number (MSN) were calculated for the study stretches.
The progression of the deflection, distress, unevenness, skid resistance and macro texture of the study roads was evaluated. Since the deterioration of a pavement is a complex phenomenon influenced by all the above factors, pavement deterioration models were developed as nonlinear regression models using SPSS, with the periodic data collected for all the above road stretches. General models were developed for cracking progression, raveling progression, pothole progression and roughness progression using SPSS. A model for construction quality was also developed. Calibration of the HDM-4 pavement deterioration models for local conditions was done using the data for cracking, raveling, potholes and roughness. Validation was done using the data collected in 2013. The application of HDM-4 to compare different maintenance and rehabilitation options was studied considering deterioration parameters such as cracking, potholes and raveling. The alternatives considered for analysis were the base alternative with crack sealing and patching, an overlay of 40 mm BC using ordinary bitumen, an overlay of 40 mm BC using Natural Rubber Modified Bitumen, and an overlay of Ultra Thin White Topping. Economic analysis of these options was done considering the Life Cycle Cost (LCC). The average speed that can be obtained by applying these options was also compared. The results were in favour of Ultra Thin White Topping over flexible pavements. Hence, Design Charts were also plotted for the estimation of maximum wheel load stresses for different slab thicknesses under different soil conditions. The design charts show the maximum stress for a particular slab thickness and different soil conditions, incorporating different k values. These charts can be handy for a design engineer. Fuzzy rule-based models developed for site-specific conditions were compared with the regression models developed using SPSS. The Riding Comfort Index (RCI) was calculated and correlated with unevenness to develop a relationship. Relationships were also developed between the Skid Number and the Macro Texture of the pavement. The effort made through this research work will be helpful to highway engineers in understanding the behaviour of flexible pavements under Kerala conditions and in arriving at suitable maintenance and rehabilitation strategies. Key Words: Flexible Pavements – Performance Evaluation – Urban Roads – NH – SH and other roads – Performance Models – Deflection – Riding Comfort Index – Skid Resistance – Texture Depth – Unevenness – Ultra Thin White Topping
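The structural indices used above can be made concrete with a short sketch: the Structural Number is the sum of layer coefficients times layer thicknesses, and it is commonly converted to a Modified Structural Number with the HDM-III subgrade correction based on CBR. The layer values below are assumed, and the correction term is stated as an assumption about the form used in the study.

```python
# Structural Number (SN) and Modified Structural Number (MSN) sketch.
# Layer coefficients/thicknesses are assumed values; the MSN subgrade term is the
# common HDM-III correction and may differ from the exact form used in the thesis.
import math

layers = [
    # (layer coefficient a_i per inch, thickness in inches)
    (0.40, 1.6),   # bituminous surfacing (~40 mm)
    (0.14, 9.0),   # granular base (~225 mm)
    (0.11, 6.0),   # granular sub-base (~150 mm)
]
cbr = 8.0          # 4-day soaked subgrade CBR, per cent (assumed)

sn = sum(a * d for a, d in layers)
msn = sn + 3.51 * math.log10(cbr) - 0.85 * math.log10(cbr) ** 2 - 1.43

print(f"SN  = {sn:.2f}")
print(f"MSN = {msn:.2f}")
```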

Relevance:

10.00%

Publisher:

Abstract:

Genetic programming is known to provide good solutions for many problems like the evolution of network protocols and distributed algorithms. In such cases it is most likely a hardwired module of a design framework that assists the engineer in optimizing specific aspects of the system to be developed. It provides its results in a fixed format through an internal interface. In this paper we show how the utility of genetic programming can be increased remarkably by isolating it as a component and integrating it into the model-driven software development process. Our genetic programming framework produces XMI-encoded UML models that can easily be loaded into widely available modeling tools, which in turn possess code generation as well as additional analysis and test capabilities. We use the evolution of a distributed election algorithm as an example to illustrate how genetic programming can be combined with model-driven development. This example clearly illustrates the advantages of our approach – the generation of source code in different programming languages.
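A generic, self-contained sketch of a genetic programming component is given below; it evolves arithmetic expression trees toward a target function with mutation and truncation selection. It is not the authors' framework (which emits XMI-encoded UML models); the operator set, target function and population parameters are assumptions made purely for illustration.

```python
# Minimal tree-based genetic programming loop (generic illustration, not the
# XMI/UML-producing framework described in the abstract).
import random
import operator

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}
TERMINALS = ['x', 1.0, 2.0]

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree, samples):
    # Mean squared error against an assumed target behaviour: x**2 + x + 1.
    return sum((evaluate(tree, x) - (x * x + x + 1)) ** 2 for x in samples) / len(samples)

def mutate(tree):
    if random.random() < 0.2 or not isinstance(tree, tuple):
        return random_tree(2)               # replace a subtree
    return (tree[0], mutate(tree[1]), mutate(tree[2]))

samples = [i / 4 for i in range(-8, 9)]
population = [random_tree() for _ in range(200)]
for generation in range(50):
    population.sort(key=lambda t: fitness(t, samples))
    survivors = population[:50]             # truncation selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(150)]

best = min(population, key=lambda t: fitness(t, samples))
print(best, fitness(best, samples))
```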

Relevance:

10.00%

Publisher:

Abstract:

We provide a new method for systematically structuring the top-down level of ontologies. It is based on an interactive, top-down knowledge acquisition process, which assures that the knowledge engineer considers all possible cases while avoiding redundant acquisition. The method is suited especially for creating/merging the top part(s) of the ontologies, where high accuracy is required, and for supporting the merging of two (or more) ontologies on that level.

Relevance:

10.00%

Publisher:

Abstract:

Ontologies have been established for knowledge sharing and are widely used as a means for conceptually structuring domains of interest. With the growing usage of ontologies, the problem of overlapping knowledge in a common domain becomes critical. In this short paper, we address two methods for merging ontologies based on Formal Concept Analysis: FCA-Merge and ONTEX. --- FCA-Merge is a method for merging ontologies following a bottom-up approach which offers a structural description of the merging process. The method is guided by application-specific instances of the given source ontologies. We apply techniques from natural language processing and formal concept analysis to derive a lattice of concepts as a structural result of FCA-Merge. The generated result is then explored and transformed into the merged ontology with human interaction. --- ONTEX is a method for systematically structuring the top-down level of ontologies. It is based on an interactive, top-down knowledge acquisition process, which assures that the knowledge engineer considers all possible cases while avoiding redundant acquisition. The method is suited especially for creating/merging the top part(s) of the ontologies, where high accuracy is required, and for supporting the merging of two (or more) ontologies on that level.
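The Formal Concept Analysis machinery underlying FCA-Merge can be illustrated with a small, self-contained sketch: given a toy object-attribute context (standing in for the application-specific instances mentioned above), it enumerates all formal concepts, i.e. the nodes of the concept lattice. The context and its attributes are invented for illustration and this is not the FCA-Merge or ONTEX implementation itself.

```python
# Minimal Formal Concept Analysis sketch: compute all formal concepts of a tiny
# object-attribute context by closing attribute sets (illustrative data only).
from itertools import combinations

# Assumed toy context: which objects (documents) mention which terms.
context = {
    "doc1": {"ontology", "merge"},
    "doc2": {"ontology", "lattice"},
    "doc3": {"merge", "lattice"},
}
attributes = set().union(*context.values())

def extent(intent_set):
    """All objects having every attribute in `intent_set`."""
    return {g for g, attrs in context.items() if intent_set <= attrs}

def intent(objs):
    """All attributes shared by every object in `objs`."""
    return set.intersection(*(context[g] for g in objs)) if objs else set(attributes)

concepts = set()
for r in range(len(attributes) + 1):
    for subset in combinations(sorted(attributes), r):
        a = extent(set(subset))
        b = intent(a)          # (a, b) is a formal concept: extent(b) == a
        concepts.add((frozenset(a), frozenset(b)))

for ext, inte in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[1]))):
    print(sorted(ext), sorted(inte))
```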

Relevance:

10.00%

Publisher:

Abstract:

Since no physical system can ever be completely isolated from its environment, the study of open quantum systems is pivotal to reliably and accurately controlling complex quantum systems. In practice, the reliability of the control field needs to be confirmed via certification of the target evolution, while accuracy requires the derivation of high-fidelity control schemes in the presence of decoherence. In the first part of this thesis an algebraic framework is presented that makes it possible to determine the minimal requirements for the unique characterisation of arbitrary unitary gates in open quantum systems, independent of the particular physical implementation of the employed quantum device. To this end, a set of theorems is devised that can be used to assess whether a given set of input states on a quantum channel is sufficient to judge whether a desired unitary gate is realised. This yields the minimal input for such a task, which proves to be, quite remarkably, independent of system size. These results elucidate the fundamental limits regarding certification and tomography of open quantum systems. The combination of these insights with state-of-the-art Monte Carlo process certification techniques permits a significant improvement in scaling when certifying arbitrary unitary gates. This improvement is not restricted to quantum information devices where the basic information carrier is the qubit; it also extends to systems where the fundamental informational entities can be of arbitrary dimensionality, the so-called qudits. The second part of this thesis concerns the impact of these findings from the point of view of Optimal Control Theory (OCT). OCT for quantum systems utilises concepts from engineering such as feedback and optimisation to engineer constructive and destructive interferences in order to steer a physical process in a desired direction. It turns out that the aforementioned mathematical findings allow one to deduce novel optimisation functionals that significantly reduce not only the memory required for numerical control algorithms but also the total CPU time required to obtain a certain fidelity for the optimised process. The thesis concludes by discussing two problems of fundamental interest in quantum information processing from the point of view of optimal control - the preparation of pure states and the implementation of unitary gates in open quantum systems. For both cases specific physical examples are considered: for the former, the vibrational cooling of molecules via optical pumping, and for the latter, a superconducting phase qudit implementation. In particular, it is illustrated how features of the environment can be exploited to reach the desired targets.
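As a minimal numerical illustration of the open-system dynamics this work is concerned with (not of its certification or control results), the sketch below integrates a Lindblad master equation for a single qubit subject to amplitude damping; the Hamiltonian, decay rate and integration step are assumed values.

```python
# Minimal Lindblad master-equation integration for one qubit with amplitude
# damping (illustration of open-system decoherence only; parameters assumed).
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # lowering operator |0><1|
H = 0.5 * 1.0 * sz                               # qubit Hamiltonian, omega = 1 (assumed)
L = np.sqrt(0.1) * sm                            # collapse operator, gamma = 0.1 (assumed)

def lindblad_rhs(rho):
    comm = -1j * (H @ rho - rho @ H)
    diss = L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)
    return comm + diss

# Start in the excited state |1><1| and integrate with a simple Euler step.
rho = np.array([[0, 0], [0, 1]], dtype=complex)
dt, steps = 0.01, 2000
for _ in range(steps):
    rho = rho + dt * lindblad_rhs(rho)

print("excited-state population:", rho[1, 1].real)   # decays roughly as exp(-gamma * t)
```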

Relevance:

10.00%

Publisher:

Abstract:

I present a novel design methodology for the synthesis of automatic controllers, together with a computational environment---the Control Engineer's Workbench---integrating a suite of programs that automatically analyze and design controllers for high-performance, global control of nonlinear systems. This work demonstrates that difficult control synthesis tasks can be automated, using programs that actively exploit and efficiently represent knowledge of nonlinear dynamics and phase space and effectively use the representation to guide and perform the control design. The Control Engineer's Workbench combines powerful numerical and symbolic computations with artificial intelligence reasoning techniques. As a demonstration, the Workbench automatically designed a high-quality maglev controller that outperforms a previous linear design by a factor of 20.
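For comparison, a conventional local design for the maglev example can be sketched in a few lines: linearize a standard magnetic-levitation model about an operating point and compute a state-feedback gain by pole placement with scipy. This is only an assumed textbook-style baseline of the kind the Workbench's design reportedly outperforms, not the Workbench's global nonlinear method; the plant parameters and target poles are illustrative.

```python
# Linearized maglev model and pole-placement state feedback (a conventional
# local design for comparison; plant parameters and pole locations are assumed).
import numpy as np
from scipy.signal import place_poles

g = 9.81          # gravity [m/s^2]
x0 = 0.01         # equilibrium air gap [m] (assumed)
i0 = 1.0          # equilibrium coil current [A] (assumed)

# Nonlinear model: m*x'' = m*g - C*i^2/x^2, with C such that (x0, i0) is an
# equilibrium. Linearizing in (dx, dx_dot) with input di gives:
A = np.array([[0.0, 1.0],
              [2.0 * g / x0, 0.0]])          # unstable: open-loop pole at +sqrt(2g/x0)
B = np.array([[0.0],
              [-2.0 * g / i0]])

# Place both closed-loop poles well into the left half-plane (assumed targets).
K = place_poles(A, B, [-30.0, -40.0]).gain_matrix
print("state-feedback gain K =", K)
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```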

Relevance:

10.00%

Publisher:

Abstract:

In this thesis I present a language for instructing a sheet of identically-programmed, flexible, autonomous agents (``cells'') to assemble themselves into a predetermined global shape, using local interactions. The global shape is described as a folding construction on a continuous sheet, using a set of axioms from paper-folding (origami). I provide a means of automatically deriving the cell program, executed by all cells, from the global shape description. With this language, a wide variety of global shapes and patterns can be synthesized, using only local interactions between identically-programmed cells. Examples include flat layered shapes, all plane Euclidean constructions, and a variety of tessellation patterns. In contrast to approaches based on cellular automata or evolution, the cell program is directly derived from the global shape description and is composed from a small number of biologically-inspired primitives: gradients, neighborhood query, polarity inversion, cell-to-cell contact and flexible folding. The cell programs are robust, without relying on regular cell placement, global coordinates, or synchronous operation and can tolerate a small amount of random cell death. I show that an average cell neighborhood of 15 is sufficient to reliably self-assemble complex shapes and geometric patterns on randomly distributed cells. The language provides many insights into the relationship between local and global descriptions of behavior, such as the advantage of constructive languages, mechanisms for achieving global robustness, and mechanisms for achieving scale-independent shapes from a single cell program. The language suggests a mechanism by which many related shapes can be created by the same cell program, in the manner of D'Arcy Thompson's famous coordinate transformations. The thesis illuminates how complex morphology and pattern can emerge from local interactions, and how one can engineer robust self-assembly.
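One of the biologically-inspired primitives named above, the gradient, can be illustrated with a short simulation: randomly placed cells that communicate only with neighbors within a fixed radius relax a hop-count gradient outward from a seed cell. This is a sketch of the primitive only, not of the thesis's language or cell programs; the cell count, communication radius and seed choice are assumptions.

```python
# Hop-count gradient on randomly placed cells with purely local communication
# (illustrates the "gradient" primitive only; parameters are assumed).
import random
import math

random.seed(0)
N, RADIUS = 400, 0.08                       # cells on a unit square, comms radius
cells = [(random.random(), random.random()) for _ in range(N)]

def neighbours(i):
    xi, yi = cells[i]
    return [j for j, (xj, yj) in enumerate(cells)
            if j != i and math.hypot(xi - xj, yi - yj) <= RADIUS]

# Each cell stores a gradient value; the seed holds 0, every other cell
# repeatedly sets its value to min(neighbour values) + 1 until nothing changes.
grad = [math.inf] * N
grad[0] = 0
changed = True
while changed:
    changed = False
    for i in range(1, N):
        best = min((grad[j] for j in neighbours(i)), default=math.inf) + 1
        if best < grad[i]:
            grad[i] = best
            changed = True

reached = [g for g in grad if g < math.inf]
print(f"cells reached: {len(reached)}/{N}, max hop count: {max(reached)}")
```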