926 results for Complex engineering problems


Relevance:

30.00%

Publisher:

Abstract:

The significance of services as business and human activities has increased dramatically throughout the world in the last three decades. Becoming an ever more competitive and efficient service provider while still providing unique value opportunities for customers requires new knowledge and ideas. Part of this knowledge is created and utilized in daily activities in every service organization, but not all of it, and therefore an emerging phenomenon in the service context is information awareness. Terms like big data and the Internet of Things are not merely modern buzzwords; they describe urgent requirements for new types of competences and solutions. As the amount of information increases and the systems processing it become more efficient and intelligent, it is human understanding and objectives that may become separated from the automated processes and technological innovations. This is an important challenge and the core driver for this dissertation: What kind of information is created, possessed and utilized in the service context, and, even more importantly, what information exists but is not acknowledged or used? The focus of this dissertation is the relationship between service design and service operations. Reframing this relationship means viewing the service system from an architectural perspective. The selected perspective allows the relationship between design activities and operational activities to be analysed as an information system while maintaining a tight connection to existing service research contributions and approaches. This innovative approach is supported by a research methodology that relies on design science theory. The methodological process supports the construction of a new design artifact based on existing theoretical knowledge, the creation of new innovations, and the testing of the design artifact's components in real service contexts. The relationship between design and operations is analysed in health care and social care service systems. Existing contributions in service research tend to abstract services and service systems as value-creation, working or interactive systems. This dissertation adds an important information-processing-system perspective to the research. The main contribution centres on the following argument: only part of the service information system is automated and computerized, whereas a significant part of information processing is embedded in human activities, communication and ad hoc reactions. The results indicate that the relationship between service design and service operations is more complex and dynamic than existing scientific and managerial models tend to assume. Both activities create, utilize, mix and share information, making service information management a necessary but relatively unknown managerial task. At the architectural level, service-system-specific elements seem to disappear, while access to more general information elements and processes can be found. Although this dissertation focuses on conceptual-level design artifact construction, the results also have very practical implications for service providers. Personal, visual and hidden service activities, and more importantly all changes that take place in any service system, also have an information dimension. Making this information dimension visible and prioritizing the processed information along service dimensions is likely to open new opportunities to improve activities and offer a new type of service potential for customers.

Relevance:

30.00%

Publisher:

Abstract:

The main objective of this master's thesis is to examine whether Weibull analysis is a suitable method for warranty forecasting in the Case Company. The Case Company has used ReliaSoft's Weibull++ software, which is based on the Weibull method, but has noticed that the analysis has not given correct results. This study was conducted by running Weibull simulations in different profit centers of the Case Company and then comparing actual costs against forecasted costs. Simulations were made using different time frames and two methods for determining future deliveries. The first sub-objective is to examine which simulation parameters give the best result for each profit center. The second sub-objective is to create a simple control model for following forecasted costs and actual realized costs. The third sub-objective is to document all QlikView parameters of the profit centers. This study is constructive research, and solutions to the company's problems are worked out in this master's thesis. The theory part introduces quality issues, for example what quality is, quality costing, and the cost of poor quality. Quality is one of the major concerns in the Case Company, so understanding the link between quality and warranty forecasting is important. Warranty management and other tools for warranty forecasting are also introduced, along with the Weibull method, its mathematical properties, and reliability engineering. The main result of this master's thesis is that the Weibull analysis forecasted too high costs when calculating the provision. Although some forecasted values for profit centers were lower than actual values, the method works better for planning purposes. One reason is that quality improvement, or alternatively quality deterioration, does not show in the results of the analysis in the short run. Another reason for the too-high values is that the products of the Case Company are complex and the analyses were made at the profit-center level. The Weibull method was developed for standard products, but the products of the Case Company consist of many complex components. According to the theory, the method was developed for homogeneous data. The most important observation is therefore that the analysis should be made at the product level, not the profit-center level, where the data are more homogeneous.
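As a rough illustration of the forecasting step described above, the sketch below fits a two-parameter Weibull distribution to a handful of hypothetical failure ages and projects warranty cost for a hypothetical delivery volume; the figures and the SciPy-based workflow are assumptions for illustration, not the Case Company's Weibull++ setup.

```python
# A minimal sketch of Weibull-based warranty forecasting, assuming failure
# ages (in months) for one product line and hypothetical cost figures; it
# illustrates the general method, not the Case Company's actual analysis.
import numpy as np
from scipy.stats import weibull_min

failure_ages = np.array([3.1, 7.4, 8.9, 12.0, 14.2, 15.5, 18.3, 21.7])  # months to claim

# Fit the two-parameter Weibull (location fixed at 0).
shape, loc, scale = weibull_min.fit(failure_ages, floc=0)

warranty_months = 24
units_shipped = 5000            # hypothetical future deliveries
cost_per_claim = 120.0          # hypothetical average repair cost

# Expected fraction of units failing within the warranty period.
fail_fraction = weibull_min.cdf(warranty_months, shape, loc=loc, scale=scale)
forecast_cost = units_shipped * fail_fraction * cost_per_claim

print(f"shape k = {shape:.2f}, scale = {scale:.1f} months")
print(f"expected claims: {units_shipped * fail_fraction:.0f}, "
      f"forecast cost: {forecast_cost:,.0f}")
```

Note that fitting only the observed failure ages ignores censoring (units still in the field without a claim), which a real warranty analysis must account for; this is one reason naive fits tend to overestimate costs.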

Relevance:

30.00%

Publisher:

Abstract:

In today's world, because of rapid advances in technology and business, requirements are often unclear and change continuously during the development process. These changing requirements make software development very difficult. Using traditional software development methods such as the waterfall method is not a good option, as traditional methods are not flexible to changing requirements, and the software can end up late and over budget. To develop high-quality software that satisfies the customer, organizations can use software development methods, such as agile methods, that are flexible to requirement changes at any stage of the development process. Agile methods are iterative and incremental methods that can accelerate the delivery of initial business value through continuous planning and feedback, with close communication between the customer and the developers. The main purpose of this thesis is to identify the problems in traditional software development and to show how agile methods reduce those problems. The study also examines the success factors of agile methods and the success rate of agile projects, and compares traditional and agile software development.

Relevance:

30.00%

Publisher:

Abstract:

Recombinant human adenovirus (Ad) vectors are being extensively explored for use in gene therapy and recombinant vaccines. Ad vectors are attractive for many reasons, including the facts that (1) they are relatively safe, based on their use as live oral vaccines, (2) they can accept large transgene inserts, (3) they can infect dividing and postmitotic cells, and (4) they can be produced at high titers. However, there are also a number of major problems associated with Ad vectors, including transient foreign gene expression due to host cellular immune responses, problems with humoral immunity, and the creation of replication-competent adenoviruses (RCA). Most Ad vectors contain deletions in the E1 region that allow for insertion of a transgene. However, the E1 gene products are required for replication and thus must be supplied in trans by a helper cell line that allows for the growth and packaging of the defective virus. For this purpose the 293 cell line (Graham et al., 1977) is used most often; however, homologous recombination between the vector and the cell line often results in the generation of RCA. The presence of RCA in batches of adenoviral vectors for clinical use is a safety risk because they may result in the mobilization and spread of the replication-defective vector viruses, and in significant tissue damage and pathogenicity. The present research focused on altering the 293 cell line so that RCA formation can be eliminated. The strategy to modify the 293 cells involved removing the first 380 bp of the adenovirus genome through homologous recombination. The first step towards this goal involved identifying and cloning the left-end cellular-viral junction from 293 cells to assemble the sequences required for homologous recombination. Polymerase chain reaction (PCR) was performed to clone the junction, and the clone was verified through sequencing. The plasmid PAM2 was then constructed, which served as the targeting cassette used to modify the 293 cells. The cassette consisted of (1) the cellular-viral junction as the left-end region of homology, (2) the neo gene for positive selection upon transfection into 293 cells, (3) the adenoviral genome from bp 380 to bp 3438 as the right-end region of homology, and (4) the HSV-tk gene for negative selection. The plasmid PAM2 was linearized to produce a double-strand break outside the region of homology and transfected into 293 cells using the calcium-phosphate technique. Cells were first selected for resistance to the drug G418, and subsequently for resistance to the drug ganciclovir (GANC). From 17 transfections, 100 pools of G418- and GANC-resistant cells were picked using cloning rings and expanded for screening. Genomic DNA was isolated from the pools and screened for the presence of the 380 bp using PCR. Ten of the most promising pools were diluted to single cells and expanded in order to isolate homogeneous cell lines. From these, an additional 100 G418- and GANC-resistant foci were screened. These preliminary screening results appear promising for the detection of the desired cell line. Future work would include further cloning and purification of the promising cell lines that have potentially undergone homologous recombination, in order to isolate a homogeneous cell line of interest.

Relevance:

30.00%

Publisher:

Abstract:

Complex networks have recently attracted a significant amount of research attention due to their ability to model real-world phenomena. One important problem often encountered is to limit diffusive processes spreading over the network, for example mitigating pandemic disease or computer virus spread. A number of problem formulations have been proposed that aim to solve such problems based on desired network characteristics, such as maintaining the largest network component after node removal. The recently formulated critical node detection problem aims to remove a small subset of vertices from the network such that the residual network has minimum pairwise connectivity. Unfortunately, the problem is NP-hard, and the number of constraints is cubic in the number of vertices, making very large-scale problems impossible to solve with traditional mathematical programming techniques. Even approximation strategies such as dynamic programming and evolutionary algorithms are unusable for networks that contain thousands to millions of vertices. A computationally efficient and simple approach is required in such circumstances, but none currently exists. In this thesis, such an algorithm is proposed. The methodology is based on a depth-first search traversal of the network and a specially designed ranking function that considers information local to each vertex. Due to the variety of network structures, a number of characteristics must be taken into consideration and combined into a single rank that measures the utility of removing each vertex. Since removing vertices sequentially changes the network structure, an efficient post-processing algorithm is also proposed to quickly re-rank vertices. Experiments on a range of common complex network models with varying numbers of vertices are considered, in addition to real-world networks. The proposed algorithm, DFSH, is shown to be highly competitive and often outperforms existing strategies such as Google PageRank for minimizing pairwise connectivity.
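To make the objective concrete, here is a small sketch of the pairwise-connectivity measure and a naive greedy baseline built on networkx; it illustrates the problem the thesis attacks, not the DFSH ranking algorithm itself.

```python
# A minimal sketch of the critical node detection objective and a simple
# greedy heuristic; an illustration of the problem, not the thesis's DFSH.
import networkx as nx

def pairwise_connectivity(G):
    """Number of connected vertex pairs: sum over components of |C|*(|C|-1)/2."""
    return sum(len(c) * (len(c) - 1) // 2 for c in nx.connected_components(G))

def greedy_critical_nodes(G, k):
    """Remove k vertices, each time deleting the vertex whose removal
    most reduces pairwise connectivity (re-ranked after every deletion)."""
    H = G.copy()
    removed = []
    for _ in range(k):
        best = min(
            H.nodes,
            key=lambda v: pairwise_connectivity(nx.restricted_view(H, [v], [])),
        )
        H.remove_node(best)
        removed.append(best)
    return removed, pairwise_connectivity(H)

G = nx.barabasi_albert_graph(200, 2, seed=1)
nodes, residual = greedy_critical_nodes(G, k=10)
print(f"removed {nodes}, residual pairwise connectivity = {residual}")
```

This exhaustive greedy re-evaluates connectivity for every candidate vertex at every step, which is exactly the kind of cost that becomes prohibitive on large networks and that a local, DFS-based ranking function is designed to avoid.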

Relevance:

30.00%

Publisher:

Abstract:

Exploring the new science of emergence allows us to create a classroom very different from the modern classroom as it has been conceptualised under a mentality of efficiency and output. Working on the whole person, and not just the mind, we see a shift from the epistemic pillars of truth to more ontological concerns regarding student achievement in our postmodern and critical discourses. It is important to understand these shifts and how we are to transition our own perception and mentality, not only in our research methodologies but also in our approach to conceptualising issues in education and sustainability. We can no longer think linearly when approaching complex problems, nor advocate for education while disregarding our interconnectedness insofar as it enhances our children's education. We must, therefore, contemplate and transition to a world that is ecological and not mechanical, complex and not complicated; in essence, we must work to link mind with body and self with environment, and transcend these in order to bring about an integration toward a sustainable future. A fundamental shift in consciousness and perception may implicate our habit of creating dichotomous entities in our own microcosms, yet postmodern theorists assume, a priori, that these dualities can be bridged in naturalism alone. I, on the other hand, embrace metaphysics to understand the implicated modern classroom in a hierarchical context and ask: is not the very omission of metaphysics in postmodern discourse a symptom of an education whose foundation was built in its absence? The dereliction of ancient wisdom in education is very peculiar indeed. Western mindfulness may play a vital role in consummating pragmatic idealism, but only under circumstances admitting metaphysics can we truly transcend our limitations, thereby placing Eastern mindfulness not as an ecological component, but as an ecological and metaphysical foundation.

Relevance:

30.00%

Publisher:

Abstract:

Photosystem II (PSII) of oxygenic photosynthesis has the unique ability to photochemically oxidize water, extracting electrons from water to evolve oxygen gas while delivering those electrons to the rest of the photosynthetic machinery, which in turn reduces CO2 to carbohydrate molecules that act as fuel for the cell. Unfortunately, native PSII is unstable and not suitable for industrial applications. Consequently, there is a need to reverse-engineer the water-oxidation photochemistry of PSII using solution-stable proteins. But what does it take to reverse-engineer PSII's reactions? PSII contains the pigment with the highest oxidation potential in nature, known as P680. The high oxidation potential of P680 is in fact the driving force for water oxidation. P680 is a chlorophyll a dimer embedded in the relatively hydrophobic transmembrane environment of PSII. In this thesis, the electrostatic factors contributing to the high oxidation potential of P680 are described. PSII oxidizes water at a specialized metal cluster known as the oxygen-evolving complex (OEC). The pathways that water can take to enter the relatively hydrophobic region of PSII are described as well. A previous attempt had been made to reverse-engineer PSII's reactions using the protein scaffold of E. coli's bacterioferritin (BFR). The oxidation potential of the pigment used for the BFR 'reaction centre' was measured, and the protein effects were calculated in a similar fashion to the P680 potential calculations in PSII. The oxidation potential of the BFR-RC's pigment was found to be 0.57 V, too low to oxidize water or tyrosine as PSII does. We suggest that the observed tyrosine oxidation in BFR-RC could be driven by the ZnCe6 di-cation. In order to increase the efficiency of tyrosine oxidation, and ultimately oxidize water, the first potential of ZnCe6 would have to attain a value in excess of 0.8 V. These results were used to develop a second generation of BFR-RC using a high-oxidation-potential pigment. The hypervalent phosphorus porphyrin forms a radical pair that can be observed using transient electron paramagnetic resonance (TR-EPR). Finally, the results of this thesis are discussed in light of the development of solar-fuel-producing systems.

Relevance:

30.00%

Publisher:

Abstract:

The inverse problem in electroencephalography (EEG) is the localization of current sources in the brain using the surface potentials generated by those sources on the scalp. An inverse solution typically involves multiple calculations of scalp surface potentials, i.e., the EEG forward problem. Solving the forward problem requires models both for the underlying source configuration (the source model) and for the surrounding tissues (the head model). This thesis treats two quite distinct approaches to solving the EEG forward and inverse problems using the boundary element method (BEM): the conventional approach and the reciprocal approach. The conventional approach to the forward problem computes scalp potentials starting from dipolar current sources. The reciprocal approach, on the other hand, first determines the electric field at the dipole source sites when the surface electrodes are used to inject and withdraw a unit current. The scalar product of this electric field with the dipole sources then yields the surface potentials. The reciprocal approach promises a number of advantages over the conventional approach, including the possibility of increased accuracy of the surface potentials and reduced computational requirements for inverse solutions. In this thesis, the BEM equations for the conventional and reciprocal approaches are developed using a common formulation, the weighted residual method. The numerical implementation of both approaches for the forward problem is described for a single-dipole source model. A three-layer concentric-sphere head model, for which analytical solutions are available, is used. Surface potentials are computed at the centroids or at the vertices of the BEM discretization elements. The performance of the conventional and reciprocal approaches for the forward problem is evaluated for radial and tangential dipoles of varying eccentricity and two widely different values of skull conductivity. We then determine whether the potential advantages of the reciprocal approach suggested by the forward-problem simulations can be exploited to yield more accurate inverse solutions. Single-dipole inverse solutions are obtained by simplex minimization for both the conventional and reciprocal approaches, each with centroid and vertex variants. Again, numerical simulations are performed on a three-layer concentric-sphere model for radial and tangential dipoles of varying eccentricity. The inverse-solution accuracy of the two approaches is compared for the two different skull conductivities, and their relative sensitivities to skull-conductivity error and to noise are evaluated. While the conventional vertex approach yields the most accurate forward solutions for the presumably more realistic skull conductivity, both the conventional and reciprocal approaches produce large scalp-potential errors for highly eccentric dipoles. The reciprocal approaches show the least variation in forward-solution accuracy across the different skull conductivity values.
In terms of single-dipole inverse solutions, the conventional and reciprocal approaches are of comparable accuracy. Localization errors are small, even for highly eccentric dipoles that produce large errors in the scalp potentials, owing to the nonlinear nature of single-dipole inverse solutions. Both approaches also proved equally robust to skull-conductivity errors in the presence of noise. Finally, a more realistic head model is obtained from magnetic resonance images (MRI), from which the scalp, skull and brain/cerebrospinal fluid (CSF) surfaces are extracted. Both approaches are validated on this type of model using real somatosensory evoked potentials recorded after median-nerve stimulation in healthy subjects. The accuracy of the inverse solutions for the conventional and reciprocal approaches and their variants, compared against known anatomical landmarks on MRI, is again assessed for the two different skull conductivities. Their advantages and disadvantages, including their computational requirements, are also weighed. Once again, the conventional and reciprocal approaches produce small dipole-position errors. Indeed, position errors for single-dipole inverse solutions are inherently robust to inaccuracy in the forward solutions, but depend on the superimposed activity of other neural sources. Contrary to expectations, the reciprocal approaches do not improve dipole-position accuracy over the conventional approaches. However, reduced computational requirements in time and memory are the principal advantages of the reciprocal approaches. This type of localization is potentially useful in planning neurosurgical interventions, for example in patients with refractory focal epilepsy who have often already undergone EEG and MRI.
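To give a concrete sense of single-dipole inverse solutions obtained by simplex minimization, here is a deliberately simplified sketch: the forward model is a current dipole in an infinite homogeneous conductor rather than the three-sphere or BEM head models of the thesis, and the electrode positions, conductivity and "true" dipole are invented for illustration.

```python
# A minimal sketch of single-dipole localization by simplex (Nelder-Mead)
# minimization, using the analytic potential of a dipole in an infinite
# homogeneous medium as a stand-in forward model. All values are made up.
import numpy as np
from scipy.optimize import minimize

SIGMA = 0.33  # conductivity (S/m), a typical soft-tissue value

def forward(dipole_pos, dipole_moment, electrodes):
    """V(r) = p . (r - r0) / (4*pi*sigma*|r - r0|^3)."""
    d = electrodes - dipole_pos
    r3 = np.linalg.norm(d, axis=1) ** 3
    return (d @ dipole_moment) / (4 * np.pi * SIGMA * r3)

rng = np.random.default_rng(0)
electrodes = rng.uniform(-0.1, 0.1, size=(32, 3)) + np.array([0, 0, 0.1])
true_pos = np.array([0.02, -0.01, 0.04])
moment = np.array([0.0, 0.0, 1.0])   # unit dipole moment, for numerical convenience
measured = forward(true_pos, moment, electrodes)

# Inverse solution: simplex search for the position minimizing the residual.
def cost(pos):
    return np.sum((forward(pos, moment, electrodes) - measured) ** 2)

res = minimize(cost, x0=np.zeros(3), method="Nelder-Mead")
print("recovered position:", res.x, "error:", np.linalg.norm(res.x - true_pos))
```

A full implementation would also estimate the dipole moment (typically by a linear fit at each candidate position) and would replace this forward model with the BEM solution; the reciprocal approach reduces the cost of the repeated forward evaluations inside exactly this kind of search.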

Relevance:

30.00%

Publisher:

Abstract:

A main objective of software engineering is to be able to produce complex, large-scale, reliable software in a reasonable time. Object-oriented (OO) technology has provided good concepts and modeling and programming techniques that have enabled the development of complex applications in both academia and industry. This experience, however, has also revealed weaknesses of the object paradigm (for example, code scattering and the traceability problem). Aspect-oriented (AO) programming provides a simple solution to the limitations of OO programming, such as the problem of crosscutting concerns. Crosscutting concerns manifest themselves as the same code scattered across several modules of the system, or as several pieces of code tangled within a single module. This new way of programming makes it possible to implement each concern independently of the others and then assemble them according to well-defined rules. AO programming thus promises better productivity, better code reuse, and better adaptability of code to change. This new approach quickly spread over the whole software development process, with the goal of preserving modularity and traceability, two important properties of high-quality software. However, AO technology presents many challenges. Reasoning about, specifying, and verifying AO programs is difficult, all the more so as these programs evolve over time. Modular reasoning about such programs is therefore required; otherwise they would have to be re-examined in full every time a component is changed or added. It is well known in the literature, however, that modular reasoning about AO programs is difficult, since applied aspects often change the behavior of their base components [47]. The same difficulties arise in the specification and verification phases of the software development process. To the best of our knowledge, modular specification and modular verification are weakly covered and constitute a very interesting field of research. Likewise, interactions between aspects are a serious problem in the aspect community. To address these problems, we chose to use category theory and algebraic specification techniques, building on the work of Wiels [110] and other contributions such as those described in the book [25]. We assume that the system under development is already decomposed into aspects and classes. The first contribution of this thesis is the extension of algebraic specification techniques to the notion of aspect. Second, we define a logic, LA, used in the body of specifications to describe the behavior of their components. The third contribution is the definition of a weaving operator corresponding to the interconnection relation between aspect modules and class modules. The fourth contribution concerns the development of a prevention mechanism that prevents undesirable interactions in aspect-oriented systems.
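To illustrate the crosscutting-concern problem in executable form, the sketch below uses a Python decorator as a stand-in for aspect weaving: the logging concern, which would otherwise be scattered through every business method, lives in one place and is attached declaratively. This is only an analogy; the thesis itself works at the level of algebraic specifications, not Python code.

```python
# A minimal sketch of the crosscutting-concern idea, with a decorator
# playing the role of "advice" and its application playing the role of
# weaving; an analogy to AO programming, not the thesis's formalism.
import functools
import logging

logging.basicConfig(level=logging.INFO)

def logged(func):
    """The logging concern lives here once, instead of being scattered
    through every business method of the system."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logging.info("entering %s args=%s", func.__name__, args)
        result = func(*args, **kwargs)
        logging.info("leaving %s result=%s", func.__name__, result)
        return result
    return wrapper

# "Weaving": the concern is attached declaratively, leaving the base
# module free of any logging code.
@logged
def transfer(src, dst, amount):
    return f"{amount} moved from {src} to {dst}"

transfer("A", "B", 100)
```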

Relevance:

30.00%

Publisher:

Abstract:

This thesis aims to improve automation in Model Driven Engineering (MDE). MDE is a paradigm that promises to reduce software complexity through the intensive use of models and of automatic model transformations (MT) between them. Put simply, in the MDE vision, specialists use several models to represent a piece of software and produce the source code by automatically transforming those models. Automation is consequently a key factor and a founding principle of MDE. Beyond MT, other activities need automation, e.g. the definition of modeling languages and software migration. In this context, the main contribution of this thesis is a general approach for improving MDE automation. Our approach is based on metaheuristic search guided by examples. We apply it to two important MDE problems: (1) model transformation and (2) the precise definition of modeling languages. For the first problem, we distinguish between transformation in the context of migration and general transformations between models. In the migration case, we propose a software clustering method based on a metaheuristic guided by clustering examples. Similarly, for general transformations, we learn model transformations using a genetic programming algorithm that draws on examples of past transformations. For the precise definition of modeling languages, we propose a metaheuristic-search-based method that derives well-formedness rules for metamodels, with the objective of discriminating well between valid and invalid models. The empirical studies we conducted show that the proposed approaches obtain good results, both quantitative and qualitative, allowing us to conclude that improving MDE automation using metaheuristic search methods and examples can contribute to wider adoption of MDE in industry in the years to come.
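The flavor of example-guided metaheuristic search can be conveyed with a toy: below, a hill climber learns a token-renaming "transformation" from source/target example pairs. This stands in, at miniature scale, for the genetic-programming and clustering machinery of the thesis; all names and examples are invented for illustration.

```python
# A minimal sketch of example-guided metaheuristic search: a hill climber
# learns a token-renaming rule set that reproduces example transformations.
import random

examples = [
    ("Class Person", "Table Person"),
    ("Attr name",    "Column name"),
    ("Class Order",  "Table Order"),
]
src_tokens = ["Class", "Attr"]
tgt_tokens = ["Table", "Column", "Key"]

def apply(rule, text):
    return " ".join(rule.get(tok, tok) for tok in text.split())

def fitness(rule):
    """Number of examples the candidate transformation reproduces exactly."""
    return sum(apply(rule, s) == t for s, t in examples)

random.seed(0)
rule = {tok: random.choice(tgt_tokens) for tok in src_tokens}
for _ in range(100):  # hill climbing with single-token mutations
    mutant = dict(rule)
    mutant[random.choice(src_tokens)] = random.choice(tgt_tokens)
    if fitness(mutant) >= fitness(rule):
        rule = mutant
    if fitness(rule) == len(examples):
        break

print(rule)  # expected: {'Class': 'Table', 'Attr': 'Column'}
```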

Relevance:

30.00%

Publisher:

Abstract:

Model Driven Engineering (MDE) is a well-established software engineering paradigm that advocates the use of models as first-class artifacts in software development and maintenance activities. Manipulating several models during the software life cycle motivates the use of model transformations (MT) to automate model generation and update operations whenever possible. Writing model transformations nevertheless remains an arduous task, requiring both considerable knowledge and considerable effort, thereby calling into question the advantages brought by MDE. To face this problem, much research has addressed the automation of MT. Model Transformation By Example (MTBE) is, in this regard, a promising approach. MTBE aims to learn model transformation programs from a set of pairs of source and target models supplied as examples. In this work, we propose a process for learning model transformations by example. It aims to learn complex model transformations by addressing three observed requirements, namely, exploring the context in the source model, verifying source attribute values, and deriving complex target attributes. We validate our approach experimentally on 7 model transformation cases. Three of the seven learned transformations yield perfect target models. Moreover, precision and recall above 90% are recorded for the target models obtained by the four remaining transformations.
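For the evaluation reported above, precision and recall are presumably computed by comparing the elements of each obtained target model with those of the expected one; the sketch below shows the standard computation on hypothetical model elements.

```python
# A minimal sketch of precision/recall for a learned transformation's
# output, compared element-by-element against the expected target model;
# the element names are hypothetical, the formulas are the standard
# precision = TP/(TP+FP) and recall = TP/(TP+FN).
expected = {"Table Person", "Column name", "Column age", "Key id"}
obtained = {"Table Person", "Column name", "Column address"}

tp = len(expected & obtained)      # correct elements produced
precision = tp / len(obtained)     # fraction of produced elements that are correct
recall = tp / len(expected)        # fraction of expected elements that were produced

print(f"precision = {precision:.2f}, recall = {recall:.2f}")
```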

Relevance:

30.00%

Publisher:

Abstract:

Zeolite-encapsulated transition metal complexes have received wide attention as an effective heterogenized system that combines the tremendous activity of the metal complexes with the attractive features of the zeolite structure. Zeolite-encapsulated complexes offer a bright prospect for replacing homogeneous systems while retaining their catalytic activity and minimizing the technical problems, especially for the partial oxidation of organic compounds. Studies on some zeolite-encapsulated transition metal complexes are presented in this thesis. The selected ligands are technically important from a biomimetic or structural perspective. In this study, attempts have been made to investigate the composition, structure and stability of the encapsulated complexes using the available techniques. The catalytic activity of the encapsulated complexes was evaluated for the oxidation of some organic compounds, and the recyclability of the catalysts resulting from encapsulation was also studied. Our studies on Cu-Cr/Al2O3, a typical metal oxide catalyst, illustrate the use of design techniques to modify the properties of such conventional catalysts. The catalytic activity of this catalyst for the oxidation of carbon monoxide was measured, and the effect of additives like CeO2 or TiO2 on the activity and stability of this system was investigated. The additives are able to improve the activity and stability of the catalyst, making it more effective for commercial use.

Relevance:

30.00%

Publisher:

Abstract:

Construction activities in the coastal belt of our country often demand deep foundations because of the poor engineering properties of, and the related problems arising from, weak soil at shallow depths. The soil profile in coastal areas often consists of very loose sandy soils extending to a depth of 3 to 4 m below ground level, underlain by clayey soils of medium consistency. The very low shearing resistance of the foundation bed causes local as well as punching shear failure, and hence structures built on these soils may suffer excessive settlements. This type of soil profile is very common in the coastal areas of Kerala, especially in Cochin. Furthermore, the high water table and the limited depth of the top sandy layer in these areas restrict the depth of foundations, further reducing the safe bearing capacity.
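For context on the safe-bearing-capacity concern raised above, the sketch below evaluates Terzaghi's classical bearing-capacity equation for a shallow strip footing with Vesic's bearing-capacity factors; the soil parameters are illustrative values for a loose sand, not measurements from the thesis.

```python
# A minimal sketch of a safe-bearing-capacity estimate for a strip footing
# using Terzaghi's equation q_ult = c*Nc + gamma*Df*Nq + 0.5*gamma*B*Ng,
# with Vesic's factors; all inputs below are illustrative assumptions.
import math

def bearing_capacity(c, phi_deg, gamma, B, Df, FS=3.0):
    """Return (ultimate, safe) bearing capacity in kPa for a strip footing.
    c: cohesion (kPa), phi_deg: friction angle, gamma: unit weight (kN/m^3),
    B: footing width (m), Df: founding depth (m), FS: factor of safety."""
    phi = math.radians(phi_deg)
    Nq = math.exp(math.pi * math.tan(phi)) * math.tan(math.pi / 4 + phi / 2) ** 2
    Nc = (Nq - 1) / math.tan(phi)
    Ng = 2 * (Nq + 1) * math.tan(phi)          # Vesic (1973)
    q_ult = c * Nc + gamma * Df * Nq + 0.5 * gamma * B * Ng
    return q_ult, q_ult / FS

q_ult, q_safe = bearing_capacity(c=0, phi_deg=28, gamma=18.0, B=1.5, Df=1.0)
print(f"q_ult = {q_ult:.0f} kPa, safe bearing capacity = {q_safe:.0f} kPa")
```

A high water table, as noted above, reduces the effective unit weight (roughly halving it below the water line) and therefore further reduces the computed safe bearing capacity.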

Relevance:

30.00%

Publisher:

Abstract:

Faculty of Engineering. Cochin University of Science and Technology

Relevance:

30.00%

Publisher:

Abstract:

Timely detection of sudden changes in dynamics that adversely affect the performance of systems and the quality of products has great scientific relevance. This work focuses on the effective detection of dynamical changes in real-time signals from mechanical as well as biological systems using the fast and robust technique of permutation entropy (PE). The results are used to detect chatter onset in machine turning and to identify vocal disorders from speech signals. Permutation entropy is a nonlinear complexity measure that can efficiently distinguish regular and complex behaviour in any signal and extract information about a change in the dynamics of the process through a sudden change in its value. Here we propose the use of PE to detect dynamical changes in two nonlinear processes: turning, a mechanical system, and speech, a biological system. The effectiveness of PE in detecting changes in the dynamics of the turning process is studied using time series generated from samples of audio and current signals. Experiments are carried out on a lathe for both a sudden increase and a continuous increase in depth of cut on mild steel workpieces, keeping the speed and feed rate constant. The results are applied to detect chatter onset in machining and are verified using frequency spectra of the signals and a nonlinear measure, the normalized coarse-grained information rate (NCIR). PE analysis is also carried out to investigate the variation in surface texture caused by chatter on the machined workpiece. A statistical parameter from the optical grey-level intensity histogram of the laser speckle pattern, recorded using a charge-coupled device (CCD) camera, is used to generate the time series required for PE analysis, and a standard optical roughness parameter is used to confirm the results. The application of PE to identifying vocal disorders is studied using speech signals recorded with a microphone. Here the analysis is carried out on speech signals of subjects with different pathological conditions and of normal subjects, and the results are used to identify vocal disorders; the standard linear technique of the FFT is used to substantiate the results. The results of the PE analysis in all three cases clearly indicate that this complexity measure is sensitive to changes in the regularity of a signal and hence can suitably be used for the detection of dynamical changes in real-world systems. This work establishes the application of the simple, inexpensive and fast PE algorithm for the benefit of advanced manufacturing processes as well as clinical diagnosis of vocal disorders.
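As a concrete reference for the measure used throughout, below is a minimal sketch of Bandt-Pompe permutation entropy; the embedding dimension m and delay tau are the usual free parameters, and the specific values and test signals here are illustrative assumptions.

```python
# A minimal sketch of Bandt-Pompe permutation entropy: count ordinal
# patterns of length m in the signal, take the Shannon entropy of their
# distribution, and normalize by log(m!) so the result lies in [0, 1].
import math
from collections import Counter
import numpy as np

def permutation_entropy(x, m=3, tau=1):
    """Normalized permutation entropy of a 1-D signal x."""
    x = np.asarray(x)
    n = len(x) - (m - 1) * tau
    patterns = Counter(
        tuple(np.argsort(x[i : i + m * tau : tau])) for i in range(n)
    )
    probs = np.array(list(patterns.values())) / n
    H = -np.sum(probs * np.log(probs))
    return H / math.log(math.factorial(m))

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)
regular = np.sin(2 * np.pi * t)        # regular signal -> low PE
noisy = rng.standard_normal(2000)      # white noise -> PE near 1

print(f"PE(sine)  = {permutation_entropy(regular):.3f}")
print(f"PE(noise) = {permutation_entropy(noisy):.3f}")
```

In a change-detection setting, PE would be computed over sliding windows of the signal, with a sudden shift in its value flagging the onset of chatter or a change in vocal dynamics.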