961 results for Structural design code
Abstract:
Phosphoserine aminotransferase (PSAT; EC 2.6.1.52) is a vitamin B6-dependent enzyme and a member of subgroup IV of the aminotransferase superfamily. Here, X-ray crystallography was used to determine the structure of PSAT from Bacillus alcalophilus with pyridoxamine 5′-phosphate (PMP) at high resolution (1.57 Å), and the active-site residues and their conformational changes were analysed. The structure is of good quality, as indicated, for example, by the final Rwork and Rfree values (0.1331 and 0.1495, respectively). The enzyme was initially crystallized in the presence of the substrate L-glutamate with the aim of producing the enzyme-substrate complex. However, the structure determination revealed no glutamate bound at the active site. Instead, the Schiff base between Lys196 and pyridoxal 5′-phosphate (PLP) appeared broken, resulting in the formation of PMP owing to the excess of donor substrate used during co-crystallization. Structural comparison with the free PSAT enzyme and the PSAT-PSER complex showed that the aromatic ring of the cofactor remains in almost the same position in all structures. A flexible loop near the active site was found in the same position as in the free PSAT structure, whereas in the PSAT-PSER structure it moves inwards to interact with PSER. Comparison of B-factors across all three structures (the PSAT-PMP complex, free PSAT, and the PSAT-PSER complex) showed elevated loop flexibility in the absence of the substrate, indicating that loop flexibility plays an important role during substrate binding. The reported structure provides mechanistic insight into the reaction of PSAT and may help in better understanding the roles of various parts of the structure for the design of novel compounds as potential disruptors of PSAT function. This may lead to the development of new drugs targeting the human and bacterial PSAT active sites.
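For reference, the R-factors cited above are the standard crystallographic agreement statistics (this definition is general background, not taken from the thesis):

$$R_{\mathrm{work}} = \frac{\sum_{hkl}\,\big||F_{\mathrm{obs}}| - |F_{\mathrm{calc}}|\big|}{\sum_{hkl} |F_{\mathrm{obs}}|}$$

Rfree is computed by the same formula over a small test set of reflections withheld from refinement, so the close agreement between the two values (0.1331 and 0.1495) indicates a model that fits the data without over-fitting.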
Abstract:
The overall objective of this thesis is to design a robot chassis frame, the load-bearing structure of a vehicle that supports all mechanical components and provides structure and stability. Various techniques and scientific principles were used to design the chassis frame, and design principles were applied throughout the process. Using SolidWorks software, virtual models were made for the chassis frame. A chassis frame of overall dimensions 1597 × 800 × 950 mm was designed. The centre of mass lies at one third of the length from the front wheel, at a height of 338 mm in the symmetry plane. The overall weight of the chassis frame is 80.12 kg. A manufacturing drawing is also provided. Additionally, structural analysis was performed in FEMAP, which validates the chassis design by taking into consideration stress and deflection under different kinds of loading resembling real-life cases. On the basis of the simulated results, the selected material was verified. The resulting design is expected to perform its intended function without failure. As a suggestion for further research, additional fatigue analysis and a proper dynamic analysis could be conducted to make the study more robust.
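The abstract does not report the load cases or section properties, so the following is only a minimal sketch of the kind of static check that the FEMAP analysis automates, using classical beam formulas; every numeric value (load, section, material) is a hypothetical placeholder, not thesis data:

```python
# Static check sketch for one simply supported frame member under a
# central point load. All values are assumed placeholders.

F = 2000.0               # applied load [N] (assumed)
L = 1.597                # member span [m], taken equal to the frame length
E = 210e9                # Young's modulus of structural steel [Pa]
I = 8.0e-7               # second moment of area of the section [m^4] (assumed)
c = 0.02                 # distance from neutral axis to outer fibre [m] (assumed)
yield_strength = 355e6   # e.g. S355 steel [Pa] (assumed material)

M_max = F * L / 4                     # peak bending moment at midspan
sigma_max = M_max * c / I             # peak bending stress, sigma = M*c/I
delta_max = F * L**3 / (48 * E * I)   # midspan deflection

print(f"stress {sigma_max/1e6:.1f} MPa, deflection {delta_max*1e3:.2f} mm, "
      f"safety factor {yield_strength/sigma_max:.1f}")
```

A finite element model replaces these closed-form expressions with element-level stress recovery, but the acceptance logic (computed stress and deflection checked against allowable values) is the same.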
Abstract:
The vast majority of our contemporary society owns a mobile phone, which has resulted in a dramatic rise in the number of networked computers in recent years. Security issues in these computers have followed the same trend, and nearly everyone is now affected by them. How could the situation be improved? For software engineers, an obvious answer is to build computer software with security in mind. A problem with building secure software is how to define secure software, or how to measure security. This thesis divides the problem into three research questions. First, how can we measure the security of software? Second, what types of tools are available for measuring security? And finally, what do these tools reveal about the security of software? Measuring tools of this kind are commonly called metrics. This thesis focuses on the perspective of software engineers in the software design phase. The focus on the design phase means that code-level semantics and programming-language specifics are not discussed in this work. Organizational policy, management issues, and the software development process are also out of scope. The first two research problems were studied using a literature review, while the third was studied using case study research. The target of the case study was a Java-based email server called Apache James, whose changelog and security issues were documented and whose source code was accessible. The research revealed that there is a consensus in the terminology on software security. Security verification activities are commonly divided into evaluation and assurance. The focus of this work was on assurance, which means verifying one's own work. There are 34 metrics available for security measurement, of which five are evaluation metrics and 29 are assurance metrics. We found, however, that the general quality of these metrics was not good. Only three metrics in the design category passed the inspection criteria and could be used in the case study. The metrics claim to give quantitative information on the security of the software, but in practice they were limited to comparing different versions of the same software. Apart from being relative, the metrics were unable to detect security issues or point out problems in the design. Furthermore, interpreting the metrics' results was difficult. In conclusion, the general state of software security metrics leaves a lot to be desired. The metrics studied had both theoretical and practical issues and are not suitable for daily engineering workflows. They nevertheless provide a basis for further research, since they point out areas where security metrics must improve if verification of security from the design phase is desired.
Abstract:
There has been an increase in interest in service design as companies have become more customer-centric and their focus has shifted to customer experiences. The actual organisational purchasing of service design had been given little attention until recent years. The purpose of this study is to explore the purchasing of service design from the perspectives of sellers (service design agencies) and buying clients (business organisations). In order to understand the phenomenon, the agencies' and clients' approaches to the service design discipline, their purchasing processes, challenges related to purchasing, and ways of facilitating purchasing are also explored. The research follows a qualitative method and utilises abductive reasoning. A proposition framework was formed by combining the services marketing, design, and organisational buying behaviour literatures, and was tested against real-life business cases. Empirical data were gathered by interviewing eight service design agency representatives and five client representatives in Finland. The results of the semi-structured interviews were analysed by finding repetitive themes, and the proposition framework was updated according to the findings. There were both similarities and differences in the agencies' and clients' approaches to service design. Service design represents a strategic activity for both parties, and it helps in clients' business development and in discovering opportunities. It is an ideology; a way of thinking and working. The driving force for purchasing service design seemed to be something other than service design itself: projects have been bought for 1) change- and innovation-related development, 2) channel-related development, or 3) customer experience-related development. Seven purchasing challenge themes were recognised: 1) poor or differing understanding of service design, 2) the selling of service design, 3) varying expectations, 4) the difficulty of pre-evaluation, 5) buyers and buying companies, 6) project process and nature, and 7) unclear project results. All of these can be considered to cause challenges in organisational service design purchasing. Challenges can be caused by either participant, the agency or the client, and can take place at any point in the purchasing process. Some of the challenges can be considered barriers to purchasing, or they play a role in an unsuccessful service project and therefore result in an unsuccessful organisational purchase. Purchasing could be facilitated in various ways by either participant; some ways are more attitude-based, others actionable improvements. The thesis's theoretical and managerial findings can be utilised to improve both the selling and the purchasing of service design services.
Abstract:
Human-Centered Design (HCD) is a well-recognized approach to the design of interactive computing systems that support the everyday and professional lives of people. To that end, the HCD approach puts central emphasis on an explicit understanding of users and the context of use by involving users throughout the entire design and development process. With mobile computing, the diversity of users as well as the variety in the spatial, temporal, and social settings of the context of use has notably expanded, which affects the effort required of interaction designers to understand users and the context of use. The emergence of the mobile apps era in 2008, as a result of structural changes in the mobile industry and the profoundly enhanced capabilities of mobile devices, further intensified the embeddedness of technology in people's daily lives and the challenges interaction designers face in cost-efficiently understanding users and the context of use. Supporting interaction designers in this challenge requires understanding their existing practice, rationality, and work environment. The main objective of this dissertation is to contribute to interaction design theories by generating understanding of the HCD practice of mobile systems in the mobile apps era, and to explain the rationality of interaction designers in attending to users and the context of use. To achieve this, a literature study was carried out, followed by mixed-methods research that combines multiple qualitative interview studies and a quantitative questionnaire study. The dissertation contributes new insights regarding the evolving HCD practice at an important time of transition from stationary computing to mobile computing. Firstly, a gap is identified between interaction design as practiced in research and in industry regarding the involvement of users in context: whereas the utilization of field evaluations, i.e. in real-life environments, has become more common in academic projects, interaction designers in industry still rely, by and large, on lab evaluations. Secondly, the findings indicate new aspects that can explain this gap and the rationality of interaction designers in industry in attending to users and context; essentially, the professional-client relationship was found to inhibit the involvement of users, while the mental distance between practitioners and users as well as the perceived innovativeness of the designed system are suggested as explanations for the inclination to study users in situ. Thirdly, the research contributes the first explanatory model of the relation between the organizational context and HCD; essentially, innovation-focused organizational strategies greatly affect the cost-effective usage of data on users and the context of use. Last, the findings suggest a change in the nature of HCD in the mobile apps era, at least for universal consumer systems; evidently, the central attention to the explicit understanding of users and the context of use is shifting from an early requirements phase and continual activities during design and development to follow-up activities. That is, the main effort to understand users is made by collecting data on their actual usage of the system, either before or after the system is deployed. The findings inform both researchers and practitioners in interaction design. In particular, the dissertation suggests action research as a useful approach to support interaction designers and to further inform theories of interaction design.
With regard to interaction design practice, the dissertation highlights strategies that encourage a more cost-effective user- and context-informed interaction design process. With the continual embeddedness of computing in people's lives, e.g. with wearable devices and connected car systems, the dissertation provides a timely and valuable view of the evolving human-centered design.
Abstract:
A new Ultra-High Vacuum (UHV) reflectance spectrometer was successfully designed, making use of a Janis Industries ST-400 sample cryostat, an IR Labs bolometer, and a Bruker IFS 66v/S spectrometer. Two noteworthy features are an in situ gold evaporator and an internal reference path, both of which allow the experiment to proceed with a completely undisturbed sample position. As tested, the system was designed to operate between 4.2 K and 325 K over a frequency range of 60-670 cm⁻¹. This frequency range can easily be extended through the addition of applicable detectors. Tests were performed on SrTiO3, a highly ionic incipient ferroelectric insulator with a well-known reflectance. The presence and temperature dependence of the lowest-frequency "soft" mode were measured, as was the presence of the other two infrared modes. During the structural phase transition from cubic to tetragonal perovskite, the splitting of the second phonon mode was also observed. All of the collected data are in good agreement with previous measurements, with a minor discrepancy between the actual and recorded sample temperatures.
Abstract:
Work in the area of molecule-based magnetic and/or conducting materials is presented in two projects. The first project describes the use of 4,4′-bipyridine as a scaffold for the preparation of a new family of tetracarboxamide ligands. New ligands I-III have been prepared and characterized, and the coordination chemistry of these ligands is presented. This project was then extended to exploit 4,4′-bipyridine as a covalent linker between two N3O2 macrocycles. In this respect, three dimeric macrocycles, IV-VI, have been prepared. Substitution of the labile axial ligands of the Co(II) complex IV by [Fe(CN)6]4− afforded the self-assembly of the 1-D polymeric chain {[Co(N3O2)H2O]2Fe(CN)6}n·3H2O, which has been structurally and magnetically characterized. Magnetic studies on the Fe(II) complexes V and VI indicate that they undergo incomplete spin crossover transitions in the solid state. Strategies for the preparation of chiral spin crossover N3O2 macrocycles are discussed, and the synthesis of the novel chiral Fe(II) macrocyclic complex VII is reported. Magnetic susceptibility and Mössbauer studies reveal that this complex undergoes a gradual spin crossover in the solid state with no thermal hysteresis. Variable-temperature X-ray diffraction studies on single crystals of VII reveal interesting structural changes in the coordination geometry of the macrocycle accompanying its SCO transition. The second project reports the synthesis and characterization of a new family of tetrathiafulvalene derivatives, VIII-XII, in which a heterocyclic chelating ligand is appended to a TTF donor via an imine linker. The coordination chemistry of these ligands with M(hfac)2·H2O (M = Co, Ni, Mn, Cu) has been explored, and the structural and magnetic properties of these complexes are described.
Abstract:
This thesis describes two different approaches for the preparation of polynuclear clusters with interesting structural, magnetic, and optical properties: firstly, exploiting p-tert-butylcalix[4]arene (TBC4) macrocycles together with selected Ln(III) ions for the assembly of emissive single molecule magnets, and secondly, the preparation and coordination of a chiral mpmH ligand with selected 3d transition metal ions, working towards the discovery of chiral polynuclear clusters. In Project 1, the coordination chemistry of the TBC4 macrocycle together with Dy(III) and Tb(III) afforded two Ln6[TBC4]2 complexes that have been structurally, magnetically, and optically characterized. X-ray diffraction studies reveal that both complexes contain an octahedral core of Ln6 ions capped by two fully deprotonated TBC4 macrocycles. Although the unit cells of the two complexes are very similar, the coordination geometries of their Ln(III) ions are subtly different. Variable-temperature ac magnetic susceptibility studies reveal that both complexes display single molecule magnet (SMM) behaviour in zero dc field, and the energy barriers and associated pre-exponential factors for each relaxation process have been determined. Low-temperature solid-state photoluminescence studies reveal that both complexes are emissive; however, the f-f transitions within the Dy6 complex were masked by broad emissions from the TBC4 ligand. In contrast, the Tb(III) complex displayed green emission, with a spectrum comprising four sharp bands corresponding to 5D4 → 7FJ transitions (where J = 3, 4, 5 and 6), highlighting that energy transfer from the TBC4 macrocycle to the Tb(III) ion is more effective than to Dy(III). Examples of zero-field Tb(III) SMMs are scarce in the chemical literature, and the Tb6[TBC4]2 complex represents the first example of a Tb(III) dual-property SMM assembled from a p-tert-butylcalix[4]arene macrocycle, with two magnetically derived energy barriers, Ueff, of 79 and 63 K. In Project 2, the coordination of both enantiomers of the chiral ligand α-methyl-2-pyridinemethanol (mpmH) to Ni(II) and Co(II) afforded three polynuclear clusters that have been structurally and magnetically characterized. The first complex, a Ni4 cluster of stoichiometry [Ni4(O2CCMe3)4(mpm)4]·H2O, crystallizes in a distorted cubane topology that is well known in Ni(II) cluster chemistry. The final two Co(II) complexes crystallize as a linear mixed-valence trimer of stoichiometry [Co3(mpm)6]·(ClO4)2 and a mixed-valence Co4 complex, [Co(II)2Co(III)2(NO3)2(μ-mpm)4(ONO2)2], whose structural topology resembles that of a defective double cubane. All three complexes crystallize in chiral space groups, and circular dichroism experiments further confirm that the chirality of the ligand has been transferred to the respective coordination complex. Magnetic susceptibility studies reveal that in all three complexes there are competing ferro- and antiferromagnetic exchange interactions. The [Co(II)2Co(III)2(NO3)2(μ-mpm)4(ONO2)2] complex represents the first example of a chiral mixed-valence Co4 cluster with a defective double cubane topology.
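The quoted energy barriers come from fitting the thermally activated regime of the ac relaxation data; the standard Arrhenius expression used in such fits (general background, not a formula quoted in the thesis) is

$$\tau = \tau_0 \exp\!\left(\frac{U_{\mathrm{eff}}}{k_B T}\right)$$

where τ is the relaxation time extracted from the ac susceptibility maxima, τ0 is the pre-exponential factor, and Ueff is the effective energy barrier (here 79 and 63 K for the two relaxation processes).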
Abstract:
Traditionally, legacy object-oriented applications integrate different functional aspects. These aspects may be scattered throughout the code. There are different types of aspects: • aspects that represent business functionalities; • aspects that address non-functional requirements or other design considerations such as robustness, distribution, security, etc. Generally, the code that represents these aspects cuts across several class hierarchies. Several researchers have studied the problem of modularising these aspects in code: subject-oriented programming, aspect-oriented programming, and view-oriented programming. All these methods propose techniques and tools for designing object-oriented applications as compositions of code fragments that address different aspects. Separating aspects in the code has advantages for reuse and maintenance. It is therefore important to identify and locate these aspects in legacy object-oriented code. We are particularly interested in functional aspects. Assuming that the code implementing a functional aspect, or functionality, exhibits a certain functional cohesion (dependencies between its elements), we propose to identify such functionalities from the code. The idea is to identify, in the absence of aspect-oriented programming paradigms, the techniques that allow the implementation of different functional aspects in object-oriented code. Our approach consists of: • identifying the techniques used by developers to integrate a functionality in the absence of aspect-oriented techniques; • characterising the footprint of these techniques on the code; • and developing tools to identify these footprints. We thus present two approaches for identifying existing functionalities in object-oriented code. The first identifies various design patterns that allow the integration of these functionalities into the code. The second uses formal concept analysis to identify recurring functionalities in the code. We experiment with both approaches on open-source object-oriented systems to identify the different functionalities in the code. The results obtained show the effectiveness of our approaches in identifying the different functionalities in legacy object-oriented code and make it possible to suggest refactoring opportunities.
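As a minimal sketch of the formal-concept-analysis idea behind the second approach: a formal context maps objects (classes) to attributes (the functionalities they touch), and each formal concept groups the classes that share a maximal set of attributes. The class names and features below are invented for illustration, not drawn from the systems studied:

```python
from itertools import combinations

# Toy formal context: classes x functional features (all hypothetical).
context = {
    "OrderService":  {"logging", "persistence"},
    "UserService":   {"logging", "validation"},
    "ReportBuilder": {"logging", "persistence", "validation"},
}
objects = sorted(context)
all_attrs = frozenset().union(*[frozenset(v) for v in context.values()])

def intent(objs):
    """Attributes shared by every object in objs (all attributes if empty)."""
    result = all_attrs
    for o in objs:
        result &= frozenset(context[o])
    return result

def extent(attrs):
    """Objects possessing every attribute in attrs."""
    return frozenset(o for o in objects if attrs <= frozenset(context[o]))

# Naive concept enumeration: close every subset of objects.
concepts = {(extent(intent(objs)), intent(objs))
            for r in range(len(objects) + 1)
            for objs in combinations(objects, r)}

for ext, att in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[0]))):
    print(sorted(ext), "<->", sorted(att))
```

Each printed concept is a candidate recurring functionality: the classes in the extent all participate in the features of the intent, which is exactly the cohesion signal the approach looks for.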
Abstract:
Changes are continuously made to the source code of software systems to address client needs and to correct faults. Continuous change can lead to code and design defects. Design defects are poor solutions to recurring design or implementation problems, generally in object-oriented development. During comprehension and change activities, and because of time-to-market pressure, lack of understanding, and their level of experience, developers cannot always follow design standards and coding techniques such as design patterns. Consequently, they introduce design defects into their systems. In the literature, several authors have argued that design defects make object-oriented systems harder to understand, more fault-prone, and harder to change than systems without them. Yet only a few of these authors have conducted empirical studies of the impact of design defects on comprehension, and none has studied their impact on the effort developers spend correcting faults. In this thesis, we make three main contributions. The first contribution is an empirical study providing evidence of the impact of design defects on comprehension and change. We design and conduct two experiments with 59 subjects to evaluate the impact of the combination of two occurrences of Blob or two occurrences of spaghetti code on the performance of developers carrying out comprehension and change tasks. We measure developer performance using: (1) the NASA task load index for their effort, (2) the time spent completing their tasks, and (3) the percentage of correct answers. The results of the two experiments show that two occurrences of Blob or spaghetti code are a significant obstacle to developer performance during comprehension and change tasks. These results justify previous research on the specification and detection of design defects. Software development teams should warn developers against high numbers of design-defect occurrences and recommend refactorings at each step of the development process to remove them when possible. In the second contribution, we study the relation between design defects and faults, specifically the impact of the presence of design defects on the effort required to correct faults. We measure the fault-correction effort using three indicators: (1) the duration of the correction period, (2) the number of fields and methods touched by the fault correction, and (3) the entropy of fault corrections in the source code. We conduct an empirical study with 12 design defects detected in 54 releases of four systems: ArgoUML, Eclipse, Mylyn, and Rhino. Our results show that the correction period is longer for faults involving classes with design defects. Moreover, correcting faults in classes with design defects changes more files, fields, and methods.
We also observed that, after a fault is corrected, the number of design-defect occurrences in the classes involved in the correction decreases. Understanding the impact of design defects on the effort developers spend correcting faults is important to help development teams better evaluate and predict the impact of their design decisions, and thus channel their efforts towards improving the quality of their systems. Development teams should monitor and remove design defects from their systems, because they are likely to increase change effort. The third contribution concerns the detection of design defects. During maintenance activities, it is important to have a tool capable of detecting design defects incrementally and iteratively. Such an incremental and iterative detection process could reduce costs, effort, and resources by allowing practitioners to identify and take into account design-defect occurrences as they find them during comprehension and change. Researchers have proposed approaches to detect design-defect occurrences, but these approaches currently have four limitations: (1) they require extensive knowledge of design defects, (2) they have limited precision and recall, (3) they are not iterative and incremental, and (4) they cannot be applied to subsets of systems. To overcome these limitations, we introduce SMURF, a new approach to detect design defects based on a machine learning technique, support vector machines, that takes practitioners' feedback into account. Through an empirical study on three systems and four design defects, we show that the precision and recall of SMURF are higher than those of DETEX and BDTEX when detecting design-defect occurrences. We also show that SMURF can be applied in both intra-system and inter-system configurations. Finally, we show that SMURF's precision and recall improve when practitioners' feedback is taken into account.
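SMURF's core classification step can be illustrated with a minimal SVM sketch; scikit-learn stands in here for whatever implementation the thesis used, and the metric vectors, labels, and values are invented for illustration:

```python
import numpy as np
from sklearn.svm import SVC

# Each row holds simple class-level metrics, e.g. [methods, attributes, LCOM].
# Label 1 = Blob occurrence, 0 = clean class. Toy data, not study data.
X_train = np.array([
    [60, 40, 0.90], [55, 35, 0.80], [70, 50, 0.95],   # Blob-like classes
    [8,  4,  0.20], [12, 6,  0.30], [5,  3,  0.10],   # ordinary classes
])
y_train = np.array([1, 1, 1, 0, 0, 0])

clf = SVC(kernel="rbf", probability=True)
clf.fit(X_train, y_train)

# Practitioner feedback (the iterative, incremental aspect): a corrected
# label is appended to the training data and the model is refit.
X_fb = np.array([[45, 30, 0.70]])
y_fb = np.array([1])            # practitioner confirms this class is a Blob
clf.fit(np.vstack([X_train, X_fb]), np.append(y_train, y_fb))

candidate = np.array([[65, 45, 0.85]])
print("Blob probability:", clf.predict_proba(candidate)[0][1])
```

The feedback loop is what distinguishes this kind of detector from one-shot approaches: every confirmed or rejected occurrence enlarges the training set, which is why precision and recall improve as practitioners interact with the tool.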
Abstract:
This document is part of the worldwide reflection on the future of cities in the twenty-first century. It questions the practices that contribute to the high-quality development of citizens' living environments. Ideation processes of the workshop and charrette type are retained for their mobilising and consensus-building values, which meet the principles of sustainable development. The problem addressed concerns the adaptation of their operating structure to the local context in which they are applied, and their performance in producing the expected results. A comparative analysis of three case studies reveals that the ideation process takes a distinct form according to the communication arrangements needed to advance the project-planning process and, in parallel, confirms that its performance lies in its capacity to gather all of a project's stakeholders in one place. At the conclusion of our study, we provide a preliminary procedural guide for conducting the implementation of ideation processes locally.
Abstract:
This thesis is entitled "Plasma Polymerised Organic Thin Films: A Study on the Structural, Electrical, and Nonlinear Optical Properties for Possible Applications". Polymers and polymer-based materials find enormous application in the realms of electronics and optoelectronics. They are employed as both active and passive components in various devices. Intense research activity has been going on in this area for the last three decades or so, and many useful contributions have been made quite accidentally. Conducting polymers are one such discovery, and ever since the discovery of conducting polyacetylene, a new branch of science has emerged in the form of synthetic metals. Conducting polymers are useful materials for many applications such as polymer displays, high-density data storage, polymer FETs, polymer LEDs, photovoltaic devices, and electrochemical cells. With the emergence of molecular electronics and its potential for useful applications, organic thin films are receiving unusual attention from scientists and engineers alike. This is evident from the vast literature pertaining to this field appearing in various journals. Recently, computer-aided design of organic molecules has added further impetus to the ongoing research activities in this area. Polymers, especially conducting polymers, can be prepared both in bulk and in thin-film form. However, many applications necessitate that they be grown in thin-film form, either free-standing or on appropriate substrates. Their bulk counterparts can be prepared by various polymerisation techniques, such as chemical routes and electrochemical means. A survey of the literature reveals that polymers like polyaniline, polypyrrole, and polythiophene have been investigated with a view to studying their structural, electrical, and optical properties. Among the various alternative techniques employed for the preparation of polymer thin films, the method of plasma polymerisation deserves special attention in this context. Plasma polymerisation is an inexpensive method and often requires very little infrastructure. The method can employ ac, rf, dc, microwave, or pulsed sources, which produce pinhole-free homogeneous films on appropriate substrates under controlled conditions. In a conventional plasma polymerisation set-up, the monomer is fed into an evacuated chamber and an ac/rf/dc/microwave/pulsed discharge is created, which dissociates the monomer species, leading to the formation of polymer thin films. However, it has been found that the structure, and hence the properties, of plasma-polymerised thin films are quite different from those of their counterparts produced by other thin-film preparation techniques such as electrochemical deposition or spin coating. The properties of these thin films can be tuned only if the interrelationship between the structure and the other properties is understood from a fundamental point of view. Very often, therefore, a thorough evaluation of the various properties is a prerequisite for tailoring the properties of the thin films for applications. It has been found that conjugation is a necessary condition for enhancing the conductivity of polymer thin films. The rf technique of plasma polymerisation is an excellent tool for inducing conjugation, and this modifies the electrical properties too. Both oxidative and reductive doping can be employed to modify the electrical properties of the polymer thin films for various applications.
This is where organic thin films based on polymers score over inorganic thin films: large-area devices can be fabricated with organic semiconductors, which is difficult to achieve with inorganic materials. For such applications, a variety of polymers have been synthesised, such as polyaniline, polythiophene, and polypyrrole, and newer polymers are added to this family every now and then. There are many virgin areas into which plasma polymers are yet to make a foray, namely low-k dielectrics and potential nonlinear optical materials such as optical limiters. There are also many materials that have not yet been prepared by the method of plasma polymerisation; among those not yet dealt with are phenyl hydrazine and tea tree oil. The advantage of employing organic extracts like tea tree oil monomers as precursors for making plasma polymers is that value can be added to their existing uses, and the possibility exists of converting them into electronic-grade materials, especially semiconductors and optically active materials for photonic applications. One of the major motivations of this study is to synthesise plasma polymer thin films based on aniline, phenyl hydrazine, pyrrole, tea tree oil, and eucalyptus oil by employing both rf and ac plasma polymerisation techniques. This will be carried out with the objective of growing thin films on various substrates such as glass, quartz, and indium tin oxide (ITO) coated glass. Various properties, namely structural, electrical, dielectric permittivity, and nonlinear optical properties, are to be evaluated to establish the relationship between the structure and the other properties. Special emphasis will be laid on evaluating optical parameters such as the refractive index (n), the extinction coefficient (k), the real and imaginary components of the dielectric constant, and the optical transition energies of the polymer thin films from spectroscopic ellipsometric studies. Apart from evaluating these physical constants, it is also possible to predict from ellipsometric investigations whether a material exhibits nonlinear optical properties. Further studies using the open-aperture z-scan technique, in order to evaluate the nonlinear optical properties of a few selected samples that are potential nonlinear optical materials, are another objective of the present study. It will be a further endeavour to offer an appropriate explanation for the nonlinear optical properties displayed by these films. Doping of plasma polymers is found to modify both the electrical conductivity and the optical properties. Iodine is found to modify the properties of polymer thin films; however, in situ iodine doping is tricky, and the film often loses its stability because of the escape of iodine. An appropriate in situ technique will be developed to dope iodine into the plasma-polymerised thin films. Doping of polymer thin films with iodine results in improved and modified optical and electrical properties. However, tools like FTIR and UV-Vis-NIR spectroscopy are required to elucidate the structural and optical modifications imparted to the polymer films. This will be attempted here to establish the role of iodine in the modification of the properties exhibited by the films.
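The link between the ellipsometric optical constants and the dielectric components mentioned above is the standard relation (general background, not specific to this thesis):

$$\tilde{\varepsilon} = (n + ik)^2 \quad\Rightarrow\quad \varepsilon_1 = n^2 - k^2, \qquad \varepsilon_2 = 2nk$$

so the real and imaginary parts of the dielectric constant follow directly from the measured refractive index n and extinction coefficient k.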
Abstract:
The Internet today has become a vital part of day-to-day life, owing to the revolutionary changes it has brought about in various fields. Dependence on the Internet as an information highway and knowledge bank is increasing exponentially, so that going back is beyond imagination. Critical information is also transferred through the Internet. This widespread use of the Internet, coupled with the tremendous growth in e-commerce and m-commerce, has created a vital need for information security. The Internet has also become an active field for crackers and intruders. The whole development in this area can become null and void if fool-proof security of the data, without any chance of adulteration, is not ensured. It is hence a challenge for the professional community to develop systems that ensure the security of data sent through the Internet. Stream ciphers, hash functions, and message authentication codes play vital roles in providing security services like confidentiality, integrity, and authentication of data sent through the Internet. There are several popular and dependable techniques that have been in wide use for quite a long time, and this long-term exposure makes them vulnerable to successful or near-successful attacks. Hence it is the need of the hour to develop new algorithms with better security. Studies were therefore conducted on the various types of algorithms being used in this area, with a focus on identifying the properties that impart security. Using the insight derived from these studies, new algorithms were designed. The performance of these algorithms was then studied, followed by the modifications necessary to yield an improved system consisting of a new stream cipher algorithm, MAJE4, a new hash code, JERIM-320, and a new message authentication code, MACJER-320. Detailed analysis and comparison with existing popular schemes were also carried out to establish their security levels. The Secure Socket Layer (SSL) / Transport Layer Security (TLS) protocol is one of the most widely used security protocols on the Internet. The cryptographic algorithms RC4 and HMAC have been used to achieve security services like confidentiality and authentication in SSL/TLS, but recent attacks on RC4 and HMAC have raised questions about the reliability of these algorithms. Hence MAJE4 and MACJER-320 have been proposed as substitutes for them. Detailed studies on the performance of these new algorithms were carried out, and it has been observed that they are dependable alternatives.
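The internals of MAJE4 are not given in the abstract, so the following is only a generic illustration of the stream-cipher paradigm it belongs to: a keystream generator whose output is XORed with the plaintext, with decryption being the same operation. The hash-counter keystream here is an ad-hoc stand-in chosen purely to make the example runnable; it is neither MAJE4 nor a recommendation:

```python
import hashlib

def keystream(key: bytes, nbytes: int) -> bytes:
    """Toy keystream: SHA-256 of key||counter blocks. Illustration only."""
    out, counter = b"", 0
    while len(out) < nbytes:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:nbytes]

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Stream-cipher encryption/decryption: XOR data with the keystream."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

msg = b"confidential payload"
ct = xor_stream(b"shared secret key", msg)
assert xor_stream(b"shared secret key", ct) == msg  # same operation decrypts
```

The security of any such cipher rests entirely on the unpredictability of the keystream, which is the property an analysis of MAJE4 would need to establish.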
Abstract:
The study envisaged herein contains numerical investigations on a Perforated Plate (PP) as well as numerical and experimental investigations on a Perforated Plate with Lining (PPL), which has a variety of applications in underwater engineering, especially related to defence applications. The finite element method has been adopted as the tool for the analysis of the PP and PPL. The commercial software ANSYS has been used for static and free vibration response evaluation, whereas ANSYS LS-DYNA has been used for shock analysis. The SHELL63, SHELL93, SOLID45, SOLSH190, BEAM188, and FLUID30 finite elements available in the ANSYS library, as well as SHELL193 and SOLID194 available in the ANSYS LS-DYNA library, have been made use of. A unit cell of the PP and PPL, a miniature of the original plate with 16 perforations, has been used. Based upon the convergence characteristics, the utility of the SHELL63 element for the analysis of the PP and PPL, and the required mesh density, are brought out. The effects of perforation, the geometry and orientation of the perforations, boundary conditions, and the lining plate are investigated for various configurations. Stress concentration and the deflection factor are also studied. Based on these investigations, stadium-geometry perforation with horizontal orientation is recommended for further analysis. Linear and nonlinear static analyses of the PP and PPL subjected to unit normal pressure have been carried out, besides the free vibration analysis. Shock analysis has also been carried out on these structural components. The analytical model measures 0.9 m × 0.9 m, with stiffeners at 0.3 m intervals. The influence of the finite element, boundary conditions, and lining plate on the linear static response has been estimated and presented. A comparison of the behaviour of the PP and PPL in the nonlinear strain regime has been made using geometric nonlinear analysis. Free vibration analysis of the PP and PPL has been carried out in the 'in vacuum' condition and in the water-backed condition, and the influence of the water-backed condition and the effect of perforation on the natural frequency have been investigated. Based upon the studies on the vibration characteristics of the NPP, PP, and PPL in the water-backed and 'in vacuum' conditions, the reduction in the natural frequency of the plate in the immersed condition has been clearly brought out, highlighting the necessity of including the effect of the water medium in the analysis of water-backed underwater structures. Shock analyses of the PP and PPL for three explosives, viz. PEK, TNT, and C4, have been carried out, and the deflection and stresses on the plate as well as the free-field pressure have been estimated using ANSYS LS-DYNA. The effects of the perforations and of the lining plate have been predicted. Experimental investigations on the measurement of free-field pressure using the PPL have been conducted in a shock tank. The free-field pressure has been measured and validated against the finite element analysis results. Besides, an experiment has been carried out on the PPL for comparison with the static deflection predicted by the finite element analysis. The distribution of the free-field pressure, the estimation of the differential pressure from experimentation, and the provision for treating the differential pressure as the resistance, as part of the design load for the PPL, have been brought out.
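For context, the free-field pressure from an underwater explosion is conventionally characterised by the similitude relations of Cole (general background, not a formula quoted in the thesis):

$$P(t) = P_m\, e^{-t/\theta}, \qquad P_m = K \left(\frac{W^{1/3}}{R}\right)^{\alpha}$$

where W is the charge mass, R is the stand-off distance, θ is the decay constant, and K and α are empirical constants specific to the explosive; loading of this form is what the LS-DYNA shock analyses and shock-tank measurements characterise for the three explosives considered.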
Abstract:
Gabion-faced retaining walls are essentially semi-rigid structures that can generally accommodate large lateral and vertical movements without excessive structural distress. Because of this inherent feature, they offer technical and economic advantages over conventional concrete gravity retaining walls. Although they can be constructed either as gravity type or reinforced soil type, this work mainly deals with gabion-faced reinforced earth walls, as they are more suitable for greater heights. The main focus of the present investigation was the development of a viable plane-strain, two-dimensional, nonlinear finite element analysis code which can predict the stress-strain behaviour of gabion-faced retaining walls, both gravity type and reinforced soil type. The gabion facing, backfill soil, in-situ soil, and foundation soil were modelled using 2D four-noded isoparametric quadrilateral elements. The confinement provided by the gabion boxes was converted into an induced apparent cohesion as per the membrane correction theory proposed by Henkel and Gilbert (1952). The mesh reinforcement was modelled using 2D two-noded linear truss elements. The interactions between the soil and the mesh reinforcement, as well as between the facing and the backfill, were modelled using 2D four-noded zero-thickness line interface elements (Desai et al., 1974), incorporating the nonlinear hyperbolic formulation for the tangential shear stiffness. The well-known hyperbolic formulation by Duncan and Chang (1970) was used for modelling the nonlinearity of the soil matrix. The failure of the soil matrix, the gabion facing, and the interfaces was modelled using the Mohr-Coulomb failure criterion. The construction stages were also modelled. Experimental investigations were conducted on small-scale model walls (both in the field and in the laboratory) to suggest an alternative fill material for gabion-faced retaining walls; the same tests were also used to validate the finite element programme developed as part of the study. The studies were conducted using different types of gabion fill materials. The variation was achieved by placing coarse aggregate and quarry dust in different proportions as layers one above the other, or by mixing them together in the required proportions. The deformation of the wall face was measured, and the behaviour of the walls with varying fill materials was analysed. It was seen that 25% of the fill material in the gabions can be replaced by a soft material (any locally available material) without greatly affecting the deformation behaviour. In circumstances where some deformation can be allowed, even up to 50% replacement with soft material is possible. The developed finite element code was validated using the experimental test results and other published results. Encouraged by the close comparison between theory and experiment, an extensive and systematic parametric study was conducted in order to gain a closer understanding of the behaviour of the system. Geometric as well as material parameters were varied to understand their effect on the behaviour of the walls. The final phase of the study consisted of developing a simplified method for the design of gabion-faced retaining walls. The design is based on the limit state method, considering both stability and deformation criteria. The design parameters were selected for the system and converted to dimensionless parameters.
Thus the procedure for fixing the dimensions of the wall was simplified by eliminating the conventional trial-and-error procedure. Handy design charts were developed which would serve as a hands-on tool for design engineers on site. Economic studies were also conducted to prove the cost-effectiveness of the structures with respect to conventional RCC gravity walls, and cost prediction models and cost breakdown ratios were proposed. The studies as a whole are expected to contribute substantially to understanding the actual behaviour of gabion-faced retaining wall systems, with particular reference to lateral deformations.
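The Duncan-Chang hyperbolic model referred to above expresses the tangent modulus of the soil as (standard form of the 1970 formulation):

$$E_t = \left[1 - \frac{R_f\,(1 - \sin\phi)(\sigma_1 - \sigma_3)}{2c\cos\phi + 2\sigma_3\sin\phi}\right]^2 K\, p_a \left(\frac{\sigma_3}{p_a}\right)^n$$

where c and φ are the soil cohesion and friction angle, σ1 and σ3 the major and minor principal stresses, Rf the failure ratio, pa atmospheric pressure, and K and n the modulus number and exponent; this stress-dependent stiffness is what the finite element code updates as the staged construction proceeds.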