933 results for Low Autocorrelation Binary Sequence Problem


Relevance:

30.00%

Publisher:

Abstract:

The familial acute myeloid leukemia related factor gene (FAMLF) was previously identified from a familial AML subtractive cDNA library and shown to undergo alternative splicing. This study used real-time quantitative PCR to investigate the expression of the FAMLF alternative-splicing transcript consensus sequence (FAMLF-CS) in peripheral blood mononuclear cells (PBMCs) from 119 patients with de novo acute leukemia (AL) and 104 healthy controls, as well as in CD34+ cells from 12 AL patients and 10 healthy donors. A 429-bp fragment from a novel splicing variant of FAMLF was obtained, and a 363-bp consensus sequence was targeted to quantify total FAMLF expression. Kruskal-Wallis, Nemenyi, Spearman's correlation, and Mann-Whitney U-tests were used to analyze the data. FAMLF-CS expression in PBMCs from AL patients and in CD34+ cells from AL patients and controls was significantly higher than in control PBMCs (P<0.0001). Moreover, FAMLF-CS expression in PBMCs from the AML group was positively correlated with red blood cell count (rs=0.317, P=0.006), hemoglobin levels (rs=0.210, P=0.049), and percentage of peripheral blood blasts (rs=0.256, P=0.027), but inversely correlated with hemoglobin levels in the control group (rs=–0.391, P<0.0001). AML patients with high CD34+ expression showed significantly higher FAMLF-CS expression than those with low CD34+ expression (P=0.041). Our results showed that FAMLF is highly expressed in both normal and malignant immature hematopoietic cells, but that expression is lower in normal mature PBMCs.


Low-level lasers are used at low power densities and doses according to clinical protocols supplied with laser devices or based on professional practice. Although use of these lasers is increasing in many countries, the molecular mechanisms involved in the effects of low-level lasers, mainly on DNA, are controversial. In this study, we evaluated the effects of low-level red lasers on survival, filamentation, and morphology of Escherichia coli cells that were exposed to ultraviolet C (UVC) radiation. Exponential and stationary wild-type and uvrA-deficient E. coli cells were exposed to a low-level red laser and subsequently to UVC radiation. Bacterial survival was evaluated to determine the laser protection factor (the ratio between the number of viable cells after exposure to the red laser and UVC and the number of viable cells after exposure to UVC alone). Bacterial filaments were counted to obtain the percentage of filamentation. Area-perimeter ratios were calculated for evaluation of cellular morphology. Experiments were carried out in duplicate and the results are reported as the means of three independent assays. Pre-exposure to a red laser protected wild-type and uvrA-deficient E. coli cells against the lethal effect of UVC radiation, and increased the percentage of filamentation and the area-perimeter ratio, depending on UVC fluence and the physiological conditions of the cells. Therapeutic, low-level red laser radiation can induce DNA lesions at a sub-lethal level. The consequences for cells and tissues should be considered when clinical protocols based on this laser are carried out.


University of Turku, Faculty of Medicine, Department of Cardiology and Cardiovascular Medicine, Doctoral Programme of Clinical Investigation, Heart Center, Turku University Hospital, Turku, Finland; Division of Internal Medicine, Department of Cardiology, Seinäjoki Central Hospital, Seinäjoki, Finland; Heart Center, Satakunta Central Hospital, Pori, Finland. Annales Universitatis Turkuensis, Painosalama Oy, Turku, Finland, 2015.

Antithrombotic therapy during and after coronary procedures always entails the challenge of striking a balance between bleeding and thrombotic complications. It has generally been recommended that patients on long-term warfarin therapy discontinue warfarin a few days prior to elective coronary angiography or intervention to prevent bleeding complications. Bridging therapy with heparin is recommended for patients at an increased risk of thromboembolism who require the interruption of anticoagulation for elective surgery or an invasive procedure. In study I, consecutive patients on warfarin therapy referred for diagnostic coronary angiography were compared to control patients with a similar disease presentation without warfarin. The strategy of performing coronary angiography during uninterrupted therapeutic warfarin anticoagulation appeared to be a relatively safe alternative to bridging therapy, provided the international normalized ratio was not at a supratherapeutic level. In-stent restenosis remains an important cause of failure of long-term success after a percutaneous coronary intervention (PCI). Drug-eluting stents (DES) reduce the problem of restenosis inherent to bare metal stents (BMS). However, delayed arterial healing may extend the risk of stent thrombosis (ST) far beyond 30 days after DES implantation. Early discontinuation of antiplatelet therapy has been the most important factor predisposing to ST.
In study II, patients on long-term oral anticoagulation (OAC) underwent DES or BMS stenting with a median follow-up of 3.5 years. The selective use of DESs with a short triple therapy appeared to be safe in OAC patients, since late STs were rare even without long clopidogrel treatment. Major bleeding and cardiac events were common in this patient group irrespective of stent type. To help predict the bleeding risk in patients on OAC, several different bleeding risk scores have been developed. Risk scoring systems have also been used in the setting of patients undergoing a PCI. In study III, the predictive value of an outpatient bleeding risk index (OBRI) for identifying patients at high risk of bleeding was analysed. The bleeding risk did not seem to modify periprocedural or long-term treatment choices in patients on OAC after a percutaneous coronary intervention. Patients with a high OBRI often had major bleeding episodes, and the OBRI may be suitable for risk evaluation in this patient group. Optical coherence tomography (OCT) is a novel technology for intravascular imaging of coronary arteries. OCT is a light-based imaging modality with a tissue axial resolution of 12–18 µm that can visualize plaques in the vessel, possible dissections and thrombi, as well as stent strut apposition and coverage, and can measure the vessel lumen and lesions. In study IV, 30 days after titanium-nitride-oxide (TITANOX)-coated stent implantation, binary stent strut coverage was satisfactory and the prevalence of malapposed struts was low as evaluated by OCT. Long-term clinical events in patients treated with TITANOX-coated bioactive stents (BAS) and paclitaxel-eluting stents (PES) in routine clinical practice were examined in study V. At the 3-year follow-up, BAS resulted in a better long-term outcome than PES, with an infrequent need for target vessel revascularization.
Keywords: anticoagulation, restenosis, thrombosis, bleeding, optical coherence tomography, titanium


Increased awareness and evolving consumer habits have set more demanding standards for the quality and safety control of food products. The production of foodstuffs that fulfill these standards can be hampered by various low-molecular-weight contaminants, such as residues of antibiotics used in animals, or mycotoxins. The extremely small size of these compounds has hindered the development of analytical methods suitable for routine use, and the methods currently in use require expensive instrumentation and qualified personnel to operate them. There is a need for new, cost-efficient and simple assay concepts that can be used for field testing and are capable of processing large sample quantities rapidly. Immunoassays have been considered the gold standard for such rapid on-site screening methods. The introduction of directed antibody engineering and in vitro display technologies has facilitated the development of novel antibody-based methods for the detection of low-molecular-weight food contaminants. The primary aim of this study was to generate and engineer antibodies against low-molecular-weight compounds found in various foodstuffs. The three antigen groups selected as targets of antibody development cause food safety and quality defects in a wide range of products: 1) fluoroquinolones, a family of synthetic broad-spectrum antibacterial drugs used to treat a wide range of human and animal infections; 2) deoxynivalenol, a type B trichothecene mycotoxin and a widely recognized problem for crops and animal feeds globally; and 3) skatole, or 3-methylindole, one of the two compounds responsible for boar taint, found in the meat of monogastric animals. This study describes the generation and engineering of antibodies with versatile binding properties against low-molecular-weight food contaminants, and the subsequent development of immunoassays for the detection of the respective compounds.


Apoptotic beta cell death is a major underlying cause of type I diabetes and, to a lesser extent, of type II diabetes. Recently, the MST1 kinase was identified as a key apoptotic agent in the diabetic condition. In this study, I examined MST1 and the closely related kinases MST2, MST3 and MST4, aiming to tackle diabetes by exploring ways to selectively block MST1 kinase activity. The first investigation evaluated the possibility of selectively blocking the ATP-binding site of MST1 kinase, which is essential for the enzyme's activity. Structure and sequence analyses of this site, however, revealed near-absolute conservation among the MSTs and very few changes relative to other kinases. The observed residue variations also displayed similar physicochemical properties, making selective inhibition of the enzyme difficult. Second, possibilities for allosteric inhibition of the enzyme were evaluated. Analysis of the recognized allosteric site posed the same problem, as the MSTs share almost all of the same residues. The third analysis was made on the SARAH domain, which is required for the dimerization and activation of the MST1 and MST2 kinases. MST3 and MST4 lack this domain, so selectivity against these two kinases can be achieved. Other proteins with SARAH domains, such as the RASSF proteins, were also examined. Their interaction with the MST1 SARAH domain was evaluated in order to mimic their binding pattern and design a peptide inhibitor that interferes with MST1 SARAH dimerization. In molecular simulations, the RASSF5 SARAH domain was shown to interact strongly with the MST1 SARAH domain and could potentially prevent MST1 SARAH dimerization. On this basis, the peptide inhibitor was suggested to be based on the sequence of the RASSF5 SARAH domain. Since the MST2 kinase also interacts with the RASSF5 SARAH domain, absolute selectivity might not be achieved.


Three grade-three mathematics textbooks were selected arbitrarily (every other one) from the six currently used in the schools of Ontario. These textbooks were examined through content analysis to determine the extent (i.e., the frequency of occurrence) to which problem-solving strategies appear in the problems and exercises of grade-three mathematics textbooks, and how well they carry through the Ministry's educational goals set out in The Formative Years. Based on Polya's heuristic model, a checklist was developed by the researcher. The checklist had two main categories, textbook problems and process problems, with a finer classification according to the difficulty level of a textbook problem, and six commonly used problem-solving strategies for the analysis of a process problem. Topics to be analyzed were selected from the subject guideline The Formative Years, and the same topics were selected from each textbook. Frequencies of analyzed problems and exercises were compiled and tabulated textbook by textbook and topic by topic. In making comparisons, simple frequency counts and percentages were used, in the absence of any known criteria for judging high or low frequency. Each textbook was coded by three coders trained to use the checklist. The results showed that while there were large numbers of exercises in each textbook, not very many were framed as problems according to Polya's model, and that process problems form a small fraction of the analyzed problems and exercises. No pattern was observed in the systematic placement of problems in the textbooks.


Twenty-eight grade-four students were categorized as either high- or low-anxious subjects according to Gillis' Child Anxiety Scale (a self-report general measure). In determining impulsivity in their response tendencies via Kagan's Matching Familiar Figures Test, no significant difference was found between the two groups. Training procedures (verbal labelling plus rehearsal strategies) were introduced to modify their learning behaviour on a visual sequential memory task. As a result, significantly more reflective memory recall behaviour was noted in both groups. Furthermore, transfer of the reflective quality of this learning strategy produced significantly less impulsive response behaviour for high- and low-anxious subjects with respect to response latency, and for low-anxious subjects with respect to response accuracy.


The hub location problem is an NP-hard problem that frequently arises in the design of transportation and distribution systems, postal delivery networks, and airline passenger flow. This work focuses on the Single Allocation Hub Location Problem (SAHLP). Genetic Algorithms (GAs) for the capacitated and uncapacitated variants of the SAHLP, based on new chromosome representations and crossover operators, are explored. The GAs are tested on two well-known sets of real-world problems with up to 200 nodes. The obtained results are very promising: for most of the test problems the GA obtains improved or best-known solutions, and the computational time remains low. The proposed GAs can easily be extended to other variants of location problems arising in network design planning in transportation systems.
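To illustrate how a GA can encode single-allocation decisions, the sketch below uses an allocation-array chromosome (gene i holds the hub serving node i; a node allocated to itself is a hub) on a small random uncapacitated instance. The representation, operators, discount factor, and instance are illustrative assumptions only, not the chromosome encodings or crossover operators proposed in this work:

```python
import random

random.seed(42)
n = 12                                   # nodes (toy instance; the real benchmarks go to 200)
alpha = 0.75                             # inter-hub transfer discount factor (assumed)
coords = [(random.random(), random.random()) for _ in range(n)]
dist = [[((coords[i][0] - coords[j][0]) ** 2
          + (coords[i][1] - coords[j][1]) ** 2) ** 0.5
         for j in range(n)] for i in range(n)]
flow = [[random.randint(1, 10) for _ in range(n)] for _ in range(n)]
hub_cost = [random.uniform(5, 10) for _ in range(n)]   # fixed cost of opening a hub

def repair(ch):
    # feasibility: every node used as a hub must allocate to itself
    for h in set(ch):
        ch[h] = h
    return ch

def fitness(ch):
    # hub-opening costs plus collection, discounted transfer, distribution costs
    cost = sum(hub_cost[h] for h in set(ch))
    for i in range(n):
        for j in range(n):
            cost += flow[i][j] * (dist[i][ch[i]]
                                  + alpha * dist[ch[i]][ch[j]]
                                  + dist[ch[j]][j])
    return cost

def crossover(a, b):
    cut = random.randrange(1, n)         # one-point crossover on the allocation array
    return repair(a[:cut] + b[cut:])

def mutate(ch):
    ch = ch[:]                           # reassign one node to a random hub candidate
    ch[random.randrange(n)] = random.randrange(n)
    return repair(ch)

pop = [repair([random.randrange(n) for _ in range(n)]) for _ in range(30)]
for _ in range(100):
    pop.sort(key=fitness)
    elite = pop[:10]                     # elitist survivor selection
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(20)]
best = min(pop, key=fitness)
```

The repair step keeps chromosomes feasible after crossover and mutation, which is a common design choice for allocation-array encodings.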


Chronic low back pain (CLBP) is a complex health problem with psychological manifestations that are not fully understood. Using interpretive phenomenological analysis, 11 semi-structured interviews were conducted to help understand the meaning of the lived experience of CLBP, focusing on the psychological response to pain and the role of depression, catastrophizing, fear-avoidance behavior, anxiety and somatization. Participants characterized CLBP as persistent tolerable low back pain (TLBP) interrupted by periods of intolerable low back pain (ILBP). ILBP contributed to recurring bouts of helplessness, depression, frustration with the medical system and increased fear based on the perceived consequences of anticipated recurrences, all of which were mediated by the uncertainty of such pain. During times of TLBP, all participants maintained a permanent pain consciousness, as they felt susceptible to a recurrence. As CLBP progressed, participants felt they were living with a weakness, became isolated from those without CLBP and integrated pain into their self-concept.


There is a close link between the three-dimensional structure of RNA and its cellular function. Structural studies of RNA molecules such as riboswitches are therefore essential to better characterize their mechanisms of action. NMR spectroscopy is a technique of choice for obtaining structural information on RNA molecules. This technique is, however, limited by two major difficulties. First, preparing the quantity of RNA required for this type of study is a long and arduous process. To solve this problem, our laboratory developed a rapid affinity-purification technique for RNA using an ARiBo tag. The second difficulty arises from the extensive overlap of signals in the NMR spectra of RNA molecules. This overlap is proportional to the size of the molecule under study, making the determination of structures of RNAs larger than 15 kDa extremely complex. The emerging solution to this problem is specific isotopic labeling of RNA. However, the protocols developed so far are very costly, require several weeks of laboratory work, and give low yields. This thesis presents a new strategy for the specific isotopic labeling of functional RNAs based on ARiBo affinity purification. The approach comprises the separation and purification of labeled nucleotides, enzymatic ligation on a solid support, and sequence-independent affinity purification of RNA. The new strategy enables rapid and efficient specific isotopic labeling of functional RNAs and should facilitate the determination of large RNA structures by NMR spectroscopy.


The work done in this master's thesis presents a new system for the recognition of human actions from a video sequence. The system takes as input a video sequence captured by a static camera. A binary segmentation of the video sequence is first achieved by a learning algorithm in order to detect and extract the different people from the background. To recognize an action, the system then exploits a set of prototypes generated by an MDS-based dimensionality reduction technique from two different viewpoints in the video sequence. This dimensionality reduction technique, applied to the two viewpoints, allows us to model each human action in the training base with a set of prototypes (assumed similar for each class) represented in a low-dimensional non-linear space. The prototypes extracted for the two viewpoints are fed to a K-NN classifier, which identifies the human action that takes place in the video sequence. Experiments conducted on the Weizmann dataset of human actions provide interesting results compared with other state-of-the-art (and often more complicated) methods. These experiments show, first, the sensitivity of our model to each viewpoint and its effectiveness in recognizing the different actions, with a variable but satisfactory recognition rate; they also show the results obtained by fusing the two viewpoints, which allows us to achieve a high recognition rate.
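The prototype pipeline can be sketched in miniature: classical MDS computed from a pairwise distance matrix, followed by nearest-neighbour classification in the embedded space. The toy "action" features, embedding dimension, and leave-one-out 1-NN below are stand-ins for illustration only; the thesis's actual silhouette features and two-viewpoint fusion are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for per-frame action features: each hypothetical class is a
# cloud of 60-dimensional vectors drawn around a class-specific template.
templates = rng.random((3, 60))                       # 3 assumed action classes
X = np.vstack([t + 0.05 * rng.standard_normal((20, 60)) for t in templates])
y = np.repeat(np.arange(3), 20)

# Classical MDS: double-center the squared distance matrix, then keep the
# leading eigenvectors (scaled by sqrt of eigenvalues) as low-dim prototypes.
D2 = np.square(np.linalg.norm(X[:, None] - X[None, :], axis=-1))
J = np.eye(len(X)) - np.ones((len(X), len(X))) / len(X)
B = -0.5 * J @ D2 @ J
vals, vecs = np.linalg.eigh(B)                        # ascending eigenvalues
order = np.argsort(vals)[::-1][:2]                    # top 2 -> 2-D embedding
Z = vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

# 1-NN classification in the embedded space (leave-one-out)
def knn_predict(i):
    d = np.linalg.norm(Z - Z[i], axis=1)
    d[i] = np.inf                                     # exclude the query itself
    return y[np.argmin(d)]

acc = np.mean([knn_predict(i) == y[i] for i in range(len(X))])
```

On such well-separated toy clusters the embedded 1-NN accuracy is near perfect; real silhouette data is of course far noisier.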


Supervised learning of large-scale hierarchical networks is currently enjoying tremendous success. Despite this momentum, many researchers still regard unsupervised learning as a key element of Artificial Intelligence, where agents must learn from a potentially limited amount of data. This thesis follows that line of thought and addresses various research topics related to the density-estimation problem through Boltzmann machines (BMs), probabilistic graphical models at the heart of deep learning. Our contributions touch on sampling, partition-function estimation, optimization, and the learning of invariant representations. The thesis begins with a new adaptive sampling algorithm, which automatically adjusts the temperature of the simulated Markov chains so as to maintain a high convergence speed throughout learning. When used in the context of stochastic maximum likelihood (SML) learning, our algorithm yields increased robustness to the choice of learning rate as well as faster convergence. Our results are presented for BMs, but the method is general and applicable to learning any probabilistic model that relies on Markov chain sampling. While the maximum-likelihood gradient can be approximated by sampling, evaluating the log-likelihood requires an estimate of the partition function. In contrast to traditional approaches that treat a given model as a black box, we propose instead to exploit the dynamics of learning by estimating the successive changes in log-partition incurred at each parameter update.

The estimation problem is reformulated as an inference problem similar to Kalman filtering, but on a two-dimensional graph whose dimensions correspond to the time axis and the temperature parameter. On the topic of optimization, we also present an algorithm for efficiently applying the natural gradient to Boltzmann machines with thousands of units. Until now, its adoption had been limited by its high computational cost and memory requirements. Our algorithm, Metric-Free Natural Gradient (MFNG), avoids explicitly computing the Fisher information matrix (and its inverse) by exploiting a linear solver combined with an efficient matrix-vector product. The algorithm is promising: in terms of the number of function evaluations, MFNG converges faster than SML. Its implementation unfortunately remains inefficient in computation time. This work also explores the mechanisms underlying the learning of invariant representations. To this end, we use the family of "spike & slab" restricted Boltzmann machines (ssRBM), which we modify so that it can model binary and sparse distributions. The binary latent variables of the ssRBM can be made invariant to a vector subspace by associating with each of them a vector of continuous latent variables (called "slabs"). This translates into increased invariance at the representation level and a better classification rate when little labeled data is available. We conclude this thesis with an ambitious topic: learning representations that can separate the factors of variation present in the input signal. We propose a solution based on a bilinear ssRBM (with two groups of latent factors) and formulate the problem as one of "pooling" in complementary vector subspaces.
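As background for the SML learning discussed above, here is a minimal sketch of stochastic maximum likelihood (persistent contrastive divergence) for a tiny binary RBM. The toy data, model sizes, learning rate, and fixed (non-adaptive) chain temperature are illustrative assumptions; the thesis's adaptive-tempering and MFNG algorithms are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
n_vis, n_hid, batch = 6, 4, 32
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(p):
    return (rng.random(p.shape) < p).astype(float)

# Toy data: two binary prototypes with 5% bit-flip noise
protos = np.array([[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]], dtype=float)
data = protos[rng.integers(0, 2, 256)]
data = np.abs(data - (rng.random(data.shape) < 0.05))

chains = sample(np.full((batch, n_vis), 0.5))     # persistent chains (SML/PCD)
lr = 0.05
for step in range(2000):
    v_pos = data[rng.integers(0, len(data), batch)]
    h_pos = sigmoid(v_pos @ W + b_h)              # positive-phase statistics
    # negative phase: advance the persistent Markov chains by one Gibbs step
    h_neg = sample(sigmoid(chains @ W + b_h))
    chains = sample(sigmoid(h_neg @ W.T + b_v))
    h_neg_p = sigmoid(chains @ W + b_h)
    # stochastic gradient on the log-likelihood
    W += lr * (v_pos.T @ h_pos - chains.T @ h_neg_p) / batch
    b_v += lr * (v_pos - chains).mean(axis=0)
    b_h += lr * (h_pos - h_neg_p).mean(axis=0)

# mean-field reconstruction of the two prototypes as a sanity check
h = sigmoid(protos @ W + b_h)
recon = sigmoid(h @ W.T + b_v)
```

The single Gibbs step per update is what makes the chains "persistent": they carry over between parameter updates instead of restarting at the data, which is exactly where the temperature-adaptation problem studied in the thesis arises.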


The issues surrounding education policy have changed considerably over recent decades. These changes are linked, among other things, to the growth of accountability, which has become an important feature of curricular and pedagogical reforms. High-stakes policies put enormous pressure on U.S. districts and schools to raise student achievement through systems of consequences (Hall & Ryan, 2011; Loeb & Strunk, 2007). These policies send powerful messages about the importance of certain school subjects to the detriment of others, circumscribing the requirements in terms of skills and knowledge. Language arts and mathematics have become the central measures on which the evaluation and performance ratings of districts and schools rest. Consequently, district administrators and school principals often resort to curricular and pedagogical reforms as a means of raising student achievement in the subjects targeted by these policies. The policies compel school actors to concentrate resources on curricular programs and assessments, professional development, and data-driven decision making (Anagnostopoulos & Ruthledge, 2007; Honig & Hatch, 2004; Spillane, Diamond, et al., 2002; Weitz White & Rosenbaum, 2008). This dissertation examines how high-stakes policies operate day to day in interactions and practices within schools. In particular, we analyze the different policy messages conveyed to school actors about how to make substantial changes to curriculum and teaching.

We broaden the analysis by taking into account the role of district administrators as well as university partners, who also shape the way some aspects of policy messages are transmitted, negotiated and/or debated while others are ignored (Coburn & Woulfin, 2012). Using discourse analysis, we examine the role of language as both constituting and mediating social interactions between school actors and other stakeholders. Such analyses involve an in-depth investigation of a limited number of case studies. The data used in this dissertation were collected in a midwestern U.S. elementary school. This case study is part of a four-year longitudinal study that included eight urban schools between 1999 and 2003 (Distributed Leadership Studies, http://www.distributedleadership.org). The analyzed database includes observations of formal meetings and interviews with district administrators, university partners, the school principal, and teachers. In addition to the introduction and problem statement (chapter 1) and the discussion and conclusion (chapter 5), this dissertation comprises a set of three interrelated articles. In the first article (chapter 2), we review the literature on policy implementation and the complexity of local, national, and international relationships in education systems. To demystify this complexity, we pay particular attention to school actors' sense-making as a key dimension of the reform implementation process.

In the second article (chapter 3), we seek to understand the social processes that shape school actors' strategic responses to district and state policies related to the implementation of a prescribed mathematics curriculum. More specifically, we explore the different situations in which school actors argue about the curricular and pedagogical changes proposed by district administrators and university partners to raise mathematics achievement in a low-performing school. In the third article (chapter 4), we seek to demystify the complexities of improving teaching in a high-stakes policy environment. To do so, we use the interplay between the notions of agency and structure to analyze how conceptions of accountability and ideas stemming from the policy environment and everyday activities play out in interactions between school actors concerning language arts instruction. We explore three specific objectives: 1) how high-stakes policies shape which elements of teaching are reproduced and which are transformed over time; 2) how leaders' understanding of accountability shapes which aspects of policy messages school actors notice through interactions and conversations; and 3) how school actors attend to certain messages at the expense of others. In the final chapter of this dissertation, we discuss the strengths and limits of secondary analysis of qualitative data, the implications of the findings for the field of policy implementation, and avenues for future research.


This study concerns the fabrication and characterization of spray-pyrolysed cadmium sulphide homojunction solar cells. As an alternative to conventional energy sources, PV technology has to be improved. Studying the factors that affect the performance of existing solar cells will help enhance their efficiency. At the same time, it is equally important to pursue R&D on new photovoltaic devices and processes that are less expensive for large-scale production. CdS is an important binary compound semiconductor that is very useful in the field of photovoltaics, and large-area CdS thin films are easy to prepare. To fabricate thin-film homojunction cadmium sulphide cells, SnO2 thin films were prepared and characterized as the lower electrode, with p-CdS as the active layer and n-CdS as the window layer. The cadmium used in fabricating homojunction solar cells is highly toxic; continued exposure to low levels of cadmium mainly damages the kidneys, lungs and bones. A real advantage of the spray pyrolysis process is that no toxic gases are emitted during deposition, and only very low concentrations of the chemicals are needed, so although the materials are toxic, the risk involved is very low. With large-scale usage it may become necessary for companies to buy back cells at the end of their life to retrieve chemicals such as cadmium; this would reduce the environmental burden as well as material wastage.


Modern computer systems are plagued with stability and security problems: applications lose data, web servers are hacked, and systems crash under heavy load. Many of these problems, or anomalies, arise from rare program behavior caused by attacks or errors. A substantial percentage of web-based attacks are due to buffer overflows. Many methods have been devised to detect and prevent anomalous situations arising from buffer overflows. The current state of the art in anomaly detection systems is relatively primitive and depends mainly on static code checking to take care of buffer overflow attacks. For protection, Stack Guards and Heap Guards are also widely used. This dissertation proposes an anomaly detection system based on the frequencies of system calls in the system call trace. System call traces represented as frequency sequences are profiled using sequence sets. A sequence set is identified by the starting sequence and the frequencies of specific system calls. The deviation of the current input sequence from the corresponding normal profile in the frequency pattern of system calls is computed and expressed as an anomaly score. A simple Bayesian model is used for accurate detection. Experimental results are reported which show that the frequency of system calls, represented using sequence sets, captures the normal behavior of programs under normal conditions of usage. This captured behavior allows the system to detect anomalies with a low rate of false positives. Data are presented which show that a Bayesian network over frequency variations responds effectively to induced buffer overflows. It can also help administrators to detect deviations in program flow introduced by errors.
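The frequency-profile idea can be illustrated with a toy score: build relative call frequencies from normal traces and flag traces whose calls are unlikely under that profile. The trace data, floor probability, and negative log-likelihood score below are illustrative stand-ins, not the dissertation's sequence-set profiles or its Bayesian model:

```python
from collections import Counter
import math

# Hypothetical system-call traces for normal runs of one program
normal_traces = [
    ["open", "read", "read", "write", "close"],
    ["open", "read", "write", "write", "close"],
    ["open", "read", "read", "read", "write", "close"],
]
# A buffer-overflow exploit typically injects calls the program never makes
attack_trace = ["open", "read", "execve", "setuid", "execve", "close"]

def frequency_profile(traces):
    # relative frequency of each system call across the normal runs
    counts = Counter(call for trace in traces for call in trace)
    total = sum(counts.values())
    return {call: n / total for call, n in counts.items()}

def anomaly_score(trace, profile, floor=1e-4):
    # negative mean log-likelihood of the trace under the frequency profile;
    # calls never seen in training get a small floor probability, so they
    # dominate the score
    return -sum(math.log(profile.get(c, floor)) for c in trace) / len(trace)

profile = frequency_profile(normal_traces)
normal_score = anomaly_score(normal_traces[0], profile)
attack_score = anomaly_score(attack_trace, profile)
```

A detector of this kind would compare the score against a threshold calibrated on normal runs; the injected `execve`/`setuid` calls push the attack trace's score far above that of any normal trace.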