916 results for Data Mining and its Application
Abstract:
In this work, a protocol for the formulation of a concentrated enzyme product to be applied in fruit juice treatment is described. Downstream processing conditions for the recovery and concentration of pectinases produced by the new strain Aspergillus niger LB-02-SF in solid state cultivation were assessed. The solid-liquid ratio in the extraction step of pectinase recovery from the cultivated media was evaluated, and the highest activity was obtained with a solid-liquid ratio of 1:10. The crude extract was concentrated by ultrafiltration: total pectinase (TP) activity was concentrated 73.6-fold relative to the crude extract, and a final TP titer of 663 U mL–1 was obtained with a recovery yield of 73.7%. KCl and different glycerol concentrations were added to the concentrated extract, and the stability of the pectinases during storage at 5 °C for 59 weeks was tested. The formulation with 50% w/w glycerol was applied to the treatment of apple and grape juices, and the results of these tests were statistically comparable to those obtained with two high-quality commercial preparations.
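As a consistency check on the figures above, a hedged sketch of the ultrafiltration mass balance: only the 73.6-fold concentration, the 663 U mL–1 titer and the 73.7% yield come from the abstract; the crude-extract volume is an assumed value.

```python
# Hypothetical worked example of the ultrafiltration mass balance.
# Only the fold-concentration (73.6x), final titer (663 U/mL) and
# recovery yield (73.7%) are taken from the abstract; volumes are assumed.

crude_titer_u_per_ml = 663.0 / 73.6      # back-calculated crude activity, ~9.0 U/mL
crude_volume_ml = 10_000.0               # assumed crude extract volume

total_activity_crude = crude_titer_u_per_ml * crude_volume_ml
recovery_yield = 0.737                   # 73.7% of activity retained after UF
total_activity_retentate = total_activity_crude * recovery_yield

final_titer = 663.0                      # U/mL in the concentrate
retentate_volume_ml = total_activity_retentate / final_titer

print(f"crude titer      = {crude_titer_u_per_ml:.1f} U/mL")
print(f"retentate volume = {retentate_volume_ml:.0f} mL "
      f"({crude_volume_ml / retentate_volume_ml:.0f}-fold volume reduction)")
```

Note that a 73.6-fold titer increase combined with a 73.7% yield implies roughly a 100-fold volume reduction, which the sketch reproduces.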
Abstract:
In this doctoral thesis, a tomographic STED microscopy technique for 3D super-resolution imaging was developed and used to observe bone remodeling processes. To improve upon existing methods, we adopted a tomographic approach built on a commercially available stimulated emission depletion (STED) microscope. A region of interest (ROI) was observed at two oblique angles: one in the standard inverted configuration from below (bottom view) and another from the side (side view) via a micro-mirror positioned close to the ROI. The two views were reconstructed into a final tomogram. The technique, named tomographic STED microscopy, achieved an axial resolution of approximately 70 nm on microtubule structures in a fixed biological specimen. High-resolution imaging of osteoclasts (OCs) actively resorbing bone was achieved by creating an optically transparent coating on a microscope coverglass that imitates a fractured bone surface. 2D super-resolution STED microscopy on the bone layer showed approximately 60 nm lateral resolution on a resorption-associated organelle, allowing these structures to be imaged with super-resolution microscopy for the first time. The tomographic STED microscopy technique was further applied to study the resorption mechanisms of OCs cultured on the bone coating. It revealed specific actin cytoskeleton structures, comet tails, some facing upwards and others downwards. In our view, this indicates that the actin cytoskeleton is involved in vesicular exocytosis and endocytosis during bone resorption. The application of tomographic STED microscopy in bone biology demonstrates that 3D super-resolution techniques can provide new insights into biological 3D nano-structures beyond the diffraction limit when the optical constraints of super-resolution imaging are carefully taken into account.
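The thesis's actual reconstruction pipeline is not given in the abstract; the following is only a minimal sketch of the core idea, rotating the side view (seen via the micro-mirror) into the bottom-view frame and fusing the two stacks. The function name, the 90° mirror angle and the plain averaging are illustrative assumptions, not the author's method.

```python
# Minimal sketch of two-view tomographic fusion (NOT the thesis's actual
# reconstruction code): rotate the side-view volume into the bottom-view
# frame and average the two registered stacks.
import numpy as np
from scipy.ndimage import rotate

def fuse_two_views(bottom_view: np.ndarray,
                   side_view: np.ndarray,
                   mirror_angle_deg: float = 90.0) -> np.ndarray:
    """Rotate the side view about one axis into the bottom-view frame,
    then average. A real reconstruction would also need subpixel
    registration and PSF-aware (e.g. deconvolution-based) weighting."""
    side_in_bottom_frame = rotate(
        side_view, angle=-mirror_angle_deg, axes=(0, 1), reshape=False,
        order=1, mode="constant", cval=0.0)
    return 0.5 * (bottom_view + side_in_bottom_frame)

# toy volumes (z, y, x): the "side view" is a rotated copy of the bottom view
rng = np.random.default_rng(0)
vol_bottom = rng.random((64, 64, 64))
vol_side = rotate(vol_bottom, angle=90.0, axes=(0, 1), reshape=False, order=1)
tomogram = fuse_two_views(vol_bottom, vol_side)
print(tomogram.shape)
```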
Abstract:
This is a mixed-methodology study that uses an autoethnographic approach to combine an autobiography with a survey of practitioners who work in children's mental health. It is largely about the implementation of Evidence-Based Practice (EBP), and about the questions, concerns and experiences that I have had, compared with those of my fellow practitioners. It is also about my journey as a mental health professional, and how I came to recognize that in order to achieve my goals I needed to return to university to pursue a Master's degree. Within the research, I identify and discuss different definitions of EBP and identify several themes. I deconstruct the implementation of EBPs through the lens of Foucault and his notion of governmentality. I offer policy and practice recommendations to improve the implementation of EBP and the services received by children facing mental health issues.
Abstract:
The objective of this thesis by articles is to modestly present a few steps along the path that will (we hope) lead to a general solution of the problem of artificial intelligence. The thesis contains four articles, each presenting a different new method of perceptual inference using machine learning and, more particularly, deep neural networks. Each of these papers demonstrates the utility of its proposed method on a computer vision task. The methods are applicable in a more general context, and in some cases they have been applied elsewhere, but this is not covered within this thesis. In the first article, we present two new variational inference algorithms for the generative image model known as spike-and-slab sparse coding (S3C). These faster inference methods allow us to use S3C models of much larger sizes than before. We show that they are better at extracting feature detectors when very few labeled examples are available for training. Starting from an S3C model, we then build a deep architecture, the partially directed deep Boltzmann machine (PD-DBM). This model was designed to simplify the training of deep Boltzmann machines, which normally require a greedy pre-training phase for each layer. That problem is solved to some extent, but the inference cost in the new model is too high for practical use. In the second article, we return to the problem of jointly training deep Boltzmann machines. This time, instead of changing the family of models, we introduce a new training criterion that gives rise to multi-prediction deep Boltzmann machines (MP-DBMs). MP-DBMs can be trained in a single stage and achieve better classification accuracy than classical DBMs. They also train with standard variational methods rather than requiring a discriminative classifier to obtain good classification performance. One drawback of such models is their inability to generate samples, but this is not too serious, since the classification performance of deep Boltzmann machines is no longer a priority given the latest advances in supervised learning. Despite this, MP-DBMs remain interesting because they can accomplish certain tasks that purely supervised models cannot, such as classifying incomplete data or intelligently filling in the missing information in such data. The work presented in this thesis took place during a period of major transformation in the field of deep neural network learning, triggered by Geoffrey Hinton's discovery of the dropout algorithm. Dropout makes purely supervised training of feed-forward architectures possible without the danger of overfitting. The third article presented in this thesis introduces a new activation function specially designed to work with the dropout algorithm. This activation function, called maxout, enables the use of multi-channel pooling in a purely supervised learning setting. We show how several object recognition tasks are better performed using maxout. Finally, we present a real industrial use case: the transcription of multi-digit house numbers. By combining maxout with a new kind of output layer for convolutional neural networks, we show that it is possible to reach human-level accuracy on a challenging dataset of photos taken by Google's cars. This system has been successfully deployed at Google to read roughly one hundred million house numbers.
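Maxout itself is concretely specified in the corresponding article: each output unit takes the maximum over a group of k linear units. A minimal numpy sketch (shapes and names are illustrative):

```python
# Minimal maxout layer: affine transform into k linear "pieces" per output
# unit, then an elementwise max over the pieces (Goodfellow et al., 2013).
import numpy as np

def maxout(x, W, b, k):
    """x: (batch, d_in); W: (d_in, d_out * k); b: (d_out * k,).
    Columns are grouped so that each output unit owns k consecutive pieces.
    Returns (batch, d_out): max over each group of k linear units."""
    z = x @ W + b                      # (batch, d_out * k)
    z = z.reshape(x.shape[0], -1, k)   # (batch, d_out, k)
    return z.max(axis=-1)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
W = rng.standard_normal((8, 5 * 3))    # d_out = 5 units, k = 3 pieces each
b = np.zeros(5 * 3)
print(maxout(x, W, b, k=3).shape)      # (4, 5)
```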
Abstract:
This work envisages the fermentation of prawn shell waste into a more nutritious product with simpler components for application as a feed ingredient in aquaculture. This product would be a rich source of protein along with chitin, minerals, vitamins and N-acetyl glucosamine. A brief description is given of the various processing methods (chemical and bioprocess) employed for chitin, chitosan and single cell protein preparation from shell waste. The work deals with the isolation of the microflora associated with prawn shell degradation, and describes the methods adopted for fermentation of prawn shell waste with the selected highly chitinoclastic strains. SSF and SmF were compared for each selected strain in terms of enrichment of protein, lipid and carbohydrate in the fermented product, and a detailed analysis of product quality is discussed. The feed formulation and feeding experiment are explained in detail; statistical analysis of the various growth parameters was done with Duncan's multiple range test, and the 28-day feeding experiment is described briefly. A method for the complete utilization of shell waste is explained with the help of experiments.
Abstract:
The present work is aimed at the development of an appropriate microbial technology for protecting larvae of Macrobrachium rosenbergii from disease and for increasing survival rates in hatcheries. The application of immunostimulants to activate the immune system of cultured animals against pathogens is the widely accepted alternative to antibiotics in aquaculture, and the most important immunostimulant is glucan. A research programme was therefore undertaken on the extraction of glucan from Acremonium diospyri and its application in the M. rosenbergii larval rearing system, along with bacterins, as microspheres. The main objectives of the study are the development of aquaculture-grade glucan from Acremonium diospyri, a microencapsulated drug delivery system for the larvae of M. rosenbergii, and a microencapsulated glucan-with-bacterin preparation for enhanced production of M. rosenbergii in the larval rearing system. Based on the results of field trials of the microencapsulated glucan with bacterin preparation, it is concluded that applying the preparation at a concentration of 25 g per million larvae once every seven days will enhance the production and quality of M. rosenbergii seed.
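As a worked example of the reported dosing rule (a hedged sketch: only the figure of 25 g per million larvae once every seven days comes from the abstract; the batch size and rearing period are assumed):

```python
# Hedged dosing helper for the reported protocol: 25 g of the
# microencapsulated glucan-bacterin preparation per million larvae,
# applied once every seven days. Batch size and rearing period assumed.
def total_dose_g(n_larvae: int, rearing_days: int,
                 g_per_million: float = 25.0, interval_days: int = 7) -> float:
    applications = rearing_days // interval_days
    return (n_larvae / 1_000_000) * g_per_million * applications

# e.g. a hypothetical 2-million-larvae run over a 28-day cycle:
print(total_dose_g(2_000_000, 28))  # 4 applications x 50 g = 200.0 g
```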
Abstract:
The present studies make it clear that Bacillus pumilus xylanase has the characteristics required of an industrial enzyme (xylanases that are active and stable at elevated temperatures and alkaline pH are needed). SSF production of xylanases and their direct application appears to be an innovative technology in which the fermented substrate itself serves as the enzyme source and is used directly in the bleaching process without prior downstream processing. The direct use of SSF enzymes in bleaching is a relatively new biobleaching approach, and it can benefit the bleaching process by lowering xylanase production costs and improving the economics and viability of the biobleaching technology. The application of enzymes to the bleaching process is considered an environmentally friendly approach that can reduce the negative environmental impact of chlorine-based bleaching agents. It has been demonstrated that pretreatment of kraft pulp with xylanase prior to bleaching (biobleaching) can facilitate the subsequent removal of lignin by bleaching chemicals, thereby reducing the demand for elemental chlorine or improving final paper brightness. This xylanase pre-treatment increased brightness by 8.5 units compared with non-enzymatically treated bleached pulp prepared under identical conditions. The consumption of active chlorine can thus be reduced, which decreases the toxicity, colour, chloride and absorbable organic halogen (AOX) levels of bleaching effluents. The xylanase treatment improves the drainage, strength properties and fragility of pulps, and also increases their brightness. This positive result shows that enzyme pre-treatment facilitates the removal of chromophore fragments from the pulp, thereby making the process more environmentally friendly.
Abstract:
The TRMM Microwave Imager (TMI) is reported to be a useful sensor for measuring atmospheric and oceanic parameters even in cloudy conditions. Vertically integrated specific humidity, i.e. Total Precipitable Water (TPW), retrieved from the water vapour absorption channel (22 GHz), along with 10 m wind speed and rain rate derived from TMI, is used to investigate moisture variation over the North Indian Ocean. Intraseasonal Oscillations (ISO) of TPW during the summer monsoon seasons of 1998, 1999 and 2000 over the North Indian Ocean are explored using wavelet analysis. The dominant waves in TPW during the monsoon periods and the differences in ISO between the Arabian Sea and the Bay of Bengal are investigated. The northward propagation of the TPW anomaly and its coherence with coastal rainfall is also studied. For the diagnostic study of heavy rainfall spells over the west coast, the intrusion of TPW over the North Arabian Sea is seen to be a useful tool.
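The abstract does not say which mother wavelet was used; the sketch below only illustrates the kind of analysis described, a Morlet wavelet power spectrum of a daily TPW series used to pick out a dominant intraseasonal period. The synthetic 40-day signal and all parameters are assumptions.

```python
# Minimal Morlet wavelet power spectrum for a daily TPW time series,
# of the kind used to isolate intraseasonal (30-60 day) oscillations.
# The data here are synthetic; the real study used TMI-derived TPW.
import numpy as np

def morlet_power(x, scales, omega0=6.0, dt=1.0):
    """Continuous wavelet transform power via FFT convolution
    (Torrence & Compo normalization)."""
    n = x.size
    x = x - x.mean()
    freqs = 2 * np.pi * np.fft.fftfreq(n, d=dt)   # angular frequencies
    xf = np.fft.fft(x)
    power = np.empty((scales.size, n))
    for i, s in enumerate(scales):
        # Fourier transform of the (analytic) Morlet wavelet at scale s
        psi_f = (np.pi ** -0.25) * np.sqrt(2 * np.pi * s / dt) \
                * np.exp(-0.5 * (s * freqs - omega0) ** 2) * (freqs > 0)
        power[i] = np.abs(np.fft.ifft(xf * np.conj(psi_f))) ** 2
    return power

days = np.arange(366)                       # one monsoon year, daily samples
tpw = 50 + 3 * np.sin(2 * np.pi * days / 40) \
        + np.random.default_rng(1).normal(0, 0.5, days.size)  # 40-day ISO
scales = np.arange(8.0, 80.0, 2.0)
omega0 = 6.0                                # Fourier period ~ 1.03 * scale
period = scales * (4 * np.pi) / (omega0 + np.sqrt(2 + omega0 ** 2))
dominant = period[morlet_power(tpw, scales).mean(axis=1).argmax()]
print(f"dominant period = {dominant:.0f} days")  # ~40 for this toy series
```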
Abstract:
This thesis presents a detailed account of a cost-effective approach towards enhanced production of alkaline protease at profitable levels using different fermentation designs employing cheap agro-industrial residues. It involves the optimisation of process parameters for the production of a thermostable alkaline protease by Vibrio sp. V26 under solid state, submerged and biphasic fermentations, production of the enzyme using cell immobilisation technology, and the application of the crude enzyme to the deproteinisation of crustacean waste. The present investigation suggests an economical route to improved production of alkaline protease using different fermentation designs and inexpensive agro-industrial residues. Moreover, the use of agro-industrial and other solid waste substrates for fermentation helps to conserve the already dwindling global energy resources. Another route to economically feasible production is the use of immobilisation techniques, which avoid the wasteful expense of continually growing microorganisms. The high protease-producing potential of the organism under study supports its exploitation in the utilisation and management of wastes. However, strain improvement studies to obtain high-yielding variants using mutagens or gene transfer are required before recommending it to industry. Industries all over the world have made several attempts to exploit the microbial diversity of this planet. For sustainable development, it is essential to discover, develop and defend this natural prosperity, and the industrial development of any country depends critically on the intellectual and financial investment in this area. The need of the hour is to harness the beneficial uses of microbes for maximum utilisation of natural resources and technological yields. Owing to the multitude of applications in a variety of industrial sectors, there has always been an increasing demand for novel producers and sources of alkaline proteases, as well as for innovative methods of production at commercial scale. This investigation forms a humble endeavour in that direction and bequeaths hope and inspiration for inventions to follow.
Abstract:
Even though a large number of schemes have been proposed and developed for N2 laser pumped dye lasers, their relatively low efficiency has compelled scientists to devise new methods to improve system efficiency. The energy transfer mechanism has been shown to be a convenient tool for enhancing the efficiency of dye lasers. The present work covers a detailed study of the performance characteristics of an N2 laser pumped dye laser in the conventional mode and also when pumped via the energy transfer mechanism. For the present investigations, a dye laser pumped by an N2 laser (about 200 kW peak power) was fabricated, with a grating at grazing incidence used as the beam expanding device. At its best performance the system gave an output peak power of 15 kW for a 5 × 10⁻³ mol/l Rh-6G solution in methanol; the conversion efficiency was 7.5%. The output beam had a divergence of 2 mrad and a bandwidth of 0.9 Å. Suitable modifications were suggested for obtaining better conversion efficiency and bandwidth.
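The reported figures are mutually consistent, as a one-line check shows (the 200 kW pump power is the value back-calculated from the stated output and efficiency):

```python
# Consistency check: 15 kW out of ~200 kW pump corresponds to the
# stated 7.5% conversion efficiency.
pump_peak_kw = 200.0      # N2 laser peak power (back-calculated)
output_peak_kw = 15.0     # dye laser output (5e-3 mol/l Rh-6G in methanol)
efficiency = output_peak_kw / pump_peak_kw
print(f"conversion efficiency = {efficiency:.1%}")  # 7.5%
```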
Abstract:
In this paper we propose a generalization of density functional theory. The theory leads to single-particle equations of motion with a quasilocal mean-field operator, which contains a quasiparticle position-dependent effective mass and a spin-orbit potential. The energy density functional is constructed using the extended Thomas-Fermi approximation, and the ground-state properties of doubly magic nuclei are considered within this framework. Calculations were performed using the finite-range Gogny D1S force, and the results are compared with exact Hartree-Fock calculations.
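The abstract does not print the equations; for orientation, a quasilocal single-particle equation with a position-dependent effective mass and spin-orbit potential conventionally takes the following generic form (notation assumed, not taken from the paper):

```latex
% Generic quasilocal single-particle equation (notation assumed, not
% copied from the paper): position-dependent effective mass m*(r),
% central field U(r), spin-orbit form factor W(r).
\left[
  -\nabla \cdot \frac{\hbar^{2}}{2m^{*}(\mathbf{r})}\,\nabla
  + U(\mathbf{r})
  - i\,\mathbf{W}(\mathbf{r}) \cdot \left(\nabla \times \boldsymbol{\sigma}\right)
\right] \varphi_{i}(\mathbf{r}) = \varepsilon_{i}\,\varphi_{i}(\mathbf{r})
```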
Abstract:
The increasing interconnection of information and communication systems leads to a further increase in complexity and thus also to a further increase in security vulnerabilities. Classical protection mechanisms such as firewall systems and anti-malware solutions have long ceased to offer adequate protection against intrusions into IT infrastructures. Intrusion Detection Systems (IDS) have established themselves as a very effective instrument for protection against cyber attacks. Such systems collect and analyze information from network components and hosts in order to automatically detect unusual behavior and security violations. While signature-based approaches can only detect already known attack patterns, anomaly-based IDS are also able to detect new, previously unknown attacks (zero-day attacks) at an early stage. The core problem of intrusion detection systems, however, lies in the optimal processing of the enormous volumes of network data and the development of an adaptive detection model that works in real time. To address these challenges, this dissertation provides a framework consisting of two main parts. The first part, called OptiFilter, uses a dynamic queuing concept to process the continuously arriving network data, continuously assembles network connections, and exports structured input data for the IDS. The second part is an adaptive classifier comprising a classifier model based on an Enhanced Growing Hierarchical Self Organizing Map (EGHSOM), a model of normal network behavior (NNB) and an update model. In OptiFilter, tcpdump and SNMP traps are used to continuously aggregate network packets and host events. These aggregated network packets and host events are further analyzed and converted into connection vectors. To improve the detection rate of the adaptive classifier, the artificial neural network GHSOM is intensively investigated and substantially further developed. In this dissertation, various approaches are proposed and discussed: a classification-confidence margin threshold is defined to uncover unknown malicious connections, the stability of the growth topology is increased by novel approaches for initializing the weight vectors and by strengthening the winner neurons, and a self-adaptive procedure is introduced to keep the model continuously updated. In addition, the main task of the NNB model is to further examine the unknown connections detected by the EGHSOM and to verify whether they are normal. However, network traffic data change constantly due to the concept-drift phenomenon, which leads to the generation of non-stationary network data in real time. This phenomenon is better controlled by the update model. The EGHSOM model can effectively detect new anomalies, and the NNB model optimally adapts to the changes in the network data. In the experimental investigations, the framework showed promising results. In the first experiment, the framework was evaluated in offline mode: OptiFilter was evaluated with offline, synthetic and realistic data, and the adaptive classifier was evaluated with 10-fold cross-validation to estimate its accuracy.
In the second experiment, the framework was installed on a 1 to 10 GB network link and evaluated online in real time. OptiFilter successfully converted the enormous amount of network data into structured connection vectors, and the adaptive classifier classified them precisely. The comparative study between the developed framework and other well-known IDS approaches shows that the proposed IDS framework outperforms all other approaches. This can be attributed to the following key points: processing of the collected network data, achieving the best performance (such as overall accuracy), detecting unknown connections, and developing a real-time intrusion detection model.
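The dissertation's EGHSOM implementation is not reproduced in the abstract; the sketch below only illustrates the classification-confidence margin idea, flagging a connection as unknown when the best-matching SOM unit is not clearly closer than the runner-up. The codebook, labels, margin value and function names are all illustrative.

```python
# Illustrative confidence-margin test for a SOM-style classifier:
# if the best-matching unit is not clearly closer than the runner-up,
# the connection vector is routed to further analysis as "unknown".
import numpy as np

def classify_with_margin(x, prototypes, labels, margin=0.2):
    """x: (d,) connection vector; prototypes: (n_units, d) SOM codebook;
    labels: per-unit class labels. Returns a label or 'unknown'."""
    d = np.linalg.norm(prototypes - x, axis=1)
    best, second = np.partition(d, 1)[:2]        # two smallest distances
    confidence = (second - best) / (second + 1e-12)
    return labels[d.argmin()] if confidence >= margin else "unknown"

rng = np.random.default_rng(0)
codebook = rng.random((16, 8))                   # toy "trained" SOM units
unit_labels = ["normal"] * 8 + ["attack"] * 8
print(classify_with_margin(rng.random(8), codebook, unit_labels))
```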
Abstract:
In Colombia, the definitions of crimes are found in the special part of the Criminal Code. They are set out as though the only way to commit them were by being the author of the punishable conduct there described. Nevertheless, in the general part of the Criminal Code we find other forms of criminal responsibility for the conduct described in the special part, and complicity is one of those forms. Although complicity is established in the Criminal Code, it does not appear as a criminal definition and, as such, it is ignored or, worse, applied incorrectly. For those reasons, this article reflects upon the validity of complicity. Accordingly, it analyzes what complicity is, how complicity differs from co-authorship, and how complicity would work in some practical cases related to socio-economic crimes in which it has been overlooked or used incorrectly. The article concludes that complicity is still valid, and that its correct application materializes the ideal of justice.