826 results for Stack Overflow


Relevance: 10.00%

Abstract:

The determination of the tertiary structure of the ribosome was an important step in understanding the mechanism of protein synthesis. However, elucidating the structure of the ribosome as such does not by itself yield an understanding of its function. To better understand the nature of the relationships between the structure and the function of the ribosome, its structure must be studied systematically. Over the past few years, we have undertaken a systematic effort to identify and characterize new structural motifs that exist in the structure of the ribosome and of other RNA-containing molecules. The analysis of several examples of the packing of two RNA helices in the ribosome structure allowed us to identify a new structural motif, named "G-ribo". In this motif, the interaction of a guanosine in one helix with the ribose of a nucleotide of another helix gives rise to a network of complex interactions between neighboring nucleotides. The G-ribo motif is found at 8 locations in the structure of the ribosome. The structure of the G-ribo has certain particularities that allow it to favor the formation of a certain type of pseudoknot in the ribosome. The systematic analysis of the structure of the ribosome and of RNase P allowed us to identify another structural motif, named "DTJ" or "Double-Twist Joint" motif. This motif is formed of three short helices that stack on one another. In the contact zone between each pair of helices, two consecutive base pairs are over-twisted compared to two consecutive base pairs found in A-form RNA. A nucleotide of one base pair is always connected directly to a nucleotide of the over-twisted base pair, while the opposite nucleotides are connected by one or more unpaired nucleotides. Introducing an over-twist between two consecutive base pairs breaks the stacking between the nucleotides and destabilizes the RNA helix. In the DTJ motif, the unpaired nucleotides that link the two over-twisted base pairs interact with one of the three helices forming the motif, thus offering an elegant strategy for stabilizing the arrangement. To determine the sequence constraints imposed on the tertiary structure of a recurrent motif in the ribosome, we developed a new experimental approach. We introduced combinatorial libraries of certain nucleotides found in particular motifs of the ribosome. Following the analysis of alternative sequences selected in vivo for different representatives of a motif, we were able to identify the constraints responsible for the integrity of a motif and those responsible for interactions with the elements that form the structural context of the motif. The results presented in this thesis considerably broaden our understanding of the principles of RNA structure formation and provide a new way of identifying and characterizing new RNA structural motifs.

Relevance: 10.00%

Abstract:

Through her recourse to diverse forms of expression, Unica Zürn (1916-1970) reinvigorates the space of the page by making it an active participant in the writing of the self. The "I" indeed seems to multiply through "anagrammatic" play on the various signs mobilized, whether alphabetic, pictorial, or musical. Self-representation is thus inscribed within an unusual multidisciplinary body of work, one that renews the somewhat "exhausted" aesthetic of the Surrealist movement. This attitude toward the page, where several signs that can be "decomposed" and "recomposed" share the space, is observable in Unica Zürn's literary as well as pictorial work. The process of creation through the letter, the line, and the musical note tends to revalorize the material support used by erasing the boundaries between artistic disciplines, which calls for a distanced gaze from the reader/spectator upon the work. In order to interpret Zürn's works in the plurality of the artistic means deployed in them, an intermedial approach will be favored. The first chapter of this thesis examines how a certain dialogue is articulated between the discourse of intermediality scholars and the work of Unica Zürn. The relation to the object will be our point of entry into intermedial matters. Through a return to the materiality of the media experience, we observe that Zürn brings the instruments and supports of creation to the forefront, which leads to a distorted representation of the self, of space, and of time. Once the parallel between the concepts of intermediality and Zürn's works is established, we concentrate on the musical side of the author-artist's multidisciplinary œuvre. The second chapter deals with the intrusion of sound into the textual universe, notably through the reappropriation of Vincenzo Bellini's opera Norma. This rewriting is entitled Les jeux à deux and takes considerable distance from the original text. It is accompanied by drawings made by the author-artist directly on the opera's scores. The multiple gaze cast on Zürn's work makes it possible to understand that palimpsest writing participates in the process of self-representation, while elaborating a discourse on the encounter between literature, drawing, and music, as well as on the influence of this juxtaposition on the overflowing of traditional media boundaries.

Relevance: 10.00%

Abstract:

The results were obtained with the "Insight-II" software from Accelrys (San Diego, CA).

Relevance: 10.00%

Abstract:

The photoacoustic investigations carried out on different photonic materials are presented in this thesis. The photonic materials selected for investigation are tape-cast ceramics, multilayer dielectric coatings, organic-dye-doped PVA films, and PMMA matrices doped with dye mixtures. The studies are performed by measuring the photoacoustic signal generated as a result of modulated cw laser irradiation of the samples. The gas-microphone scheme is employed for detection of the photoacoustic signal. The different measurements reported here reveal the adaptability and utility of the PA technique for the characterization of photonic materials.

Ceramics find applications in the microelectronics industry. Tape-cast ceramics are the building blocks of many electronic components, and certain ceramic tapes are used as thermal barriers. The thermal parameters of these tapes will not be the same as those of thin films of the same materials; the parameters are influenced by the presence of foreign bodies in the matrix and by the sample preparation technique. Measurements are done on ceramic tapes of zirconia, a zirconia-alumina combination, barium titanate, barium tin titanate, silicon carbide, lead zirconate titanate (PZT), and lead magnesium niobate titanate (PMN-PT). Various configurations of the photoacoustic technique, viz. heat-reflection geometry and heat-transmission geometry, have been used for the evaluation of different thermal parameters of the samples. The heat-reflection geometry of the PA cell has been used for the evaluation of thermal effusivity, and the heat-transmission geometry has been used for the evaluation of thermal diffusivity. From the thermal diffusivity and thermal effusivity values, the thermal conductivity is also calculated. The calculated values are nearly the same as the values reported for the pure materials. This shows the feasibility of the photoacoustic technique for the thermal characterization of ceramic tapes.

Organic dyes find applications as holographic recording media and as active media for laser operation. Knowledge of the photochemical stability of the material is essential if it is to be used for any of these applications. Mixing one dye with another can change the properties of the resulting system. Through careful mixing of the dyes in appropriate proportions and incorporating them in polymer matrices, media of the required stability can be prepared. Investigations are carried out on PMMA samples doped with a Rhodamine 6G-Rhodamine B mixture. Addition of RhB in small amounts is found to stabilize Rh6G against photodegradation, and addition of Rh6G into RhB increases the photosensitivity of the latter. The PA technique has been successfully employed for the monitoring of the dye-mixture-doped PMMA samples. The same technique has also been used for monitoring the photodegradation of a laser dye, cresyl violet, doped in polyvinyl alcohol.

Another important application of the photoacoustic technique is the non-destructive evaluation of layered samples. The depth-profiling capability of the PA technique has been used for the non-destructive testing of multilayer dielectric films, which are highly reflecting in the wavelength range selected for the investigations. Even though calculation of the thickness of the film is not possible, the number of layers present in the system can be found using the PA technique. The phase plot has clear step-like discontinuities, the number of which coincides with the number of layers present in the multilayer stack. This shows the sensitivity of the PA signal phase to boundaries in a layered structure. This aspect of the PA signal can be utilized in non-destructive depth profiling of reflecting samples and for the identification of defects in layered structures.
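The effusivity-diffusivity route to thermal conductivity mentioned above can be stated compactly: since e = sqrt(k·ρ·c) and α = k/(ρ·c), it follows that k = e·√α. A minimal sketch of that arithmetic; the numeric inputs below are placeholders, not measurements from the thesis:

```python
import math

def thermal_conductivity(effusivity, diffusivity):
    """Thermal conductivity k from effusivity e and diffusivity alpha.

    e = sqrt(k * rho * c) and alpha = k / (rho * c), hence k = e * sqrt(alpha).
    Units: e in W s^0.5 m^-2 K^-1, alpha in m^2/s, k in W m^-1 K^-1.
    """
    return effusivity * math.sqrt(diffusivity)

# Illustrative numbers only (not values from the thesis):
e_sample = 1.4e3     # from the heat-reflection geometry
alpha_sample = 9e-7  # from the heat-transmission geometry
print(f"k = {thermal_conductivity(e_sample, alpha_sample):.3f} W/m/K")
```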

Relevance: 10.00%

Abstract:

The present work deals with the preparation and characterization of high-k aluminum oxide thin films by atomic layer deposition for gate dielectric applications. The ever-increasing demand for functionality and speed in semiconductor applications requires enhanced performance, which is achieved by the continuous miniaturization of CMOS dimensions. Because of this miniaturization, several parameters, such as the dielectric thickness, come within reach of their physical limits. As the required oxide thickness approaches the sub-1 nm range, SiO2 becomes unsuitable as a gate dielectric because its limited physical thickness results in excessive leakage current through the gate stack, affecting the long-term reliability of the device. This leakage issue is solved in the 45 nm technology node by the integration of high-k based gate dielectrics, as their higher k-value allows a physically thicker layer while targeting the same capacitance and Equivalent Oxide Thickness (EOT). Moreover, Intel announced that Atomic Layer Deposition (ALD) would be applied to grow these materials on the Si substrate. ALD is based on the sequential use of self-limiting surface reactions of a metallic and an oxidizing precursor. This self-limiting feature allows control of material growth and properties at the atomic level, which makes ALD well suited for the deposition of highly uniform and conformal layers in CMOS devices, even if these have challenging 3D topologies with high aspect ratios. ALD has now acquired the status of the state-of-the-art and most preferred deposition technique for producing nanolayers of various materials of technological importance. The technique can be adapted to different situations where precision in thickness and perfection in structure are required, especially in the microelectronics scenario.
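The capacitance argument above is captured by the Equivalent Oxide Thickness relation EOT = t_high-k · (k_SiO2 / k_high-k). A minimal sketch, assuming a typical literature k-value for Al2O3 rather than a figure from this work:

```python
K_SIO2 = 3.9  # relative permittivity of SiO2

def eot(physical_thickness_nm, k_high_k):
    """Equivalent Oxide Thickness: the SiO2 thickness that would give the
    same capacitance per unit area as the physically thicker high-k layer."""
    return physical_thickness_nm * K_SIO2 / k_high_k

# Illustrative: a 3 nm Al2O3 film, taking k ~ 9 (a typical literature value)
print(f"EOT = {eot(3.0, 9.0):.2f} nm")  # ~1.3 nm at three times the physical thickness
```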

Relevance: 10.00%

Abstract:

Pollutants, once they enter the earth's atmosphere, become part of it, and hence their dispersion, dilution, direction of transport, etc. are governed by the meteorological conditions. The thesis deals with the study of the atmospheric dispersion capacity, wind climatology, atmospheric stability, and pollutant distribution by means of a model, together with suggestions for comprehensive planning for the industrially developing city of Cochin. The definition, sources, types, and effects of air pollution are dealt with briefly. The influence of various meteorological parameters, such as the vector wind, temperature and its vertical structure, and atmospheric stability, on pollutant dispersal has been studied. The importance of inversions, mixing heights, and ventilation coefficients is brought out. The spatial variation of mixing heights, studied for the first time on a microscale region, serves to delineate the regions of good and poor dispersal capacity. A study of the wind direction fluctuation σθ and its relation to stability and mixing heights is shown to be very useful. It is shown that the method of σθ computation needs to be re-examined. The development of the Gaussian plume model, along with its application to multiple sources, is presented. The pollutant chosen was sulphur dioxide, and industrial sources alone were considered. The percentage frequency of occurrence of inversions and isothermals is found to be low in all months of the year. The spatial variation of mixing heights revealed that a single mixing height cannot be taken as representative of the whole city, parts of which have low mixing heights, and the monsoonal months showed the lowest mixing heights. The study of ventilation coefficients showed values less than the required optimum value of 6000 m²/s. However, the low values may be due to the consideration of the surface wind alone instead of the vertically averaged wind. Relatively more calm conditions and light winds during the night and strong winds during the day were observed. During most of the year, westerlies during the day and northeasterlies during the night are the dominant winds. Unstable conditions with high values of σθ during the day and stable conditions with lower values of σθ during the night are the prominent features. The monsoonal months showed neutral stability most of the time. A study of σθ and the Pasquill stability categories revealed the difficulty of assigning a unique value of σθ to each stability category. For the first time, regression equations have been developed relating mixing heights and σθ. A closer examination of σθ revealed that half of the range of wind direction fluctuations should be taken, instead of one-sixth, to compute σθ. The spatial distribution of SO2 showed a more or less uniform distribution with a slight intrusion towards the south. The winter months showed low concentrations, contrary to expectations. The variation of the concentration is found to be influenced more by the mixing height and the stack height than by the wind speed. In the densely populated areas the concentration is more than the threshold limit value. However, the values reported appear to be high, because no depletion of the material through dry or wet deposition is assumed, and also because of the inclusion of calm conditions with a very light wind speed. A reduction of emissions during the night with a consequent rise during the day would bring down the levels of pollution.
The probable locations for new industries could be the extreme southeast parts, because the concentration towards the north falls off very quickly, resulting in low concentrations. In such a case the pollutant spread would be towards the south and west, thus keeping the city interior relatively free from pollution. A more detailed examination of the pollutant spread by means of models that take dry and wet deposition into account may be necessary. Nevertheless, the present model serves to give the trend of the distribution of pollutant concentration, with which one can suggest optimum locations for new industries.
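The standard ground-reflecting form of the Gaussian plume model used in such studies can be sketched in a few lines. The input numbers below are assumptions for illustration, not the thesis data:

```python
import math

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Ground-reflecting Gaussian plume concentration at a receptor (g/m^3).

    Q: source strength (g/s); u: wind speed (m/s); H: effective stack height (m);
    y, z: crosswind and vertical receptor coordinates (m); sigma_y, sigma_z:
    dispersion parameters (m) evaluated at the downwind distance of interest
    for the prevailing Pasquill stability category.
    """
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    # The image-source term models total reflection of the plume at the ground.
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2 * sigma_z**2)))
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Illustrative single-stack run:
c = gaussian_plume(Q=100.0, u=3.0, y=0.0, z=0.0, H=50.0, sigma_y=80.0, sigma_z=40.0)
print(f"ground-level centreline concentration: {c:.2e} g/m^3")
```

For multiple industrial sources, as in the thesis, the concentration field is the superposition of one such term per stack.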

Relevance: 10.00%

Abstract:

In Wireless Sensor Networks (WSN), neglecting the effects of varying channel quality can lead to unnecessary wastage of precious battery resources, which in turn can result in the rapid depletion of sensor energy and the partitioning of the network. Fairness is a critical issue when accessing a shared wireless channel, and fair scheduling must be employed to provide the proper flow of information in a WSN. In this paper, we develop a channel-adaptive MAC protocol with a traffic-aware dynamic power management algorithm for efficient packet scheduling and queuing in a sensor network, with the time-varying characteristics of the wireless channel also taken into consideration. The proposed protocol calculates a combined weight value based on the channel state and link quality. Transmission is then allowed only for those nodes with weights greater than a minimum quality threshold, and nodes attempting to access the wireless medium with a low weight are allowed to transmit only when their weight becomes high. This results in many poor-quality nodes being deprived of transmission for a considerable amount of time. To avoid buffer overflow and to achieve fairness for the poor-quality nodes, we design a load prediction algorithm. We also design a traffic-aware dynamic power management scheme that minimizes energy consumption by turning off the radio interfaces of all unnecessary nodes that are not included in the routing path. Simulation results show that the proposed protocol achieves higher throughput and fairness besides reducing the delay.
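A minimal sketch of the threshold-based admission rule described above. The coefficient values, names, and the linear combination are illustrative assumptions, since the abstract does not give the exact metric:

```python
# Assumed weighting of the two inputs; the paper's actual metric may differ.
CHANNEL_COEFF = 0.6
LINK_COEFF = 0.4
MIN_QUALITY_THRESHOLD = 0.5

def combined_weight(channel_state, link_quality):
    """Combine normalized channel state and link quality into one weight."""
    return CHANNEL_COEFF * channel_state + LINK_COEFF * link_quality

def may_transmit(channel_state, link_quality):
    """A node contends for the medium only while its weight clears the
    threshold; low-weight nodes defer until conditions improve (or until a
    load-prediction mechanism promotes them to avoid buffer overflow)."""
    return combined_weight(channel_state, link_quality) >= MIN_QUALITY_THRESHOLD

print(may_transmit(0.9, 0.7))  # True: good channel, decent link
print(may_transmit(0.3, 0.4))  # False: the node defers transmission
```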

Relevance: 10.00%

Abstract:

In safety-critical software, failure can have a high price. Such software should be free of errors before it is put into operation. Application of formal methods in the software development life cycle helps to ensure that the software for safety-critical missions is ultra-reliable. The PVS theorem prover, a formal-methods tool, can be used for the formal verification of software written in ADA Language for Flight Software Application (ALFA). This paper describes the modeling of ALFA programs for the PVS theorem prover. An ALFA2PVS translator has been developed which automatically converts software in ALFA to a PVS specification. With this approach the software can be verified formally with respect to underflow/overflow errors and divide-by-zero conditions without actual execution of the code.
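The abstract does not describe the translator's internals. Purely as an illustration of the kind of proof obligations such a tool must emit for divide-by-zero and overflow checks, here is a toy sketch; the AST shapes and the emitted syntax are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Div:
    numerator: str
    denominator: str

@dataclass
class Add:
    left: str
    right: str

INT16_MAX = 32767  # assumed bounded target arithmetic for illustration

def obligations(expr):
    """Emit PVS-style proof obligations for one expression node: every
    division needs a nonzero divisor, every bounded addition a range check."""
    if isinstance(expr, Div):
        return [f"{expr.denominator} /= 0"]
    if isinstance(expr, Add):
        return [f"{expr.left} + {expr.right} <= {INT16_MAX}"]
    return []

for ob in obligations(Div("distance", "elapsed_time")) + obligations(Add("v", "dv")):
    print("OBLIGATION:", ob)
```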


Relevance: 10.00%

Abstract:

With this document, we provide a compilation of in-depth discussions on some of the most current security issues in distributed systems. The six contributions were collected and presented at the 1st Kassel Student Workshop on Security in Distributed Systems (KaSWoSDS’08). We are pleased to present a collection of papers that not only shed light on the theoretical aspects of their topics, but are also accompanied by elaborate practical examples. In Chapter 1, Stephan Opfer discusses viruses, one of the oldest threats to system security. For years there has been an arms race between virus producers and anti-virus software providers, with no end in sight. In Chapter 2, Stefan Triller demonstrates how malicious code can be injected into a target process using a buffer overflow. Websites usually store their data and user information in databases. Like buffer overflows, the possibility of SQL injection attacks targeting such databases is left open by unwary programmers. Stephan Scheuermann gives us a deeper insight into the mechanisms behind such attacks in Chapter 3. Cross-site scripting (XSS) is a method of inserting malicious code into websites viewed by other users; Michael Blumenstein explains this issue in Chapter 4. While code can be injected into other websites via XSS attacks in order to spy on the data of internet users, spoofing subsumes all methods that directly involve taking on a false identity. In Chapter 5, Till Amma shows us different ways this can be done and how it is prevented. Last but not least, cryptographic methods are used to encode confidential data in such a way that even if it falls into the wrong hands, the culprits cannot decode it. Over the centuries, many different ciphers have been developed, applied, and finally broken. Ilhan Glogic sketches this history in Chapter 6.
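As a concrete illustration of the injection mistake discussed in Chapter 3 (this example is ours, not taken from the workshop papers):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "' OR '1'='1"

# Vulnerable: attacker-controlled text is spliced into the statement,
# turning the WHERE clause into a tautology that leaks every row.
unsafe = f"SELECT * FROM users WHERE name = '{attacker_input}'"
print(conn.execute(unsafe).fetchall())   # [('alice', 's3cret')]

# Safe: a parameterized query treats the input as a value, never as SQL.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (attacker_input,)).fetchall())  # []
```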

Relevance: 10.00%

Abstract:

Distributed systems are one of the most vital components of the economy. The most prominent example is probably the internet, a constituent element of our knowledge society. During recent years, the number of novel network types has steadily increased. Amongst others, sensor networks, distributed systems composed of tiny computational devices with scarce resources, have emerged. The further development and heterogeneous connection of such systems imposes new requirements on the software development process. Mobile and wireless networks, for instance, have to organize themselves autonomously and must be able to react to changes in the environment and to failing nodes alike. Researching new approaches for the design of distributed algorithms may lead to methods with which these requirements can be met efficiently. In this thesis, one such method is developed, tested, and discussed with respect to its practical utility. Our new design approach for distributed algorithms is based on Genetic Programming, a member of the family of evolutionary algorithms. Evolutionary algorithms are metaheuristic optimization methods which copy principles from natural evolution. They use a population of solution candidates which they try to refine step by step in order to attain optimal values for predefined objective functions. The synthesis of an algorithm with our approach starts with an analysis step in which the wanted global behavior of the distributed system is specified. From this specification, objective functions are derived which steer a Genetic Programming process where the solution candidates are distributed programs. The objective functions rate how closely these programs approximate the goal behavior in multiple randomized network simulations. The evolutionary process step by step selects the most promising solution candidates and modifies and combines them with mutation and crossover operators. This way, a description of the global behavior of a distributed system is translated automatically to programs which, if executed locally on the nodes of the system, exhibit this behavior. In our work, we test six different ways for representing distributed programs, comprising adaptations and extensions of well-known Genetic Programming methods (SGP, eSGP, and LGP), one bio-inspired approach (Fraglets), and two new program representations called Rule-based Genetic Programming (RBGP, eRBGP) designed by us. We breed programs in these representations for three well-known example problems in distributed systems: election algorithms, the distributed mutual exclusion at a critical section, and the distributed computation of the greatest common divisor of a set of numbers. Synthesizing distributed programs the evolutionary way does not necessarily lead to the envisaged results. In a detailed analysis, we discuss the problematic features which make this form of Genetic Programming particularly hard. The two Rule-based Genetic Programming approaches have been developed especially in order to mitigate these difficulties. In our experiments, at least one of them (eRBGP) turned out to be a very efficient approach and was, in most cases, superior to the other representations.
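As an illustration of the evolutionary loop described above, here is a bare-bones cycle of selection, crossover, and mutation. The string "programs" and the toy fitness function stand in for the distributed programs and randomized network simulations of the thesis:

```python
import random

random.seed(1)
GENES = "abcdefgh"        # stand-in instruction alphabet
TARGET = "cafebabe"       # stand-in for the specified global behavior

def fitness(program):
    """Objective function: how closely the candidate matches the goal
    (in the thesis, averaged over randomized network simulations)."""
    return sum(a == b for a, b in zip(program, TARGET))

def mutate(program):
    i = random.randrange(len(program))
    return program[:i] + random.choice(GENES) + program[i + 1:]

def crossover(p1, p2):
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

population = ["".join(random.choice(GENES) for _ in TARGET) for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]     # selection: keep the most promising
    population = parents + [mutate(crossover(random.choice(parents),
                                             random.choice(parents)))
                            for _ in range(40)]
print(generation, max(population, key=fitness))
```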

Relevance: 10.00%

Abstract:

Excimer lasers are pulsed gas lasers that produce laser emission in the form of line radiation in the UV, depending on the gas mixture. The first discharge-pumped excimer laser was demonstrated by Ischenko in 1977. All commercially available excimer lasers are discharge-pumped systems. To obtain the population inversion necessary to get the laser to oscillate, very strong pumping is required because of the short wavelength. This pump power must be delivered by a pulsed-power module. Thyratrons, low-pressure switching tubes, are the customary switching elements, but their lifetime is very limited. For this reason, semiconductor switches with pulse compression stages have increasingly displaced them in this application since the mid-1990s. In this work, an attempt is made to replace the pulse compression with a directly switching semiconductor stack, thereby reducing the losses and saving the effort required for the pulse compression. In addition, the maximum possible repetition rate can be increased. To calculate the stresses on the components, models that are as simple as possible but still powerful were developed for all components. Since the normally available component data refer to other applications, fundamental measurements in the time domain of the later application had to be made for all components. For the nonlinear inductors, a simple test procedure was developed to determine the losses at very high magnetization rates. These measurements are the basis for the model, which essentially describes a current-dependent inductance. This model was used for the "magnetic assist", which reduces the turn-on losses in the semiconductors. The pulse capacitors were likewise measured, with a procedure developed in this work, close to the later operating parameters. It turned out that the very common Class II ceramic capacitors are not suitable for this application. In this work, Class I high-voltage multilayer capacitors, which show significantly better behavior, were therefore used as the storage bank. The semiconductor devices employed were also measured in a test procedure close to the later operating parameters. It turned out that only modern power MOSFETs are suitable for this use. For the diodes, it emerged that only silicon carbide (SiC) Schottky diodes can be used in this application. In principle, various topologies are possible for the application. On closer examination, however, it turns out that only the C-C transfer arrangement can deliver the desired results. This topology was realized. It consists essentially of a storage bank that is charged by the power supply; from this bank the energy is transferred to the laser head through the switch. Because of the high voltages and currents, 24 switching elements must be connected in series, with 4 in parallel at each position. The switches are driven through highly insulating gate transformers. It was found that carefully designed dynamic and static voltage grading is necessary for safe operation. In this work, operation with a real laser chamber as the load was achieved up to 6 kHz, limited only by the maximum possible repetition rate of the laser chamber.
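The series/parallel count follows from simple ratings arithmetic. A sketch with assumed figures (the bus voltage, device ratings, and derating factor below are illustrative; only the 24 x 4 outcome mirrors the text):

```python
import math

bus_voltage = 20e3       # V, assumed charging voltage of the storage bank
peak_current = 4e3       # A, assumed peak transfer current into the laser head
v_ds_rating = 1000       # V, assumed MOSFET drain-source rating
i_pulse_rating = 1200    # A, assumed pulsed drain current rating
derating = 0.85          # headroom for unequal static/dynamic voltage sharing

n_series = math.ceil(bus_voltage / (v_ds_rating * derating))
n_parallel = math.ceil(peak_current / (i_pulse_rating * derating))
print(n_series, n_parallel)  # -> 24 4 with these assumed ratings
```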

Relevance: 10.00%

Abstract:

Summary - Cooking banana is one of the most important crops in Uganda; it is a staple food and a source of household income in rural areas. The most common cooking banana is locally called matooke, a Musa sp. triploid acuminata genome group (AAA-EAHB). It is perishable and traded in fresh form, leading to very high postharvest losses (22-45%). This is attributed to a non-uniform level of harvest maturity, poor handling, bulk transportation, and a lack of value-addition/processing technologies, which are currently the main challenges for trade and export and for the diversified utilization of matooke. Drying is one of the oldest technologies employed in the processing of agricultural produce. A lot of research has been carried out on the drying of fruits and vegetables, but little information is available on matooke. Drying matooke and milling it to flour extends its shelf-life and is an important means of overcoming the above challenges. Raw matooke flour is a generic flour developed to improve the shelf stability of the fruit and to find alternative uses. It is rich in starch (80-85% db) and consequently has high potential as a calorie resource base. It possesses good properties for both food and non-food industrial use. Some effort has been made to commercialize the processing of matooke, but there is still limited information on its processing into flour. It was imperative to carry out an in-depth study to bridge the following gaps: a lack of accurate information on the maturity window within which matooke for processing into flour can be harvested, leading to non-uniform quality of matooke flour; the absence of moisture sorption isotherms for matooke, from which the minimum equilibrium moisture content in relation to temperature and relative humidity is obtainable, below which dry matooke would be microbiologically shelf-stable; and a lack of information on the drying behavior of matooke and on standardized processing parameters in relation to the physicochemical properties of the flour. The main objective of the study was to establish the optimum harvest maturity window and to optimize the processing parameters for obtaining a standardized, microbiologically shelf-stable matooke flour with good starch quality attributes. This research was designed to: i) establish the optimum harvest maturity window within which matooke can be harvested to produce a consistent quality of matooke flour, ii) establish the sorption isotherms for matooke, iii) establish the effect of process parameters on the drying characteristics of matooke, iv) optimize the drying process parameters for matooke, v) validate the models of maturity and optimum process parameters, and vi) standardize the process parameters for commercial processing of matooke. Samples were obtained from a banana plantation at the Presidential Initiative on Banana Industrial Development (PIBID) Technology Business Incubation Center (TBI) at Nyaruzunga, Bushenyi, in Western Uganda. A completely randomized design (CRD) was employed in selecting the banana stools from which the samples for the experiments were picked. The cultivar Mbwazirume, which is soft-cooking and commonly grown in Bushenyi, was selected for the study. The static gravimetric method recommended by the COST 90 project (Wolf et al., 1985) was used for the determination of the moisture sorption isotherms. A research dryer was developed for this work. All experiments were carried out in the laboratories at TBI. The physiological maturity of matooke cv. Mbwazirume at Bushenyi is 21 weeks.
The optimum harvest maturity window for commercial processing of matooke flour (Raw Tooke Flour, RTF) at Bushenyi is 15-21 weeks. The finger-weight model is recommended for farmers to estimate harvest maturity for matooke, and the combined model of finger weight and pulp-to-peel ratio is recommended for commercial processors. The matooke isotherms exhibited type II curve behavior, which is characteristic of foodstuffs. The GAB model best described all the adsorption and desorption moisture isotherms. For commercial processing of matooke, in order to obtain a microbiologically shelf-stable dry product, it is recommended to dry it to a moisture content at or below 10% (wb). The hysteresis phenomenon was exhibited by the moisture sorption isotherms for matooke. The isosteric heat of sorption for both the adsorption and desorption isotherms increased with decreasing moisture content. The total isosteric heat of sorption for matooke ranged from 4,586 to 2,386 kJ/kg for the adsorption isotherm and from 18,194 to 2,391 kJ/kg for the desorption isotherm, for equilibrium moisture contents from 0.3 to 0.01 (db) respectively. The minimum energy required for drying matooke from 80 to 10% (wb) is 8,124 kJ/kg of water removed, implying that the minimum energy required for drying 1 kg of fresh matooke from 80 to 10% (wb) is 5,793 kJ. The drying of matooke takes place in three stages: the warm-up period and two falling-rate periods. The drying rate constant varied with the processing parameters, and the effective diffusivity ranged from 1.5E-10 to 8.27E-10 m²/s. The activation energy (Ea) for matooke was 16.3 kJ/mol (1,605 kJ/kg). Comparing the activation energy with the net isosteric heat of sorption for the desorption isotherm (qst = 1,297.62 kJ/kg at 0.1 kg water/kg dry matter) indicated that Ea is higher than qst, suggesting that moisture molecules travel in liquid form within the matooke slices. The total color difference (ΔE*) between the fresh and dry samples was lowest for a slice thickness of 7 mm, followed by an air velocity of 6 m/s and a drying air temperature of 70˚C. A drying system controlled by a set product surface temperature reduced the drying time by 50% compared to one controlled by a set drying air temperature. The processing parameters did not have a significant effect on the physicochemical and quality attributes, suggesting that any drying air temperature can be used in the initial stages of drying as long as the product temperature does not exceed the gelatinization temperature of matooke (72˚C). The optimum processing parameters for single-layer drying of matooke are: thickness 3 mm, air temperature 70˚C, dew point temperature 18˚C, and air velocity 6 m/s in overflow mode. From a practical point of view, it is recommended that commercial processing of matooke employ multi-layer drying with a loading capacity of at most 7 kg/m², thickness 3 mm, air temperature 70˚C, dew point temperature 18˚C, and air velocity 6 m/s in overflow mode.
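The GAB model named above relates equilibrium moisture content to water activity. A minimal sketch with placeholder parameters; the constants actually fitted in the thesis are not reproduced here:

```python
def gab_moisture(aw, m0, c, k):
    """GAB (Guggenheim-Anderson-de Boer) equilibrium moisture content (db)
    at water activity aw. m0: monolayer moisture content; c, k: GAB energy
    constants."""
    return (m0 * c * k * aw) / ((1 - k * aw) * (1 - k * aw + c * k * aw))

# Illustrative parameters; a type II curve rises slowly, then steeply near aw = 1.
for aw in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"aw = {aw:.1f} -> M = {gab_moisture(aw, m0=0.08, c=10.0, k=0.9):.3f} (db)")
```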

Relevance: 10.00%

Abstract:

All intelligence relies on search --- for example, the search for an intelligent agent's next action. Search is only likely to succeed in resource-bounded agents if they have already been biased towards finding the right answer. In artificial agents, the primary source of bias is engineering. This dissertation describes an approach, Behavior-Oriented Design (BOD), for engineering complex agents. A complex agent is one that must arbitrate between potentially conflicting goals or behaviors. Behavior-oriented design builds on work in behavior-based and hybrid architectures for agents, and on the object-oriented approach to software engineering. The primary contributions of this dissertation are:

1. The BOD architecture: a modular architecture with each module providing specialized representations to facilitate learning. This includes one pre-specified module and representation for action selection or behavior arbitration. The specialized representation underlying BOD action selection is Parallel-rooted, Ordered, Slip-stack Hierarchical (POSH) reactive plans.

2. The BOD development process: an iterative process that alternately scales the agent's capabilities and then optimizes the agent for simplicity, exploiting tradeoffs between the component representations. This ongoing process for controlling complexity not only provides bias for the behaving agent, but also facilitates its maintenance and extendibility.

The secondary contributions of this dissertation include two implementations of POSH action selection, a procedure for identifying useful idioms in agent architectures and using them to distribute knowledge across agent paradigms, several examples of applying BOD idioms to established architectures, an analysis and comparison of the attributes and design trends of a large number of agent architectures, a comparison of biological (particularly mammalian) intelligence to artificial agent architectures, a novel model of primate transitive inference, and many other examples of BOD agents and BOD development.
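As a rough illustration of prioritized, reactive action selection in the spirit of POSH plans (greatly simplified; the element names and structure are invented for the example, not taken from the dissertation):

```python
def see_food(state): return state["food_visible"]
def hungry(state): return state["energy"] < 5

# Drive collection: the highest-priority element whose trigger fires wins.
DRIVES = [
    (lambda s: hungry(s) and see_food(s), "eat"),
    (lambda s: hungry(s), "search-for-food"),
    (lambda s: True, "explore"),            # fallback behavior
]

def select_action(state):
    """Scan drives in priority order; the first triggered element is released."""
    for trigger, action in DRIVES:
        if trigger(state):
            return action

print(select_action({"food_visible": True, "energy": 3}))   # eat
print(select_action({"food_visible": False, "energy": 3}))  # search-for-food
print(select_action({"food_visible": False, "energy": 9}))  # explore
```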

Relevance: 10.00%

Abstract:

Memory errors are a common cause of incorrect software execution and security vulnerabilities. We have developed two new techniques that help software continue to execute successfully through memory errors: failure-oblivious computing and boundless memory blocks. The foundation of both techniques is a compiler that generates code that checks accesses via pointers to detect out-of-bounds accesses. Instead of terminating or throwing an exception, the generated code takes another action that keeps the program executing without memory corruption. Failure-oblivious code simply discards invalid writes and manufactures values to return for invalid reads, enabling the program to continue its normal execution path. Code that implements boundless memory blocks stores invalid writes away in a hash table to return as the values for corresponding out-of-bounds reads. The net effect is to (conceptually) give each allocated memory block unbounded size and to eliminate out-of-bounds accesses as a programming error. We have implemented both techniques and acquired several widely used open-source servers (Apache, Sendmail, Pine, Mutt, and Midnight Commander). With standard compilers, all of these servers are vulnerable to buffer overflow attacks, as documented at security tracking web sites. Both failure-oblivious computing and boundless memory blocks eliminate these security vulnerabilities (as well as other memory errors). Our results show that our compiler enables the servers to execute successfully through buffer overflow attacks and to continue to correctly service user requests without security vulnerabilities.
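The boundless-memory-block semantics can be modeled conceptually in a few lines. The real technique is a C compiler transformation, so this Python class is only an illustrative model of the behavior, not the authors' implementation:

```python
class BoundlessBlock:
    """Conceptual model: out-of-bounds writes are parked in a hash table and
    returned by matching out-of-bounds reads; in-bounds accesses hit the
    real buffer."""

    def __init__(self, size, fill=0):
        self.data = [fill] * size
        self.overflow = {}          # hash table for out-of-bounds writes

    def write(self, index, value):
        if 0 <= index < len(self.data):
            self.data[index] = value
        else:
            self.overflow[index] = value   # stored instead of corrupting memory

    def read(self, index):
        if 0 <= index < len(self.data):
            return self.data[index]
        # Return the stored value, or manufacture one (failure-oblivious style).
        return self.overflow.get(index, 0)

block = BoundlessBlock(4)
block.write(10, 99)        # would corrupt adjacent memory in C; here it is contained
print(block.read(10))      # 99: the write is observable to a matching read
print(block.read(7))       # 0: manufactured value for an unwritten index
```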