935 results for test data generation
Abstract:
This master's thesis reports the optimization and evaluation of a new version of the PAMPA test (Parallel Artificial Membrane Permeability Assay) called Neo-PAMPA. This test, which allows the prediction of intestinal drug absorption, relies on a model membrane of the intestinal wall composed of a lipid bilayer deposited on a polydopamine cushion covering a porous filter. In this project we focused on developing an artificial membrane that would be more representative of the human intestinal wall. Following a comparative study of the properties of eight drugs and of the permeability coefficients obtained, we determined that polycarbonate filters were the best choice of solid support for the membrane. We also verified the deposition of the polydopamine cushion, which gives the lipid bilayer its fluid character. The permeability test results showed that the polymer cushion does not obstruct the filter pores after a 4 h deposition. We then studied the deposition of the lipid bilayer on the polydopamine-coated filter. To do so, two liposome preparation methods and several liposome sizes were tested, and the phospholipid composition was also subject to several changes. All of this optimization work led to liposomes prepared by the "lipid film" method from a mixture of dioleoylphosphatidylcholine (DOPC) and cholesterol. A final optimization step, the deposition of the bilayer itself, remains to be improved. Finally, the standard Caco-2 test, which evaluates drug permeability across a monolayer of human colon cancer cells, was successfully implemented in order to compare permeability data with a reference test.
Abstract:
Academic dishonesty during examinations raises important issues regarding the integrity of assessments. As ICT is increasingly present in test administration, it is important with this mode of data collection to ensure a level of security equal to or even greater than that of the traditional paper-and-pencil mode. Several studies exist on the use of ICT in assessment, but few address security measures when ICT is used. For this master's thesis, thirteen Quebec organizations were interviewed: six that used ICT in test administration, five that used paper and pencil but wished to adopt ICT, and two that used paper and pencil and did not wish to adopt ICT. The organizations include educational institutions (primary, secondary, college, university), private companies, government or municipal bodies, and professional orders. Semi-structured interviews and a qualitative analysis based on the presence or absence of various characteristics made it possible to document the security measures related to data collection for assessment using ICT. These measures were compared with those used with paper-and-pencil data collection in order to see how they vary when ICT is used. The results show that using ICT in test administration makes test preparation more complex and adds steps needed to ensure an adequate level of security. However, it also enables new functions regarding question types, multimedia integration, adaptive questions and random test generation, which counter certain forms of academic dishonesty that were already present with paper-and-pencil administration and that were previously difficult to act upon. Nevertheless, using ICT in test administration can also introduce new opportunities for academic dishonesty. If these are properly taken into account, however, the use of ICT allows a level of test security higher than when data are collected with the traditional paper-and-pencil method.
Abstract:
Modular construction is an emerging strategy for making materials that are ordered at the atomic scale. It consists of the programmed association of molecular subunits through judiciously selected reactive sites. The application of this strategy has already produced materials with remarkable properties, notably covalent organic frameworks, in which carbon atoms and other light elements are covalently bonded. Although materials assembled through non-covalent interactions can be prepared in this way as macroscopic single crystals, this was not possible for covalent organic frameworks. To address this gap, we chose to study reversible polymerization reactions that proceed by an addition mechanism. The starting hypothesis of this thesis is that such a process emulates the classical crystallization phenomenon, which is governed by non-covalent interactions, and favors the formation of large single crystals. To test the validity of this hypothesis, we chose to study the polymerization of aromatic polynitroso compounds, since the dimerization of nitrosoarenes is reversible and proceeds by addition. First, we thoroughly reviewed the literature on the dimerization of nitrosoarenes. From the data gathered, we then designed a series of polynitroso compounds with the potential to form two- and three-dimensional covalent organic frameworks. The thermodynamic parameters of their polymerization were estimated by studying model mononitroso compounds. Third, we synthesized the various polynitroso compounds targeted by our study. To do so, we had to develop a new methodology for the synthesis of poly(N-arylhydroxylamines), the direct precursors of polynitroso compounds. Fourth, we studied the polymerization of the polynitroso compounds. Despite practical difficulties caused by the spontaneous polymerization of these compounds, we were able to identify conditions favorable to their polymerization into highly crystalline covalent organic frameworks. Several new three-dimensional covalent networks were thus produced as single crystals with dimensions ranging from 30 µm to 500 µm, confirming the validity of our starting hypothesis. It was consequently possible to solve the structure of these crystals by single-crystal X-ray diffraction, which had never before been possible for this type of material. These crystals are remarkably uniform, and the polymers that compose them have extremely high molecular weights (10^14-10^17 g/mol). However, the polymerization of most of the polynitroso compounds studied instead led to amorphous solids or to crystalline solids consisting of the monomeric form of these compounds. Other model nitroso compounds were then prepared to explain this behavior, and hypotheses were put forward from the data gathered. Finally, the structures of several polynitroso compounds that crystallized in a monomeric form were analyzed in detail by X-ray diffraction.
Our strategy, which consists of using monomers capable of polymerizing spontaneously through a reversible addition process, therefore appears promising for obtaining new single-crystalline covalent networks from polynitroso compounds or other monomers of a similar nature. Moreover, the results presented in this thesis establish a link between polymer science and supramolecular chemistry by illustrating how ordered structures, whether covalent or non-covalent, can both be built in a predictable way.
Abstract:
The berth allocation problem (BAP) is one of the main decision problems at port terminals and has been widely studied. In previous research, the BAP was reformulated as a generalized set partitioning problem (GSPP) and solved using a standard solver. The assignments (columns) were generated a priori in a static manner and provided as input to the optimization model. This method can provide an optimal solution for medium-sized instances. However, its main drawback is the explosion in the number of assignments as the problem size grows, which causes the optimization solver to run out of memory. In this master's thesis, we examine the limits of the GSPP reformulation. We present a column generation framework in which assignments are generated dynamically in order to solve large instances of the BAP, and we propose a column generation algorithm that can easily be adapted to solve any variant of the BAP based on different spatial and temporal attributes. We tested our method on an allocation model in which berths are considered discrete, vessel arrivals are dynamic, and handling times depend on the berths where the vessels are moored. Experimental results on a set of artificial instances indicate that the proposed method provides an optimal or near-optimal solution, even for very large problems, in only a few minutes.
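As a rough illustration of the dynamic column generation scheme described above (a generic sketch with hypothetical `solve_rmp` and `price_columns` routines, not the authors' implementation), the driver loop alternates between solving a restricted master problem and pricing new berth-time assignments:

```python
# Minimal, generic column generation loop (hypothetical interfaces).
# `solve_rmp(columns)` returns (solution, duals) for the restricted master problem;
# `price_columns(duals)` returns new columns (e.g., berth-time assignments) with
# negative reduced cost, or an empty list when none exist.

def column_generation(initial_columns, solve_rmp, price_columns, max_iters=1000):
    columns = list(initial_columns)
    for _ in range(max_iters):
        solution, duals = solve_rmp(columns)   # solve the restricted master problem
        new_cols = price_columns(duals)        # pricing subproblem under current duals
        if not new_cols:                       # no improving column: LP relaxation optimal
            return solution, columns
        columns.extend(new_cols)               # enlarge the restricted master problem
    return solution, columns
```

In the BAP setting described above, the pricing step would look for vessel-to-berth assignments with negative reduced cost under the current dual values; the details of both subproblems are left abstract here.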
Abstract:
The detection and characterization of engineered nanoparticles (ENPs) is one of the first steps in controlling and reducing their potential risks to human health and the environment. Various air sampling systems exist to assess exposure to ENPs. However, they measure neither the potential risk of this exposure to human health nor the cellular mechanisms that may be responsible for it. Our research objectives are (1) to evaluate the effects of different types of nanoparticles on human lung cells and (2) to identify new intracellular mechanisms activated upon exposure to various types of ENPs. Methodology: the A549 cell line was used. Three types of ENPs were studied (at different concentrations and exposure times): anatase titanium dioxide nanoparticles (TiO2), single-walled carbon nanotubes (SWCNTs) and carbon black nanoparticles (CB). Cell viability was measured by the MTS assay, the PrestoBlue assay and the Trypan blue exclusion assay (SWCNTs only). Oxidative stress was assessed by measuring reactive oxygen species (ROS) using the DCFH-DA assay. Activation of an antioxidant response was determined by measuring the reduced (GSH) and oxidized (GSSG) forms of glutathione, as well as the GSH/GSSG ratio (SWCNTs and TiO2 only). Results: none of the three nanoparticles appears to be toxic to A549 cells, since only a significant but minimal decrease in cell viability was observed. However, they induce a time- and concentration-dependent increase in intracellular ROS content. No change in GSH or GSSG concentrations was observed. In conclusion, our data indicate that measuring viability is not a sufficient criterion for concluding that ENPs are toxic. ROS production is an interesting criterion; however, the activation of antioxidant systems will need to be demonstrated to explain the absence of cell death following exposure to ENPs.
Abstract:
Epilepsy is a syndrome of episodic brain dysfunction characterized by recurrent, unpredictable, spontaneous seizures. Cerebellar dysfunction is a recognized complication of temporal lobe epilepsy and is associated with seizure generation, motor deficits and memory impairment. Serotonin is known to exert a modulatory action on cerebellar function through 5-HT2C receptors, which are novel targets for developing anticonvulsant drugs. In the present study, we investigated the changes in 5-HT2C receptor binding and gene expression in the cerebellum of control, epileptic and Bacopa monnieri-treated epileptic rats. There was a significant downregulation of 5-HT content (p < 0.001), 5-HT2C gene expression (p < 0.001) and 5-HT2C receptor binding (p < 0.001), with an increased affinity (p < 0.001). Carbamazepine and B. monnieri treatment of epileptic rats reversed the downregulated 5-HT content (p < 0.01), 5-HT2C receptor binding (p < 0.001) and gene expression (p < 0.01) to near-control levels. The rotarod test also confirmed the motor dysfunction and its recovery with B. monnieri treatment. These data suggest a neuroprotective role of B. monnieri through upregulation of 5-HT2C receptors in epileptic rats, which has clinical significance in the management of epilepsy.
Abstract:
Neural networks have emerged as the topic of the day. The spectrum of their applications is wide, ranging from ECG noise filtering to seismic data analysis and from elementary particle detection to electronic music composition. The focal point of the proposed work is the application of a massively parallel connectionist network model to the detection of a sonar target. This task is segmented into: (i) generation of training patterns from sea noise containing the radiated noise of a target, for teaching the network; (ii) selection of a suitable network topology and learning algorithm; and (iii) training of the network and its subsequent testing, in which the network detects, in unknown patterns applied to it, the presence of the features it has already learned. A three-layer perceptron using backpropagation learning is initially subjected to recursive training with example patterns (derived from sea ambient noise with and without the radiated noise of a target). On every presentation, the error in the output of the network is propagated back, and the weights and the bias associated with each neuron are modified in proportion to this error measure. During this iterative process, the network converges and extracts the target features, which become encoded in its generalized weights and biases. In every unknown pattern that the converged network subsequently confronts, it searches for the features already learned and outputs an indication of their presence or absence. This capability for target detection is exhibited by the response of the network to various test patterns presented to it. Three network topologies are tried with two variants of backpropagation learning, and the performance of each combination is subsequently graded.
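As a minimal, self-contained sketch of a three-layer perceptron trained with backpropagation of the kind described above (toy random data stand in for the sea-noise training patterns; none of this is the thesis code):

```python
import numpy as np

# Toy data: 64-dimensional patterns, label 1 = "target present", 0 = "absent".
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = (X[:, :8].sum(axis=1) > 0).astype(float).reshape(-1, 1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One hidden layer of 16 neurons, one output neuron.
W1 = rng.normal(scale=0.1, size=(64, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 1));  b2 = np.zeros(1)
lr = 0.5

for epoch in range(500):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error and adjust weights and biases
    # in proportion to the error measure.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)

print("training accuracy:", ((out > 0.5) == y).mean())
```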
Abstract:
One major component of power system operation is generation scheduling. The objective of this work is to develop efficient control strategies for power scheduling problems through Reinforcement Learning approaches. The three important active power scheduling problems are Unit Commitment, Economic Dispatch and Automatic Generation Control. Numerical solution methods proposed for power scheduling are insufficient for handling large and complex systems. Soft computing methods like Simulated Annealing and Evolutionary Programming are efficient in handling complex cost functions, but they have limitations in handling the stochastic data present in a practical system; moreover, the learning steps have to be repeated for each load demand, which increases the computation time. Reinforcement Learning (RL) is a method of learning through interactions with an environment. The main advantage of this approach is that it does not require a precise mathematical formulation: it can learn either by interacting with the environment or by interacting with a simulation model. Several optimization and control problems have been solved through Reinforcement Learning, but its applications in the field of power systems have been few. The objective here is to introduce and extend Reinforcement Learning approaches for active power scheduling problems in an implementable manner. The main objectives can be enumerated as: (i) evolve Reinforcement Learning based solutions to the Unit Commitment problem; (ii) find suitable solution strategies through Reinforcement Learning for Economic Dispatch; (iii) extend the Reinforcement Learning solution to Automatic Generation Control with a different perspective; and (iv) check the suitability of the scheduling solutions on an existing power system. The first part of the thesis is concerned with the Reinforcement Learning approach to the Unit Commitment problem. Unit Commitment is formulated as a multi-stage decision process, and a Q-learning solution is developed to obtain the optimum commitment schedule. A method of state aggregation is used to formulate an efficient solution considering the minimum up-time / down-time constraints. The performance of the algorithms is evaluated for different systems and compared with other stochastic methods such as Genetic Algorithms. The second stage of the work is concerned with solving the Economic Dispatch problem. A simple and straightforward decision-making strategy is first proposed using a Learning Automata algorithm. Then, to solve the scheduling task for systems with a large number of generating units, the problem is formulated as a multi-stage decision-making task, and the solution obtained is extended to incorporate the transmission losses in the system. To make the Reinforcement Learning solution more efficient and to handle a continuous state space, a function approximation strategy is proposed. The performance of the developed algorithms is tested on several standard test cases, and the proposed method is compared with other recent methods such as the Partition Approach Algorithm and Simulated Annealing. As the final step in implementing the active power control loops of a power system, Automatic Generation Control is also considered. Reinforcement Learning has already been applied to the Automatic Generation Control loop; here the RL solution is extended to adopt a common frequency for all interconnected areas, more similar to practical systems. The performance of the RL controller is also compared with that of the conventional integral controller. In order to prove the suitability of the proposed methods for practical systems, the second plant of Neyveli Thermal Power Station (NTPS II) is taken as a case study. The performance of the Reinforcement Learning solution is found to be better than that of the other existing methods, which is a promising step towards RL-based control schemes for the practical power industry. Reinforcement Learning is thus applied to solve scheduling problems in the power industry and is found to give satisfactory performance. The proposed solution provides scope for greater profit, as the economic schedule is obtained instantaneously. Since the Reinforcement Learning method can use the stochastic cost data obtained from time to time from a plant, it gives an implementable method. As a further step, with suitable methods to interface with online data, economic scheduling can be achieved instantaneously in a generation control center. Power scheduling of systems with different sources, such as hydro and thermal, can also be investigated, and Reinforcement Learning solutions can be developed for them.
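The abstract gives no implementation details; purely as a hypothetical illustration of the kind of tabular Q-learning update used for a multi-stage commitment decision, a minimal sketch might look as follows (the environment interface `step` and all parameter values are assumptions, not the thesis code):

```python
import random
from collections import defaultdict

# Toy tabular Q-learning for a discretized multi-stage decision problem
# (e.g., committing/decommitting units hour by hour). `step(state, action)`
# is a hypothetical environment returning (next_state, cost, done).

def q_learning(step, initial_state, actions, episodes=5000,
               alpha=0.1, gamma=0.95, epsilon=0.1):
    Q = defaultdict(float)                      # Q[(state, action)] -> estimated cost-to-go
    for _ in range(episodes):
        state, done = initial_state, False
        while not done:
            # epsilon-greedy action selection (costs are minimized)
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = min(actions, key=lambda a: Q[(state, a)])
            next_state, cost, done = step(state, action)
            best_next = 0.0 if done else min(Q[(next_state, a)] for a in actions)
            # Q-learning update toward observed cost plus discounted best successor value
            Q[(state, action)] += alpha * (cost + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```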
Abstract:
Wind energy has emerged as a major sustainable source of energy. The efficiency of wind power generation by wind mills has improved considerably during the last three decades, yet there is still further scope for maximising the conversion of wind energy into mechanical energy. In this context, wind turbine rotor dynamics is of great significance. The present work aims at a comprehensive study of Horizontal Axis Wind Turbine (HAWT) aerodynamics by numerically solving the fluid dynamic equations with a finite-volume Navier-Stokes CFD solver. As a more general goal, the study aims at demonstrating the capabilities of modern numerical techniques for the complex fluid dynamic problems of HAWTs; the main purpose is hence to better exploit the physics of power extraction by wind turbines. This research demonstrates the potential of an incompressible Navier-Stokes CFD method for the aerodynamic power performance analysis of horizontal axis wind turbines. The National Renewable Energy Laboratory, USA (NREL, Technical Report NREL/CP-500-28589) had carried out experimental work aimed at real-time performance prediction of a horizontal axis wind turbine. In addition to a comparison between the results reported by NREL and the CFD simulations, comparisons are made for the local flow angle at several stations ahead of the wind turbine blades. The comparison shows that fairly good predictions can be made for pressure distribution and torque. Subsequently, the wind-field effects on the blade aerodynamics, as well as the blade/tower interaction, were investigated. The selected case corresponded to a 12.5 m/s upwind HAWT at zero degrees of yaw and a rotational speed of 25 rpm. The results obtained suggest that the present method can cope well with the flows encountered around wind turbines. The aerodynamic performance of the turbine and the flow details near and off the turbine blades and tower can be analysed using these results. The aerodynamic performance of airfoils differs from one to another; it mainly depends on the coefficient of performance, coefficient of lift, coefficient of drag, fluid velocity and angle of attack. This study shows that the velocity is not constant for all angles of attack of different airfoils. The performance parameters are calculated analytically and are compared with standardized performance tests. For different angles of attack, the stall velocity is determined for the better performance of a system with respect to velocity. The research also addresses the effect of the surface roughness factor on the blade surface at various sections. The numerical results were found to be in agreement with the experimental data. A relative advantage of the theoretical aerofoil design method is that it allows many different concepts to be explored economically; such efforts are generally impractical in wind tunnels because of time and money constraints. Thus, the need for a theoretical aerofoil design method is threefold: first, for the design of aerofoils that fall outside the range of applicability of existing catalogs; second, for the design of aerofoils that more exactly match the requirements of the intended application; and third, for the economic exploration of many aerofoil concepts. The results obtained for the different aerofoils mainly depend on angle of attack and velocity, and the velocity is not constant for all angles of attack. The vortex generator technique was meticulously studied, with the formulation of a specification for right-angle-shaped vortex generators (VGs). The results were validated against the primary analysis phase and were found to be in good agreement with the power curve. The introduction of correctly sized VGs at appropriate locations over the blades of the selected HAWT was found to increase the power generation by about 4%.
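For reference, the textbook relation behind the coefficient of performance and power figures discussed above (a standard actuator-disc formula, not reproduced from the thesis) is:

```latex
% Power extracted by a HAWT rotor:
%   \rho = air density, A = rotor swept area, v = wind speed,
%   C_p = coefficient of performance (bounded by the Betz limit 16/27),
%   \lambda = tip-speed ratio, \beta = blade pitch angle,
%   \omega = rotor angular speed, R = rotor radius.
P = \tfrac{1}{2}\,\rho\,A\,C_p(\lambda,\beta)\,v^{3},
\qquad
\lambda = \frac{\omega R}{v}
```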
Abstract:
Bank switching in embedded processors with a partitioned memory architecture results in both code size and run-time overhead. This work presents an algorithm, and its application, that assists the compiler in eliminating the redundant bank-switching code it introduces and in deciding the optimum data allocation to banked memory. A relation matrix formed from the memory bank state transition corresponding to each bank selection instruction is used for the detection of redundant code. Data allocation to memory is done by considering all possible permutations of memory banks and combinations of data. The compiler output corresponding to each data mapping scheme is subjected to a static machine-code analysis, which identifies the one with the minimum number of bank-switching instructions. Even though the method is compiler independent, the algorithm utilizes certain architectural features of the target processor. A prototype based on PIC 16F87X microcontrollers is described. The method scales well to larger numbers of memory banks and to other architectures, so that high-performance compilers can integrate this technique for efficient code generation. The technique is illustrated with an example.
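As a toy illustration of the idea of removing redundant bank selection instructions (a much-simplified linear scan; the work itself uses a relation matrix over bank state transitions and an exhaustive data-mapping analysis, which is not reproduced here, and the instruction format is hypothetical):

```python
# Drop bank-select instructions that re-select the bank already active,
# i.e. redundant bank switching code, in a linear instruction stream.

def eliminate_redundant_bank_switches(instructions):
    """instructions: list of (opcode, operand) tuples; ('BANKSEL', n) selects bank n."""
    optimized, current_bank = [], None
    for op, arg in instructions:
        if op == "BANKSEL":
            if arg == current_bank:        # bank already selected -> redundant switch
                continue
            current_bank = arg
        optimized.append((op, arg))
    return optimized

program = [("BANKSEL", 0), ("MOVWF", "x"), ("BANKSEL", 0),  # second BANKSEL 0 is redundant
           ("ADDWF", "y"), ("BANKSEL", 1), ("MOVWF", "z")]
print(eliminate_redundant_bank_switches(program))
```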
Abstract:
Diabetes mellitus is a heterogeneous metabolic disorder characterized by hyperglycemia, with disturbances in carbohydrate, protein and lipid metabolism resulting from defects in insulin secretion, insulin action or both. Currently there are 387 million people with diabetes worldwide, and the disease is expected to affect 592 million people by 2035. Insulin resistance in peripheral tissues and pancreatic beta cell dysfunction are the major challenges in the pathophysiology of diabetes. Diabetic secondary complications (such as liver cirrhosis, retinopathy, and microvascular and macrovascular complications), which arise from persistent hyperglycemia and dyslipidemia, can be disabling or even life threatening. Current medications are effective for the control and management of hyperglycemia, but undesirable effects, inefficiency against secondary complications and high cost remain serious issues in the present prognosis of this disorder. Hence the search for more effective and safer therapeutic agents of natural origin is in high demand and attracts attention in present drug discovery research. The data available from Ayurveda on various medicinal plants for the treatment of diabetes can efficiently yield potential new leads as antidiabetic agents. For wider acceptability and popularity of the herbal remedies available in Ayurveda, scientific validation through elucidation of the mechanism of action is essential, and modern biological techniques are now available to elucidate the biochemical basis of the effectiveness of these medicinal plants. With this in mind, the research programme of this thesis was planned to evaluate the molecular mechanism responsible for the antidiabetic property of Symplocos cochinchinensis, the main ingredient of Nishakathakadi Kashayam, a well-known Ayurvedic antidiabetic preparation. A general introduction to diabetes, its pathophysiology, secondary complications and current treatment options, and innovative solutions based on phytomedicine is given in Chapter 1. The effect of Symplocos cochinchinensis (SC) on various in vitro biochemical targets relevant to diabetes is depicted in Chapter 2, including the preparation of the plant extract. Since diabetes is a multifactorial disease, the ethanolic extract of the bark of SC (SCE) and its fractions (hexane, dichloromethane, ethyl acetate and 90% ethanol) were evaluated by in vitro methods against multiple targets such as control of postprandial hyperglycemia, insulin resistance, oxidative stress, pancreatic beta cell proliferation, inhibition of protein glycation, protein tyrosine phosphatase-1B (PTP-1B) and dipeptidyl peptidase-IV (DPP-IV). Among the extracts, SCE exhibited comparatively better activity, including alpha-glucosidase inhibition, insulin-dependent glucose uptake (3-fold increase) in L6 myotubes, pancreatic beta cell regeneration in RIN-m5F cells, reduced triglyceride accumulation in 3T3-L1 cells, and protection from hyperglycemia-induced generation of reactive oxygen species in HepG2 cells, with moderate antiglycation and PTP-1B inhibition. Chemical characterization by HPLC revealed the superiority of SCE over the other extracts due to the presence of bioactives (beta-sitosterol, phloretin 2'-glucoside, oleanolic acid) in addition to minerals like magnesium, calcium, potassium, sodium, zinc and manganese. SCE was therefore subjected to an oral sucrose tolerance test (OGTT) to evaluate its antihyperglycemic property in mild diabetic and diabetic animal models.
SCE showed significant antihyperglycemic activity in in vivo diabetic models. Chapter 3 highlights the beneficial effects of the hydroethanol extract of Symplocos cochinchinensis (SCE) against hyperglycemia-associated secondary complications in a streptozotocin (60 mg/kg body weight) induced diabetic rat model. Proper sanction had been obtained for all the animal experiments from the CSIR-CDRI institutional animal ethics committee. The experimental groups consisted of normal control (NC), N + SCE 500 mg/kg bwd, diabetic control (DC), D + metformin 100 mg/kg bwd, D + SCE 250 and D + SCE 500. SCE and metformin were administered daily for 21 days and the animals were sacrificed on day 22. Oral glucose tolerance test, plasma insulin, % HbA1c, urea, creatinine, aspartate aminotransferase (AST), alanine aminotransferase (ALT), albumin, total protein etc. were analysed, and aldose reductase (AR) activity in the eye lens was also checked. On day 21, DC rats showed a significantly abnormal glucose response, HOMA-IR and % HbA1c, decreased activity of antioxidant enzymes and GSH, elevated AR activity, and increased hepatic and renal oxidative stress markers compared to NC. DC rats also exhibited increased levels of plasma urea and creatinine. Treatment with SCE protected against the deleterious alterations of biochemical parameters in a dose-dependent manner, including histopathological alterations in the pancreas. SCE 500 exhibited a significant glucose-lowering effect and decreased HOMA-IR, % HbA1c, lens AR activity, and hepatic and renal oxidative stress and function markers compared to the DC group. A considerable amount of liver and muscle glycogen was replenished by SCE treatment in diabetic animals. Although metformin showed a better effect, the activity of SCE was very much comparable with this drug. The possible molecular mechanism behind the protective property of S. cochinchinensis against insulin resistance in peripheral tissue as well as dyslipidemia in an in vivo high fructose saturated fat diet model is described in Chapter 4. Initially, animals were fed a high fructose saturated fat (HFS) diet for a period of 8 weeks to develop insulin resistance and dyslipidemia. The normal diet control (ND), ND + SCE 500 mg/kg bwd, high fructose saturated fat diet control (HFS), HFS + metformin 100 mg/kg bwd, HFS + SCE 250 and HFS + SCE 500 were the experimental groups. SCE and metformin were administered daily for the next 3 weeks and the animals were sacrificed at the end of the 11th week. At the end of week 11, HFS rats showed significantly abnormal glucose and insulin tolerance, HOMA-IR, % HbA1c, adiponectin, lipid profile, liver glycolytic and gluconeogenic enzyme activities, and liver and muscle triglyceride accumulation compared to ND. HFS rats also exhibited increased levels of plasma inflammatory cytokines, upregulated mRNA levels of gluconeogenic and lipogenic genes in the liver, increased expression of GLUT-2 in liver and decreased expression of GLUT-4 in muscle and adipose tissue. SCE treatment preserved the architecture of pancreas, liver and kidney tissues, reversed the alterations of biochemical parameters, and improved insulin sensitivity by modifying gene expression in liver, muscle and adipose tissues. Overall, the results suggest that SC mediates its antidiabetic activity mainly via alpha-glucosidase inhibition and improved insulin sensitivity, together with antiglycation and antioxidant activities.
Abstract:
Genetic programming is known to provide good solutions for many problems, such as the evolution of network protocols and distributed algorithms. In such cases it is most likely a hardwired module of a design framework that assists the engineer in optimizing specific aspects of the system to be developed, providing its results in a fixed format through an internal interface. In this paper we show how the utility of genetic programming can be increased remarkably by isolating it as a component and integrating it into the model-driven software development process. Our genetic programming framework produces XMI-encoded UML models that can easily be loaded into widely available modeling tools, which in turn possess code generation as well as additional analysis and test capabilities. We use the evolution of a distributed election algorithm as an example to illustrate how genetic programming can be combined with model-driven development. This example clearly illustrates the advantages of our approach: the generation of source code in different programming languages.
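As a generic, mutation-only illustration of a genetic programming loop of the kind that could be isolated as a component (a toy symbolic-regression example, not the paper's UML/XMI framework):

```python
import random, operator

# Evolve arithmetic expression trees that approximate a target function.
# Trees are nested tuples: (op, left, right) or a terminal ("x" or a constant).
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}
TERMINALS = ["x", 1.0, 2.0]

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, (int, float)):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree, target=lambda x: x * x + 1):
    # Sum of squared errors over a few sample points (lower is better).
    return sum((evaluate(tree, x) - target(x)) ** 2 for x in range(-5, 6))

def mutate(tree, depth=2):
    # Replace a subtree with a fresh random one, or recurse into children.
    if random.random() < 0.3 or not isinstance(tree, tuple):
        return random_tree(depth)
    return (tree[0], mutate(tree[1]), mutate(tree[2]))

population = [random_tree() for _ in range(50)]
for generation in range(30):
    population.sort(key=fitness)                       # lower error = fitter
    parents = population[:10]                          # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

print("best expression:", population[0], "error:", fitness(population[0]))
```

In the paper's setting, the individuals would be behavioral models rather than expression trees, and the framework would serialize the best individual as an XMI-encoded UML model; that serialization step is not shown here.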
Abstract:
In spite of being the second largest immigrant group in the United Kingdom, Pakistanis are still one of the most disadvantaged immigrant groups with respect to labour market integration. Hence, examining their labour market integration is the first step towards improving it. This paper compares second-generation Pakistanis in the United Kingdom with their British peers and analyses whether the gap between the two ethnicities with respect to labour market integration has decreased or not. Both groups in the analysis were born in the United Kingdom and possess British nationality. The only difference is ethnicity: while Pakistanis have Pakistani ethnicity, British people have "white" ethnicity. The analysis covers people aged between 18 and 33 years and compares the periods December 1993-February 1995 and December 2004-February 2006. To carry out this analysis, I operationalise labour market integration as employment chance and utilise the United Kingdom Quarterly Labour Force Survey data. Empirical findings show that the gap between the labour market integration of second-generation Pakistanis and their British peers in the sample did not change significantly from 1994 to 2005.
Abstract:
The principal objective of this paper is to develop a methodology for the formulation of a master plan for renewable-energy-based electricity generation in The Gambia, Africa. Such a master plan aims to develop and promote renewable sources of energy as an alternative to conventional forms of energy for generating electricity in the country. A tailor-made methodology for the preparation of a 20-year renewable energy master plan focussed on electricity generation is proposed, and it is followed and verified throughout the present dissertation as it is applied to The Gambia. The main input data for the proposed master plan are (i) an energy demand analysis and forecast over 20 years and (ii) a resource assessment for different renewable energy alternatives, including their related power supply options. The energy demand forecast is based on a mix of Top-Down and Bottom-Up methodologies, and its results are important data for the future requirements of (primary) energy sources. The electricity forecast is separated into projections at sent-out level and at end-user level. On the supply side, solar, wind and biomass, as sources of energy, are investigated in terms of technical potential and economic benefits for The Gambia; other criteria, i.e. environmental and social, are not considered in the evaluation. Diverse supply options are proposed and technically designed based on the assessed renewable energy potential. This process includes the evaluation of the different available conversion technologies and finishes with the dimensioning of power supply solutions, taking into consideration technologies which are applicable and appropriate under the special conditions of The Gambia. The balance of these two inputs (demand and supply) gives a quantitative indication of the substitution potential of renewable energy generation alternatives in a primarily fossil-fuel-based electricity generation system, as well as of the fuel savings due to the deployment of renewable resources. Afterwards, the identified renewable energy supply options are ranked according to the outcomes of an economic analysis. Based on this ranking, and other considerations, a 20-year investment plan, broken down into five-year investment periods, is prepared and consists of individual renewable energy projects for electricity generation; these projects basically comprise on-grid renewable energy applications. Finally, a priority project from the master plan portfolio is selected for deeper analysis. Since solar PV is the most relevant proposed technology, a PV power plant integrated into the fossil-fuel-powered main electrical system in The Gambia is considered as the priority project. This project is analysed for economic competitiveness under current conditions, in addition to a sensitivity analysis with regard to future oil and new-technology market conditions.
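As a hypothetical sketch of the kind of economic ranking step mentioned above, here is a simple levelized-cost-of-electricity (LCOE) comparison; all option names and figures are invented and are not taken from the dissertation:

```python
# Rank candidate supply options by a simple LCOE (cost per kWh).

def lcoe(capex, annual_opex, annual_energy_kwh, lifetime_years, discount_rate):
    """Levelized cost of electricity in currency units per kWh."""
    crf = (discount_rate * (1 + discount_rate) ** lifetime_years) / \
          ((1 + discount_rate) ** lifetime_years - 1)      # capital recovery factor
    return (capex * crf + annual_opex) / annual_energy_kwh

options = {                      # (capex, annual opex, annual energy in kWh, lifetime in years)
    "solar_pv_plant": (1_800_000, 25_000, 1_600_000, 25),
    "wind_farm":      (2_500_000, 60_000, 2_100_000, 20),
    "biomass_plant":  (1_200_000, 90_000, 1_500_000, 20),
}

ranking = sorted(options, key=lambda name: lcoe(*options[name], discount_rate=0.08))
for name in ranking:
    print(f"{name}: {lcoe(*options[name], discount_rate=0.08):.3f} per kWh")
```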
Abstract:
The core of this thesis is research into methods, techniques and tools for debugging in model-based software development processes. First, a novel model-based software development process that I co-developed, the so-called Fujaba Process, is presented. This process is driven by use case scenarios that are formalized by special collaboration diagrams. The further artifacts of the process, up to the finished application, are also modeled with UML diagram types; no programming at the source-code level is necessary. Tool support for the presented process is provided by the Fujaba CASE tool. Large parts of the tool support for the Fujaba Process, including the support for testing and debugging, were developed as part of this thesis. The first part of the thesis explains the Fujaba Process in detail and presents our experience with using the process in industrial projects as well as in teaching. The second part describes the test generation developed in this thesis, which has become an important part of the Fujaba Process: executable test cases are generated from the formalized use case scenarios. The underlying concept, the concrete technical implementation and the practical experience with the developed test generation are presented. The last part deals with debugging in the Fujaba Process. Various concepts and techniques developed in this thesis are presented that simplify the search for errors during application development. Care was taken that debugging, like all other steps in the Fujaba Process, takes place exclusively at the model level. Among other things, techniques for the stepwise execution of models, an object browser and a debugger that allows the backward execution of programs (back-in-time debugging) are presented. All of the described concepts were implemented and evaluated in this thesis as plugins for the Eclipse version of Fujaba, Fujaba4Eclipse. In implementing the plugins, attention was paid to tight integration with Fujaba on the one hand and with Eclipse on the other. In summary, a development process is presented, together with the ability to identify errors in it through automatic tests and then to localize these errors in the program using special debugging techniques and finally to fix them, with the complete process taking place at the model level. For the testing and debugging techniques, plugins for Fujaba4Eclipse were developed in this thesis that support the developer as well as possible in the corresponding activities.
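As a purely hypothetical illustration (not Fujaba's actual mechanism, which works on UML collaboration diagrams and generated Java code) of how an executable test might be derived from a formalized scenario given as a sequence of steps with expected results:

```python
# Turn a scenario, given as a list of (action, expected_state) steps,
# into an executable test function. `system_factory` builds the system
# under test; both the scenario format and the system interface are assumptions.

def make_test(scenario, system_factory):
    def test():
        system = system_factory()
        for action, expected in scenario:
            getattr(system, action)()                       # execute the scenario step
            assert system.state() == expected, f"after {action}: got {system.state()}"
    return test
```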