960 results for Improved sequential algebraic algorithm
Abstract:
The soybean is a protein source of high biological value. However, the presence of anti-nutritional factors affects its protein quality and limits the bioavailability of other nutrients. The effect of heat treatment (150 ºC for 30 minutes) of hulled and hull-less soybean flours from the cultivar UFVTN 105AP on urease activity, trypsin inhibitor activity, protein solubility, amino acid profile, and in vivo protein quality was investigated. The treatment reduced trypsin inhibitor and urease activities but did not affect protein solubility. Protein Efficiency Ratio (PER) values of the flours were similar, and the PER of the hull-less soybean flour did not differ from that of casein. The Net Protein Ratio (NPR) did not differ between the experimental groups. The True Digestibility (TD) of the flours did not differ from each other, but both were lower than that of casein, and the Protein Digestibility Corrected Amino Acid Score (PDCAAS) was lower than the TD because valine was the limiting amino acid according to the chemical score. The flours therefore showed reduced anti-nutritional factors and similar protein quality, so the whole flours can be used as a source of high-quality protein.
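For context, a PDCAAS of this kind is conventionally computed as the chemical score of the limiting amino acid (valine here) multiplied by the true digestibility; the minimal Python sketch below uses placeholder numbers, not the values measured in this study.

    # Minimal sketch of the conventional PDCAAS calculation: the amino acid (chemical)
    # score of the limiting amino acid multiplied by true digestibility.
    # All numbers below are placeholders, not the values measured in this study.
    valine_in_test_protein = 38.0   # mg valine per g of test protein (placeholder)
    valine_in_reference = 43.0      # mg valine per g of reference protein (placeholder)
    true_digestibility = 0.90       # fractional true digestibility (placeholder)

    chemical_score = valine_in_test_protein / valine_in_reference
    pdcaas = chemical_score * true_digestibility

    print(f"chemical score = {chemical_score:.2f}, PDCAAS = {pdcaas:.2f}")
    # Because the limiting amino acid's score is below 1, the PDCAAS comes out lower
    # than the true digestibility alone, as reported for these flours.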
Abstract:
Significant initiatives exist within the global food market to search for new, alternative protein sources with better technological, functional, and nutritional properties. Lima bean (Phaseolus lunatus L.) protein isolate was hydrolyzed using a sequential pepsin-pancreatin enzymatic system. Hydrolysis was carried out to produce a limited hydrolysate (LH) and an extensive hydrolysate (EH) with different degrees of hydrolysis (DH). The effects of hydrolysis on both hydrolysates were evaluated in vitro in terms of structural, functional, and bioactive properties. The electrophoretic profile indicated that LH retained residual structures very similar to those of the protein isolate (PI), although it was composed of mixtures of polypeptides that increased the hydrophobic surface and denaturation temperature. The functionality of LH was associated with its amino acid composition and hydrophobic/hydrophilic balance, which increased solubility at pH values close to the isoelectric point. Its foaming and emulsifying activity index values were also higher than those of PI. EH showed a structure composed of mixtures of low-molecular-weight polypeptides and peptides whose intrinsic hydrophobicity and amino acid profiles were associated with antioxidant capacity as well as with inhibition of angiotensin-converting enzyme. The results indicate the potential of Phaseolus lunatus hydrolysates to be incorporated into foods to improve techno-functional properties and impart bioactive properties.
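For reference, the degree of hydrolysis (DH) mentioned above is usually defined as the percentage of peptide bonds cleaved out of the total number of peptide bonds; the short Python sketch below uses placeholder bond counts rather than the DH values reported for these hydrolysates.

    # Minimal sketch of the usual degree-of-hydrolysis (DH) definition: the percentage
    # of peptide bonds cleaved relative to the total peptide bonds in the protein.
    # The bond counts below are placeholders, not values from this study.
    def degree_of_hydrolysis(cleaved_bonds, total_bonds):
        return 100.0 * cleaved_bonds / total_bonds

    print(degree_of_hydrolysis(cleaved_bonds=12, total_bonds=160))   # a limited hydrolysate
    print(degree_of_hydrolysis(cleaved_bonds=48, total_bonds=160))   # a more extensive hydrolysate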
Abstract:
This work presents a synopsis of efficient power-management strategies for achieving the most economical power and energy consumption in multicore systems, FPGAs, and NoC platforms. A practical approach was taken in an effort to validate the significance of the Adaptive Power Management Algorithm (APMA) proposed for the system developed for this thesis project. The system comprises an arithmetic and logic unit, up and down counters, an adder, a state machine, and a multiplexer. The project was carried out, first, to develop a system to be used for this power-management work; second, to perform an area and power synopsis of the system on several scalable technology platforms (UMC 90 nm at 1.2 V, UMC 90 nm at 1.32 V, and UMC 0.18 µm at 1.80 V) in order to examine the differences in the area and power consumption of the system across the platforms; and third, to explore strategies for reducing the system's power consumption and to propose an adaptive power management algorithm for that purpose. The strategies introduced in this work comprise Dynamic Voltage and Frequency Scaling (DVFS) and task parallelism. After development, the system was run on an FPGA board (essentially an NoC platform) and on the technology platforms listed above. Synthesis was accomplished successfully, the simulation results show that the system meets all functional requirements, and the power consumption and area utilization were recorded and analyzed in Chapter 7 of this work. The work also extensively reviews strategies for managing power consumption drawn from quantitative research by many researchers and companies; it is a mixture of literature analysis and experimental lab work, and it condenses and presents the basic concepts of power-management strategy from quality technical papers.
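To illustrate why DVFS is effective as a power-reduction strategy, the following minimal Python sketch uses the standard CMOS dynamic-power relation P ≈ α·C·V²·f with purely illustrative parameter values (this is not the thesis's APMA).

    # Minimal sketch of the standard CMOS dynamic-power model that motivates DVFS.
    # All parameter values are illustrative; this is not the thesis's APMA.
    def dynamic_power(alpha, c_eff, vdd, freq):
        """Approximate dynamic power: activity factor * effective capacitance * Vdd^2 * frequency."""
        return alpha * c_eff * vdd ** 2 * freq

    nominal = dynamic_power(alpha=0.15, c_eff=1e-9, vdd=1.2, freq=200e6)  # full voltage and clock
    scaled = dynamic_power(alpha=0.15, c_eff=1e-9, vdd=1.0, freq=150e6)   # scaled down under light load

    print(f"nominal: {nominal * 1e3:.1f} mW, scaled: {scaled * 1e3:.1f} mW")
    # Because power grows with V^2 * f, lowering voltage and frequency together
    # yields a more-than-linear reduction in dynamic power.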
Abstract:
Recombinant human adenovirus (Ad) vectors are being extensively explored for their use in gene therapy and recombinant vaccines. Ad vectors are attractive for many reasons, including the fact that (1) they are relatively safe, based on their use as live oral vaccines, (2) they can accept large transgene inserts, (3) they can infect dividing and postmitotic cells, and (4) they can be produced to high titers. However, there are also a number of major problems associated with Ad vectors, including transient foreign gene expression due to host cellular immune responses, problems with humoral immunity, and the creation of replication-competent adenoviruses (RCA). Most Ad vectors contain deletions in the E1 region that allow for insertion of a transgene. However, the E1 gene products are required for replication and thus must be supplied in trans by a helper cell line that will allow for the growth and packaging of the defective virus. For this purpose the 293 cell line (Graham et al., 1977) is used most often; however, homologous recombination between the vector and the cell line often results in the generation of RCA. The presence of RCA in batches of adenoviral vectors for clinical use is a safety risk because they may result in the mobilization and spread of the replication-defective vector viruses, and in significant tissue damage and pathogenicity. The present research focused on the alteration of the 293 cell line such that RCA formation can be eliminated. The strategy to modify the 293 cells involved the removal of the first 380 bp of the adenovirus genome through the process of homologous recombination. The first step towards this goal involved identifying and cloning the left-end cellular-viral junction from 293 cells to assemble sequences required for homologous recombination. Polymerase chain reaction (PCR) was performed to clone the junction, and the clone was verified through sequencing. The plasmid PAM2 was then constructed, which served as the targeting cassette used to modify the 293 cells. The cassette consisted of (1) the cellular-viral junction as the left-end region of homology, (2) the neo gene to use for positive selection upon transfection into 293 cells, (3) the adenoviral genome from bp 380 to bp 3438 as the right-end region of homology, and (4) the HSV-tk gene to use for negative selection. The plasmid PAM2 was linearized to produce a double-strand break outside the region of homology, and transfected into 293 cells using the calcium-phosphate technique. Cells were first selected for their resistance to the drug G418, and subsequently for their resistance to the drug ganciclovir (GANC). From 17 transfections, 100 pools of G418- and GANC-resistant cells were picked using cloning rings and expanded for screening. Genomic DNA was isolated from the pools and screened for the presence of the first 380 bp using PCR. Ten of the most promising pools were diluted to single cells and expanded in order to isolate homogeneous cell lines. From these, an additional 100 G418- and GANC-resistant foci were screened. These preliminary screening results appear promising for the detection of the desired cell line. Future work would include further cloning and purification of the promising cell lines that have potentially undergone homologous recombination, in order to isolate a homogeneous cell line of interest.
Abstract:
Narrative therapy is a postmodern therapy that takes the position that people create self-narratives to make sense of their experiences. To date, narrative therapy has compiled virtually no quantitative and very little qualitative research, leaving gaps in almost all areas of process and outcome. White (2006a), one of the therapy's founders, has recently utilized Vygotsky's (1934/1987) theories of the zone of proximal development (ZPD) and concept formation to describe the process of change in narrative therapy with children. In collaboration with the child client, the narrative therapist formalizes therapeutic concepts and submits them to increasing levels of generalization to create a ZPD. This study sought to determine whether the child's development proceeds through the stages of concept formation over the course of a session, and whether therapists' utterances scaffold this movement. A sequential analysis was used due to its unique ability to measure dynamic processes in social interactions. Stages of concept formation and scaffolding were coded over time. A hierarchical log-linear analysis was performed on the sequential data to develop a model of therapist scaffolding and child concept development. This was intended to determine what patterns occur and whether the stated intent of narrative therapy matches its actual process. In accordance with narrative therapy theory, the log-linear analysis produced a final model with interactions between therapist and child utterances, and between both therapist and child utterances and time. Specifically, the child and youth participants in therapy tended to respond to therapist scaffolding at the corresponding level of concept formation. Both the child and youth participants and the therapists also tended to move away from earlier and toward later stages of White's scaffolding conversations map as the therapy session advanced. These findings provide support for White's contention that narrative therapists promote child development by scaffolding child concept formation in therapy.
Abstract:
Flow injection analysis (FIA) was applied to the determination of both chloride ion and mercury in water. Conventional FIA was employed for the chloride study. Investigations of the Fe3+/Hg(SCN)2/Cl-, 450 nm spectrophotometric system for chloride determination led to the discovery of an absorbance in the 250-260 nm region when Hg(SCN)2 and Cl- are combined in solution, in the absence of iron(III). Employing an in-house FIA system, the absorbance observed at 254 nm exhibited a linear relation from essentially 0-2000 µg ml-1 injected chloride. This linear range, spanning three orders of magnitude, is superior to that of the Fe3+/Hg(SCN)2/Cl- system currently employed by laboratories worldwide. The detection limit obtainable with the proposed method was determined to be 0.16 µg ml-1, and the relative standard deviation was determined to be 3.5% over the concentration range of 0-200 µg ml-1. Other halide ions were found to interfere with chloride determination at 254 nm, whereas cations did not interfere. This system was successfully applied to the determination of chloride ion in laboratory water. Sequential injection (SI)-FIA was employed for mercury determination in water with the PSA Galahad mercury amalgamation and Merlin mercury fluorescence detection systems. Initial mercury-in-air determinations involved injections of mercury-saturated air directly into the Galahad, whereas mercury-in-water determinations involved solution delivery via peristaltic pump to a gas/liquid separator, after reduction by stannous chloride. A series of changes were made to the internal hardware and valving systems of the Galahad mercury preconcentrator. Sequential injection solution delivery replaced the continuous peristaltic pump system, and computer control was implemented to control and integrate all aspects of solution delivery, sample preconcentration, and signal processing. Detection limits currently obtainable with this system are 0.1 ng ml-1 Hg0.
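As a general illustration of how a linear calibration and a detection limit of this kind are typically derived (assuming the common 3σ-of-the-blank convention; the absorbance data and blank noise below are made up, not taken from the thesis), a minimal Python sketch:

    # Minimal sketch of a linear calibration and a 3-sigma detection-limit estimate.
    # The concentrations, absorbances and blank noise are made up for illustration;
    # the thesis's actual data and LOD convention may differ.
    import numpy as np

    conc = np.array([0.0, 50.0, 100.0, 500.0, 1000.0, 2000.0])          # µg/ml chloride (hypothetical)
    absorbance = np.array([0.002, 0.021, 0.040, 0.198, 0.395, 0.790])   # at 254 nm (hypothetical)

    slope, intercept = np.polyfit(conc, absorbance, 1)

    blank_sd = 0.0002                 # standard deviation of repeated blank injections (hypothetical)
    lod = 3 * blank_sd / slope        # detection limit expressed in concentration units

    print(f"slope = {slope:.2e} AU per µg/ml, LOD ≈ {lod:.2f} µg/ml")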
Abstract:
This thesis introduces the Salmon Algorithm, a search meta-heuristic which can be used for a variety of combinatorial optimization problems. The algorithm is loosely based on the path-finding behaviour of salmon swimming upstream to spawn. There are a number of tunable parameters in the algorithm, so experiments were conducted to find the optimum parameter settings for different search spaces. The algorithm was tested on one instance of the Traveling Salesman Problem and found to have superior performance to an Ant Colony Algorithm and a Genetic Algorithm. It was then tested on three coding theory problems: optimal edit codes, optimal Hamming distance codes, and optimal covering codes. The algorithm produced improvements on the best known values for five of the six edit-code test cases. It matched the best known results on four of the seven Hamming codes, as well as on all three covering codes. The results suggest that the Salmon Algorithm is competitive with established guided random search techniques and may be superior in some search spaces.
Abstract:
Understanding the machinery of gene regulation to control gene expression has been one of the main focuses of bioinformaticians for years. We use a multi-objective genetic algorithm to evolve a specialized version of side effect machines for degenerate motif discovery. We compare several suggested objectives for the motifs they find, test different multi-objective scoring schemes and probabilistic models for the background sequences, and report our results on a synthetic dataset and some biological benchmarking suites. We conclude with a comparison of our algorithm with some widely used motif discovery algorithms in the literature and suggest future directions for research in this area.
Abstract:
Complex networks have recently attracted a significant amount of research attention due to their ability to model real-world phenomena. One important problem often encountered is to limit diffusive processes spreading over the network, for example mitigating pandemic disease or computer virus spread. A number of problem formulations have been proposed that aim to solve such problems based on desired network characteristics, such as maintaining the largest network component after node removal. The recently formulated critical node detection problem aims to remove a small subset of vertices from the network such that the residual network has minimum pairwise connectivity. Unfortunately, the problem is NP-hard, and the number of constraints is cubic in the number of vertices, making very large scale problems impossible to solve with traditional mathematical programming techniques. Even approximation strategies such as dynamic programming and evolutionary algorithms are unusable for networks that contain thousands to millions of vertices. A computationally efficient and simple approach is required in such circumstances, but none currently exists. In this thesis, such an algorithm is proposed. The methodology is based on a depth-first search traversal of the network and a specially designed ranking function that considers information local to each vertex. Due to the variety of network structures, a number of characteristics must be taken into consideration and combined into a single rank that measures the utility of removing each vertex. Since removing vertices sequentially alters the network structure, an efficient post-processing algorithm is also proposed to quickly re-rank vertices. Experiments on a range of common complex network models with varying numbers of vertices are considered, in addition to real-world networks. The proposed algorithm, DFSH, is shown to be highly competitive and often outperforms existing strategies such as Google PageRank for minimizing pairwise connectivity.
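As an illustration of the objective being minimized (the pairwise connectivity of the residual network), the following minimal Python sketch computes it with a depth-first traversal on a small made-up graph; it is not the thesis's DFSH ranking heuristic itself.

    # Minimal sketch: pairwise connectivity of the residual graph after removing a vertex set,
    # i.e. the number of still-connected vertex pairs, summed over components as |C|*(|C|-1)/2.
    # The example graph is made up; this is not the DFSH ranking heuristic itself.
    def pairwise_connectivity(adj, removed):
        """adj: dict vertex -> iterable of neighbours; removed: set of deleted vertices."""
        seen = set(removed)
        total = 0
        for start in adj:
            if start in seen:
                continue
            # Iterative depth-first search over the component containing `start`.
            stack, size = [start], 0
            seen.add(start)
            while stack:
                v = stack.pop()
                size += 1
                for w in adj[v]:
                    if w not in seen:
                        seen.add(w)
                        stack.append(w)
            total += size * (size - 1) // 2
        return total

    graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
    print(pairwise_connectivity(graph, removed={3}))  # removing vertex 3 leaves 4 connected pairs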
Abstract:
DNA assembly is among the most fundamental and difficult problems in bioinformatics. Near-optimal assembly solutions are available for bacterial and other small genomes; however, assembling large and complex genomes, especially the human genome, using Next-Generation Sequencing (NGS) technologies has proven very difficult because of the highly repetitive and complex nature of the human genome, short read lengths, uneven data coverage, and tools that are not specifically built for human genomes. Moreover, many algorithms are not even scalable to human genome datasets containing hundreds of millions of short reads. The DNA assembly problem is usually divided into several subproblems, including DNA data error detection and correction, contig creation, scaffolding, and contig orientation, each of which can be seen as a distinct research area. This thesis focuses specifically on creating contigs from the short reads and combining them with outputs from other tools in order to obtain better results. Three different assemblers, SOAPdenovo [Li09], Velvet [ZB08] and Meraculous [CHS+11], were selected for comparative purposes in this thesis. The results obtained show that this thesis's work produces results comparable to the other assemblers, and that combining our contigs with the outputs from other tools produces the best results, outperforming all the other assemblers investigated.
Abstract:
The KCube interconnection topology was first introduced in 2010. The KCube graph is a compound graph of a Kautz digraph and hypercubes. Compared with the attractive Kautz digraph and the well-known hypercube graph, the KCube graph can accommodate as many nodes as possible for a given indegree (and outdegree) and diameter of the interconnection network. However, there are few algorithms designed for the KCube graph. In this thesis, we will concentrate on finding graph-theoretical properties of the KCube graph and designing parallel algorithms that run on this network. We will explore several topological properties, such as bipartiteness, Hamiltonicity, and symmetry. These properties of the KCube graph are very useful for developing efficient algorithms on this network. We will then study the KCube network from the algorithmic point of view and give an improved routing algorithm. In addition, we will present two optimal broadcasting algorithms; they are fundamental to many applications. A literature review of state-of-the-art network designs in relation to the KCube network, as well as some open problems in this field, will also be given.
Abstract:
Volume (density)-independent pair potentials cannot describe metallic cohesion adequately, as the presence of the free electron gas renders the total energy strongly dependent on the electron density. The embedded atom method (EAM) addresses this issue by replacing part of the total energy with an explicitly density-dependent term called the embedding function. Finnis and Sinclair proposed a model where the embedding function is taken to be proportional to the square root of the electron density. Models of this type are known as Finnis-Sinclair many-body potentials. In this work we study a particular parametrization of the Finnis-Sinclair type potential, called the "Sutton-Chen" model, and a later version, called the "Quantum Sutton-Chen" model, to study the phonon spectra and the temperature variation of thermodynamic properties of fcc metals. Both models give poor results for thermal expansion, which can be traced to the rapid softening of transverse phonon frequencies with increasing lattice parameter. We identify the power-law decay of the electron density with distance assumed by the model as the main cause of this behaviour and show that an exponentially decaying form of the charge density improves the results significantly. Results for the Sutton-Chen model and our improved version of it are compared for four fcc metals: Cu, Ag, Au and Pt. The calculated properties are the phonon spectra, thermal expansion coefficient, isobaric heat capacity, adiabatic and isothermal bulk moduli, atomic root-mean-square displacement and Grüneisen parameter. For the sake of comparison we have also considered two other models where the distance dependence of the charge density is an exponential multiplied by polynomials. None of these models exhibits the instability against thermal expansion (premature melting) shown by the Sutton-Chen model. We also present results obtained via pure pair-potential models, in order to identify advantages and disadvantages of the methods used to obtain the parameters of these potentials.
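For reference, the Sutton-Chen form of the Finnis-Sinclair energy discussed above is E = ε Σ_i [ ½ Σ_{j≠i} (a/r_ij)^n − c √ρ_i ] with ρ_i = Σ_{j≠i} (a/r_ij)^m; the minimal Python sketch below evaluates it for a tiny made-up cluster with placeholder parameters, not the fitted values used in this work.

    # Minimal sketch of the Sutton-Chen (Finnis-Sinclair type) total energy:
    #   E = eps * sum_i [ 0.5 * sum_{j!=i} (a/r_ij)^n - c * sqrt(rho_i) ],
    #   rho_i = sum_{j!=i} (a/r_ij)^m.
    # The positions and parameters are placeholders, not the fitted values used in this work.
    import math

    def sutton_chen_energy(positions, eps, a, c, n, m):
        energy = 0.0
        for i, ri in enumerate(positions):
            repulsion, rho = 0.0, 0.0
            for j, rj in enumerate(positions):
                if i == j:
                    continue
                r = math.dist(ri, rj)
                repulsion += (a / r) ** n    # pairwise repulsive term
                rho += (a / r) ** m          # pairwise "electron density" contribution
            energy += eps * (0.5 * repulsion - c * math.sqrt(rho))
        return energy

    # Four atoms of a small hypothetical fcc-like cluster (same length unit as `a`).
    atoms = [(0.0, 0.0, 0.0), (0.0, 1.8, 1.8), (1.8, 0.0, 1.8), (1.8, 1.8, 0.0)]
    print(sutton_chen_energy(atoms, eps=0.01, a=3.6, c=40.0, n=10, m=5))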
Abstract:
Ordered gene problems are a very common class of optimization problems. Because of their popularity, countless algorithms have been developed in an attempt to find high-quality solutions to them. It is also common to see many different types of problems reduced to ordered gene style problems, since many popular heuristics and metaheuristics exist for them. Multiple ordered gene problems are studied, namely the travelling salesman problem, the bin packing problem, and the graph colouring problem. In addition, two bioinformatics problems not traditionally seen as ordered gene problems are studied: DNA error correction and DNA fragment assembly. These problems are studied with multiple variations and combinations of heuristics and metaheuristics with two distinct types of representations. The majority of the algorithms are built around the Recentering-Restarting Genetic Algorithm. The algorithm variations were successful on all problems studied, particularly on the two bioinformatics problems. For DNA error correction, multiple cases were found in which 100% of the codes were corrected. The algorithm variations were also able to beat all other state-of-the-art DNA fragment assemblers on 13 out of 16 benchmark problem instances.
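To illustrate what an "ordered gene" (permutation) representation looks like, the minimal Python sketch below applies the standard order crossover (OX) operator to two tours; it shows the representation only and is not the Recentering-Restarting Genetic Algorithm itself.

    # Minimal sketch of an ordered-gene (permutation) chromosome with the standard
    # order crossover (OX) operator often used for problems such as the TSP.
    # This shows the representation only; it is not the Recentering-Restarting GA.
    import random

    def order_crossover(parent1, parent2, rng):
        """Copy a random slice from parent1, then fill the remaining positions in parent2's order."""
        size = len(parent1)
        lo, hi = sorted(rng.sample(range(size), 2))
        child = [None] * size
        child[lo:hi + 1] = parent1[lo:hi + 1]
        kept = set(child[lo:hi + 1])
        remaining = [g for g in parent2 if g not in kept]
        empty = [i for i in range(size) if child[i] is None]
        for pos, gene in zip(empty, remaining):
            child[pos] = gene
        return child

    rng = random.Random(42)
    tour_a = [0, 1, 2, 3, 4, 5, 6, 7]   # a tour over eight cities (hypothetical)
    tour_b = [7, 6, 5, 4, 3, 2, 1, 0]
    print(order_crossover(tour_a, tour_b, rng))  # a valid permutation combining both parents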
Abstract:
Exposure to isoflavones (ISO), abundant in soy protein infant formula, for the first 5 days of life results in higher bone mineral density (BMD), greater trabecular connectivity and higher fracture load of lumbar vertebrae (LV) at adulthood. The effect of lengthening the duration of exposure to ISO on bone development has not been studied. This study determined if providing ISO for the first 21 days of life, which more closely mimics the duration that infants are fed soy protein formula, results in higher BMD, improved bone structure and greater strength in femurs and LV than a 5-day protocol. Female CD-1 mice were randomized to subcutaneous injections of ISO (7 mg/kg body weight/day) or corn oil from postnatal day 1 to 21. BMD, structure and strength were measured at the femur and LV at 4 months of age, representing young adulthood. At the LV, exposure to ISO resulted in higher (P<0.05) BMD, trabecular connectivity and fracture load compared with control (CON). Exposure to ISO also resulted in higher (P<0.05) whole femur BMD, higher (P<0.05) bone volume/total volume and lower (P<0.05) trabecular separation at the femur neck, as well as greater (P<0.05) fracture load at the femur midpoint and femur neck compared with the CON group. Exposure to ISO throughout suckling has favorable effects on LV outcomes, and, unlike previous studies using 5-day exposure to ISO, femur outcomes are also improved. Duration of exposure should be considered when using the CD-1 mouse to model the effect of early life exposure of infants to ISO.
Abstract:
Understanding the relationship between genetic diseases and the genes associated with them is an important problem regarding human health. The vast amount of data created from a large number of high-throughput experiments performed in the last few years has resulted in an unprecedented growth in computational methods to tackle the disease gene association problem. Nowadays, it is clear that a genetic disease is not a consequence of a defect in a single gene. Instead, the disease phenotype is a reflection of various genetic components interacting in a complex network. In fact, genetic diseases, like any other phenotype, occur as a result of various genes working in sync with each other in one or several biological modules. Using a genetic algorithm, our method tries to evolve communities containing the set of potential disease genes likely to be involved in a given genetic disease. Starting from a set of known disease genes, we first obtain a protein-protein interaction (PPI) network containing all the known disease genes. All the other genes inside the procured PPI network are then considered candidate disease genes, as they lie in the vicinity of the known disease genes in the network. Our method attempts to find communities of potential disease genes working strongly with one another and with the set of known disease genes. As a proof of concept, we tested our approach on 16 breast cancer genes and 15 Parkinson's disease genes. We obtained results comparable to or better than those of CIPHER, ENDEAVOUR and GPEC, three of the most reliable and frequently used disease-gene ranking frameworks.
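As an illustration of the candidate-selection step described above (genes adjacent to known disease genes in a PPI network), a minimal Python sketch on a toy interaction list follows; it is not the genetic-algorithm community search itself, and the edges shown are only examples.

    # Minimal sketch of the candidate-selection step: genes that interact with known
    # disease genes in a PPI network are treated as candidate disease genes.
    # The interaction list is a toy example; this is not the GA community search itself.
    ppi_edges = [
        ("BRCA1", "BARD1"), ("BRCA1", "TP53"), ("TP53", "MDM2"),
        ("BARD1", "GENE_X"), ("MDM2", "GENE_Y"), ("GENE_X", "GENE_Y"),
    ]
    known_disease_genes = {"BRCA1", "TP53"}

    # Build an adjacency map from the undirected interaction list.
    adjacency = {}
    for a, b in ppi_edges:
        adjacency.setdefault(a, set()).add(b)
        adjacency.setdefault(b, set()).add(a)

    # Candidates: interaction partners of known disease genes that are not already known.
    candidates = set()
    for gene in known_disease_genes:
        candidates |= adjacency.get(gene, set())
    candidates -= known_disease_genes

    print(sorted(candidates))  # ['BARD1', 'MDM2'] on this toy network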