6 results for scalability
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
We present a family of networks whose local interconnection topologies are generated by the root vectors of a semi-simple complex Lie algebra. The Cartan classification theorem for these algebras ensures that this family of interconnection topologies is exhaustive. The global arrangement of the network is defined in terms of integer or half-integer weight lattices. The mesh and torus topologies that network millions of processing cores, such as those in the IBM BlueGene series, are the simplest members of this category. The symmetries of the root system of an algebra, manifested by its Weyl group, lend great convenience to the design and analysis of hardware architectures, algorithms and programs.
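As an illustration of how root vectors can act as neighbor offsets, the minimal sketch below (our own illustration, not the paper's implementation) builds the simplest member of the family: the ordinary n-dimensional torus obtained from n copies of A1, whose roots are ±e_i in lattice coordinates. The algebra choice, lattice size, and function names are assumptions for the example; a richer algebra would simply supply a larger set of offsets.

    # Sketch: a torus interconnection graph whose neighbor offsets are the
    # root vectors of A1 x ... x A1 (the simplest case of the family).
    import itertools

    def a1n_roots(n):
        """Root vectors of n copies of A1: +-e_i in lattice coordinates."""
        roots = []
        for i in range(n):
            e = [0] * n
            e[i] = 1
            roots.append(tuple(e))
            roots.append(tuple(-x for x in e))
        return roots

    def torus_from_roots(side, roots):
        """Nodes are lattice points modulo `side`; each node links to
        node + root (with wraparound), one link per root vector."""
        dim = len(roots[0])
        nodes = list(itertools.product(range(side), repeat=dim))
        adjacency = {v: set() for v in nodes}
        for v in nodes:
            for r in roots:
                adjacency[v].add(tuple((vi + ri) % side for vi, ri in zip(v, r)))
        return adjacency

    adj = torus_from_roots(side=4, roots=a1n_roots(3))  # a 4 x 4 x 4 torus
    print(len(adj), "nodes, degree", len(next(iter(adj.values()))))  # 64 nodes, degree 6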
Abstract:
The dependence of the floating-body-RAM sense margin and retention time on the gate length is investigated in UTBOX devices using BJT programming combined with a positive back bias (so-called Vth feedback). It is shown that the sense margin and the retention time can be kept constant versus the gate length by using a positive back bias. Nevertheless, below a critical gate length L, there is no room for optimization and the memory performance suddenly drops. The mechanism behind this degradation is attributed to GIDL current amplification by the lateral bipolar transistor with a narrow base. The gate length can be further scaled using underlap junctions.
Abstract:
Background: Recent advances in medical and biological technology have stimulated the development of new testing systems that provide huge and varied amounts of molecular and clinical data. Growing data volumes pose significant challenges for information processing systems in research centers. Additionally, the routines of a genomics laboratory are typically characterized by high parallelism in testing and constant procedure changes.
Results: This paper describes a formal approach to address this challenge through the implementation of a genetic testing management system applied to a human genome laboratory. We introduce the Human Genome Research Center Information System (CEGH) in Brazil, a system that is able to support constant changes in human genome testing and can provide patients with updated results based on the most recent and validated genetic knowledge. Our approach uses a common repository for process planning to ensure reusability, specification, instantiation, monitoring, and execution of processes, which are defined using a relational database and rigorous control flow specifications based on process algebra (ACP). The main difference between our approach and related work is that we combine two important aspects: 1) process scalability, achieved through the relational database implementation, and 2) correctness of processes, ensured by process algebra. Furthermore, the software allows end users to define genetic tests without requiring any knowledge of business process notation or process algebra.
Conclusions: This paper presents the CEGH information system, a Laboratory Information Management System (LIMS) based on a formal framework to support genetic testing management for Mendelian disorder studies. We have demonstrated the feasibility and shown the usability benefits of a rigorous approach that is able to specify, validate, and perform genetic testing through easy end-user interfaces.
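To make the role of the process algebra concrete, here is a minimal sketch, under our own assumptions rather than the CEGH implementation, of how a genetic-testing workflow could be written with ACP-style sequential and alternative composition and replayed by a tiny interpreter; the step names and the example workflow are hypothetical.

    # Sketch: ACP-style process terms (sequence and alternative) for a
    # hypothetical genetic-testing workflow, plus a minimal interpreter.
    from dataclasses import dataclass

    @dataclass
    class Step:            # atomic action: one laboratory procedure
        name: str

    @dataclass
    class Seq:             # sequential composition: p ; q
        left: object
        right: object

    @dataclass
    class Alt:             # alternative composition: p + q (pick one branch)
        left: object
        right: object

    def run(process, choose=lambda a, b: a):
        """Execute a process term, returning the ordered list of performed steps."""
        if isinstance(process, Step):
            return [process.name]
        if isinstance(process, Seq):
            return run(process.left, choose) + run(process.right, choose)
        if isinstance(process, Alt):
            return run(choose(process.left, process.right), choose)
        raise TypeError(f"unknown process term: {process!r}")

    # Hypothetical Mendelian-disorder test: extract DNA, then either sequence
    # or run an array, then report.
    workflow = Seq(Step("extract_dna"),
                   Seq(Alt(Step("sanger_sequencing"), Step("snp_array")),
                       Step("report_to_patient")))
    print(run(workflow))   # ['extract_dna', 'sanger_sequencing', 'report_to_patient']

In a relational implementation such as the one described, terms like these would be stored as rows in a process-definition repository and instantiated per patient sample.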
Abstract:
Background: Hemophilia A is a bleeding disorder caused by a deficiency in coagulation factor VIII. Recombinant factor VIII (rFVIII) is an alternative to plasma-derived FVIII for the treatment of hemophilia A. However, commercial manufacturing of rFVIII products is inefficient and costly and is associated with high prices and product shortages, even in economically privileged countries. This situation may be solved by adopting more efficient production methods. Here, we evaluated the potential of transient transfection for producing rFVIII in serum-free suspension HEK 293 cell cultures and investigated the effects of different DNA concentrations (0.4, 0.6 and 0.8 μg/10^6 cells) and of repeated transfections performed at 34 °C and 37 °C.
Results: We observed a decrease in cell growth when high DNA concentrations were used, but no significant differences in transfection efficiency or in the biological activity of the rFVIII were noticed. The best condition for rFVIII production was obtained with repeated transfections at 34 °C using 0.4 μg DNA/10^6 cells, through which almost 50 IU of active rFVIII was produced six days post-transfection.
Conclusion: Serum-free suspension transient transfection is thus a viable option for high-yield rFVIII production. Work is in progress to further optimize the process and validate its scalability.
Abstract:
In this work we present a methodology that applies the many-body expansion to decrease the computational cost of ab initio molecular dynamics while keeping acceptable accuracy in the results. We implemented this methodology in a program we call ManBo. In the many-body expansion approach, the total energy E of the system is partitioned into contributions of one body, two bodies, three bodies, and so on, up to the contribution of the Nth body [1-3]: E = E1 + E2 + E3 + … + EN. The E1 term is the sum of the internal energies of the molecules; E2 is the energy due to the interaction between all pairs of molecules; E3 is the energy due to the interaction between all trios of molecules; and so on. In ManBo we chose to truncate the expansion at the two- or three-body contribution, both for the calculation of the energy and for the calculation of the atomic forces. To partially recover the many-body interactions neglected when the expansion is truncated, an electrostatic embedding can be included in the electronic structure calculations, instead of treating the monomers, pairs and trios as isolated molecules in space. In our simulations we chose water molecules and used Gaussian 09 as the external program to calculate the atomic forces and energy of the system, as well as the reference program for assessing the accuracy of the results obtained with ManBo. The results show that the many-body expansion appears to be an interesting approach for reducing the still prohibitive computational cost of ab initio molecular dynamics. The errors introduced in the atomic forces by this methodology are very small. The inclusion of an electrostatic embedding appears to be a good way to improve the results with only a small increase in simulation time. As the level of calculation is increased, the simulation time of ManBo decreases greatly relative to a conventional BOMD simulation in Gaussian, owing to the better scalability of the presented methodology.
References: [1] E. E. Dahlke and D. G. Truhlar, J. Chem. Theory Comput. 3, 46 (2007). [2] E. E. Dahlke and D. G. Truhlar, J. Chem. Theory Comput. 4, 1 (2008). [3] R. Rivelino, P. Chaudhuri and S. Canuto, J. Chem. Phys. 118, 10593 (2003).
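For concreteness, the following minimal sketch (not the ManBo code) shows the two-body truncation of the expansion, E ≈ Σi E(i) + Σi<j [E(ij) − E(i) − E(j)]; the energy function here is a placeholder standing in for an external electronic-structure call such as one to Gaussian 09, and the fragment indexing is an assumption for illustration.

    # Sketch: total energy from the many-body expansion truncated at two bodies.
    from itertools import combinations

    def energy(fragment):
        """Placeholder for an ab initio single-point energy of a fragment
        (a tuple of monomer indices). A real driver would build the geometry
        and call the external program here."""
        return 0.0  # dummy value for illustration only

    def mbe2_energy(monomers):
        """E ~ sum_i E(i) + sum_{i<j} [E(ij) - E(i) - E(j)]."""
        e1 = {(i,): energy((i,)) for i in monomers}
        total = sum(e1.values())                    # one-body term E1
        for i, j in combinations(monomers, 2):      # two-body corrections E2
            total += energy((i, j)) - e1[(i,)] - e1[(j,)]
        return total

    print(mbe2_energy(monomers=range(8)))           # e.g. a cluster of 8 waters

Because each monomer, pair (and, if included, trio) calculation is independent, the fragment energies can be evaluated in parallel, which is the source of the scalability advantage over a conventional full-system BOMD step.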
Abstract:
Due to the growing interest in social networks, link prediction has received significant attention. Link prediction is mostly based on graph-based features, with some recent approaches focusing on domain semantics. We propose algorithms for link prediction that use a probabilistic ontology to enhance the analysis of the domain and to handle the unavoidable uncertainty in the task (the ontology is specified in the probabilistic description logic crALC). The scalability of the approach is investigated through a combination of semantic assumptions and graph-based features. We empirically evaluate our proposal and compare it with standard solutions in the literature.
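As an illustration of the kind of combination involved (not the authors' crALC-based algorithm), the sketch below scores candidate links with a standard graph feature, the Adamic-Adar index, weighted by a domain prior standing in for the probability a probabilistic ontology would assign to the relationship; the graph, prior values, and names are illustrative assumptions.

    # Sketch: graph-based link scoring combined with a semantic prior.
    import math
    import networkx as nx   # assumes NetworkX is available

    def adamic_adar(g, u, v):
        """Adamic-Adar index: sum of 1/log(degree) over common neighbors
        (a common neighbor always has degree >= 2, so the log is positive)."""
        return sum(1.0 / math.log(g.degree(w)) for w in nx.common_neighbors(g, u, v))

    def semantic_score(g, u, v, prior):
        """Weight the graph feature by a domain prior P(link | semantics)."""
        return prior.get((u, v), prior.get((v, u), 0.5)) * adamic_adar(g, u, v)

    g = nx.karate_club_graph()                  # toy social network
    prior = {(0, 33): 0.9, (5, 16): 0.1}        # hypothetical ontology-derived probabilities
    for pair in [(0, 33), (5, 16)]:
        print(pair, round(semantic_score(g, *pair, prior), 3))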