915 results for models of computation
Abstract:
OBJECTIVES: to evaluate the quality of prenatal care delivered in primary care, comparing the traditional model and the Family Health Strategy. METHOD: a health service evaluation study, grounded in public health policies. Data were obtained through interviews with managers, observation at the health units, and analysis of the records of randomly selected pregnant women. Differences in structure and process indicators were assessed with the chi-square test, adopting p < 0.05 as the critical level, and by computing odds ratios with 95% confidence intervals. RESULTS: the structures of the two care models were similar. Summary process indicators created in this study, together with those recommended by public policies, pointed to a more favorable situation in the Family Health Units. For the full set of activities recommended for prenatal care, performance was deficient in both models, although slightly better in the Family Health Units. CONCLUSION: the results indicate the need for actions to improve prenatal care in both primary care models in the municipality evaluated.
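The statistical comparison described in the method (chi-square test with p < 0.05 as the critical level, odds ratios with 95% confidence intervals) can be sketched as follows. The 2x2 counts and the meaning of rows and columns are illustrative assumptions, not data from the study:

```python
import math

# Hypothetical 2x2 table: rows = care model (Family Health, traditional),
# columns = process indicator met / not met. Counts are invented.
table = [[80, 20],
         [60, 40]]
row = [sum(r) for r in table]
col = [sum(c) for c in zip(*table)]
n = sum(row)

# Pearson chi-square statistic (no continuity correction)
chi2 = sum((table[i][j] - row[i] * col[j] / n) ** 2 / (row[i] * col[j] / n)
           for i in range(2) for j in range(2))
significant = chi2 > 3.841      # critical value for p < 0.05, df = 1

# Odds ratio with a 95% Wald confidence interval on the log scale
a, b = table[0]
c, d = table[1]
odds_ratio = (a * d) / (b * c)
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci = (math.exp(math.log(odds_ratio) - 1.96 * se),
      math.exp(math.log(odds_ratio) + 1.96 * se))
```

An odds ratio whose confidence interval excludes 1 points in the same direction as a significant chi-square result, which is why the study reports both.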
Abstract:
Purpose - The purpose of this paper is to develop an efficient numerical algorithm for the self-consistent solution of the Schrödinger and Poisson equations in one-dimensional systems. The goal is to compute the charge-control and capacitance-voltage characteristics of quantum wire transistors. Design/methodology/approach - The paper presents a numerical formulation employing a non-uniform finite difference discretization scheme, in which the wavefunctions and electronic energy levels are obtained by solving the Schrödinger equation through the split-operator method, while a relaxation method in the FTCS ("Forward Time Centered Space") scheme is used to solve the two-dimensional Poisson equation. Findings - The numerical model is validated by taking previously published results as a benchmark and is then applied to yield the charge-control characteristics and the capacitance-voltage relationship for a split-gate quantum wire device. Originality/value - The paper helps to fulfill the need for C-V models of quantum wire devices. To do so, the authors implemented a straightforward calculation method for the two-dimensional electronic carrier density n(x,y). The formulation reduces the computational procedure to a much simpler problem, similar to the one-dimensional quantization case, significantly diminishing running time.
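A minimal sketch of the relaxation idea behind such a Poisson solve, assuming a uniform grid with Dirichlet boundaries and an illustrative point-like charge (the paper itself uses a non-uniform grid coupled self-consistently with the Schrödinger equation, which this toy does not reproduce):

```python
import numpy as np

# Jacobi-style relaxation for the 2D Poisson equation
#   d2phi/dx2 + d2phi/dy2 = -rho
# on an n x n grid with phi = 0 on the boundary. Grid size, charge
# density and tolerance are illustrative assumptions.
n = 41
h = 1.0 / (n - 1)
rho = np.zeros((n, n))
rho[n // 2, n // 2] = 1.0 / h**2     # point-like charge at the center
phi = np.zeros((n, n))

for _ in range(5000):
    new = phi.copy()
    # each interior point relaxes toward the average of its neighbors
    # plus the local source term
    new[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1]
                              + phi[1:-1, 2:] + phi[1:-1, :-2]
                              + h**2 * rho[1:-1, 1:-1])
    if np.max(np.abs(new - phi)) < 1e-8:   # converged
        phi = new
        break
    phi = new
```

The FTCS scheme of the paper can be read as exactly this kind of pseudo-time iteration: the potential is marched forward until the change per step falls below a tolerance.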
Abstract:
The peroxisome proliferator-activated receptors (PPARs) regulate genes involved in lipid and carbohydrate metabolism, and are targets of drugs approved for human use. Whereas the crystallographic structure of the complex of full-length PPAR gamma and RXR alpha is known, the structural alterations induced by heterodimer formation and DNA contacts are not well understood. Herein, we report a small-angle X-ray scattering analysis of the oligomeric state of hPPAR gamma alone and in the presence of retinoid X receptor (RXR). The results reveal that, in contrast with other studied nuclear receptors, which predominantly form dimers in solution, hPPAR gamma remains monomeric by itself but forms heterodimers with hRXR alpha. The low-resolution models of the hPPAR gamma/RXR alpha complexes predict significant changes in the opening angle between the heterodimerization partners (LBD) and an extended, asymmetric shape of the dimer (LBD-DBD) as compared with the X-ray structure of the full-length receptor bound to DNA. These differences between our SAXS models and the high-resolution crystallographic structure suggest that different conformations of the functional heterodimer complex may exist in solution. Accordingly, hydrogen/deuterium exchange experiments reveal that heterodimer binding to DNA promotes a more compact and less solvent-accessible conformation of the receptor complex.
Abstract:
Background. The surgical treatment of dysfunctional hips addresses a severe condition for the patient and is a costly therapy for public health. Hip resurfacing techniques seem to hold the promise of various advantages over traditional THR, particularly for young and active patients. Although the lesson provided in the past by many branches of engineering is that success in designing competitive products can be achieved only by predicting the possible scenarios of failure, to date implant quality is poorly addressed pre-clinically; revision thus remains the only, delayed, reliable end point for assessment. The aim of the present work was to model the musculoskeletal system so as to develop a protocol for predicting failure of hip resurfacing prostheses. Methods. Preliminary studies validated the technique for the generation of subject-specific finite element (FE) models of long bones from Computed Tomography data. The proposed protocol consisted of the numerical analysis of the prosthesis biomechanics through deterministic and statistical studies, so as to assess the risk of biomechanical failure under the different operative conditions the implant might face in a population of interest during various activities of daily living. Physiological conditions were defined, including the variability of the anatomy, bone densitometry, surgical uncertainties and published boundary conditions at the hip. The protocol was tested by analysing a successful design on the market and a new prototype of a resurfacing prosthesis. Results. The intrinsic accuracy of the models' bone stress predictions (RMSE < 10%) was aligned with the current state of the art in this field. The accuracy of prediction of the bone-prosthesis contact mechanics was also excellent (< 0.001 mm). The sensitivity of the models' predictions to uncertainties in modelling parameters was below 8.4%.
The analysis of the successful design resulted in very good agreement with published retrospective studies. The geometry optimisation of the new prototype led to a final design with a low risk of failure. The statistical analysis confirmed the minimal risk of the optimised design over the entire population of interest. The performance of the optimised design showed a significant improvement with respect to the first prototype (+35%). Limitations. In the authors' opinion, the major limitation of this study lies in the boundary conditions. The muscular forces and the hip joint reaction were derived from the few data available in the literature, which can be considered significant but hardly representative of the entire variability of boundary conditions the implant might face over the patient population. This moved the focus of the research onto modelling the musculoskeletal system; the ongoing activity is to develop subject-specific musculoskeletal models of the lower limb from medical images. Conclusions. The developed protocol was able to accurately predict known clinical outcomes when applied to a well-established device and to support the design optimisation phase, providing important information on critical characteristics of the patients when applied to a new prosthesis. The presented approach has a generality that would allow the extension of the protocol to a large set of orthopaedic scenarios with minor changes. Hence, a failure mode analysis criterion can be considered a suitable tool in developing new orthopaedic devices.
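The statistical part of such a protocol can be sketched as a Monte Carlo study: sample the uncertain inputs (anatomy, bone density, surgical variability, loads), evaluate a failure criterion for each sample, and estimate the risk over the population. The "model" below is a toy safety-factor expression standing in for the actual FE analysis; all distributions, units, and thresholds are invented for illustration:

```python
import random

random.seed(0)  # reproducible sampling

def safety_factor(bone_density, load):
    # Toy surrogate for the FE analysis: strength grows with density,
    # stress with applied load. Coefficients are hypothetical.
    strength = 50.0 * bone_density    # MPa, hypothetical
    stress = load / 1000.0            # MPa, hypothetical
    return strength / stress

trials = 10_000
failures = sum(
    1 for _ in range(trials)
    if safety_factor(random.gauss(1.0, 0.15),       # density, g/cm^3
                     random.gauss(2500.0, 400.0))   # load, N
    < 1.0                                           # failure criterion
)
risk = failures / trials    # estimated probability of failure
```

In the real protocol each sample would require a full FE solve, which is why deterministic worst-case studies are usually run first to bound the problem.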
Abstract:
Being basic ingredients of numerous daily-life products of significant industrial importance, as well as basic building blocks for biomaterials, charged hydrogels continue to pose a series of unanswered challenges for scientists, even after decades of practical applications and intensive research efforts. Despite a rather simple internal structure, it is mainly the unique combination of short- and long-range forces which renders scientific investigation of their characteristic properties quite difficult. Hence, computer simulations were used early on to link analytical theory and empirical experiments, bridging the gap between the simplifying assumptions of the models and the complexity of real-world measurements. Due to the immense numerical effort, even for high-performance supercomputers, system sizes and time scales were rather restricted until recently; only now has it become possible to also simulate a network of charged macromolecules. This is the topic of the present thesis, which investigates one of the fundamental and at the same time highly fascinating phenomena of polymer research: the swelling behaviour of polyelectrolyte networks. For this, an extensible simulation package for research on soft matter systems, ESPResSo for short, was created, which puts particular emphasis on mesoscopic bead-spring models of complex systems. Highly efficient algorithms and a consistent parallelization reduced the computation time necessary for solving the equations of motion, even in the case of long-range electrostatics and large numbers of particles, making it possible to tackle expensive calculations and applications. Nevertheless, the program has a modular and simple structure, enabling a continuous process of adding new potentials, interactions, degrees of freedom, ensembles, and integrators, while staying easily accessible for newcomers thanks to a Tcl-script steering level controlling the C-implemented simulation core.
Numerous analysis routines provide means to investigate system properties and observables on the fly. Even though analytical theories agreed on the modeling of networks in past years, our numerical MD simulations show that even for simple model systems the fundamental theoretical assumptions no longer apply outside a small parameter regime, prohibiting correct predictions of observables. Applying a "microscopic" analysis of the isolated contributions of individual system components, one of the particular strengths of computer simulations, it was then possible to describe the behaviour of charged polymer networks at swelling equilibrium in good solvent and close to the Theta-point by introducing appropriate model modifications. This became possible by enhancing known simple scaling arguments with components deemed crucial in our detailed study, through which a generalized model could be constructed. With it, agreement of the final system volume of swollen polyelectrolyte gels with the results of computer simulations could be shown over the entire investigated range of parameters, for different network sizes, charge fractions, and interaction strengths. In addition, the "cell under tension" was presented as a self-regulating approach for predicting the amount of swelling based on the chosen system parameters only. Without the need for measured observables as input, minimizing the free energy alone already allows one to determine the equilibrium behaviour. In poor solvent the shape of the network chains changes considerably, as their hydrophobicity now counteracts the repulsion of like-charged monomers and drives the collapse of the polyelectrolytes. Depending on the chosen parameters, a fragile balance emerges, giving rise to fascinating geometrical structures such as the so-called pearl necklaces.
This behaviour, known from single-chain polyelectrolytes under similar environmental conditions and also predicted theoretically, could here be detected for networks for the first time as well. An analysis of the total structure factors confirmed the first evidence for the existence of such structures found in experimental results.
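The structure-factor analysis mentioned above can be illustrated in miniature: pearl-necklace ordering shows up as a peak in S(q) near q = 2*pi / (pearl spacing). The sketch below computes a 1D structure factor for a toy "necklace" of two dense beads; the geometry and particle counts are illustrative assumptions, not the thesis data:

```python
import numpy as np

# Toy configuration: two "pearls" of particles centered at x = 0 and
# x = 5 (arbitrary units), each with a small Gaussian spread.
rng = np.random.default_rng(1)
N = 200
pos = np.concatenate([rng.normal(0.0, 0.2, N // 2),
                      rng.normal(5.0, 0.2, N // 2)])

# Static structure factor S(q) = |sum_j exp(i q x_j)|^2 / N
q = np.linspace(0.1, 4.0, 200)
amp = np.exp(1j * np.outer(q, pos)).sum(axis=1)
S = np.abs(amp) ** 2 / N

# the bead spacing d = 5 should produce a peak near q = 2*pi/5 ~ 1.26
```

In the thesis the same quantity is averaged over three dimensions and many configurations, but the diagnostic is identical: a characteristic peak position encodes the inter-pearl distance.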
Abstract:
This thesis is mainly devoted to showing how EEG data and related phenomena can be reproduced and analyzed using mathematical models of neural masses (NMM). The aim is to describe some of these phenomena, to show in which ways the design of the models' architecture is influenced by them, to point out the difficulties of tuning the dozens of parameters of the models in order to reproduce the activity recorded with EEG systems during different kinds of experiments, and to suggest some strategies to cope with these problems. In particular, the chapters are organized as follows: chapter I gives a brief overview of the aims and issues addressed in the thesis; in chapter II the main characteristics of the cortical column, of the EEG signal and of neural mass models are presented, in order to show the relationships that hold between these entities; chapter III describes a study in which a NMM from the literature has been used to assess brain connectivity changes in tetraplegic patients; in chapter IV a modified version of the NMM is presented, developed to overcome some of the previous version's intrinsic limitations; chapter V describes a study in which the new NMM has been used to reproduce the electrical activity evoked in the cortex by transcranial magnetic stimulation (TMS); chapter VI presents some preliminary results obtained in the simulation of the neural rhythms associated with memory recall; finally, some general conclusions are drawn in chapter VII.
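To make the notion of a neural mass model concrete, here is a minimal sketch of the classic Jansen-Rit model (a standard NMM from the literature, not necessarily the exact variant used in the thesis), integrated with a plain Euler scheme; the constant input p replaces the usual stochastic drive, and all parameter values are the commonly published ones:

```python
import math

# Jansen-Rit neural mass model: three interconnected populations
# (pyramidal cells, excitatory and inhibitory interneurons).
A, B = 3.25, 22.0            # excitatory / inhibitory synaptic gains (mV)
a, b = 100.0, 50.0           # inverse synaptic time constants (1/s)
C = 135.0                    # connectivity constant
C1, C2, C3, C4 = C, 0.8 * C, 0.25 * C, 0.25 * C
e0, v0, r = 2.5, 6.0, 0.56   # sigmoid parameters

def sigm(v):
    # population firing-rate sigmoid
    return 2 * e0 / (1 + math.exp(r * (v0 - v)))

dt = 1e-4
p = 220.0                    # constant external input (illustrative)
y = [0.0] * 6                # y0..y2 and their derivatives y3..y5
out = []
for _ in range(int(1.0 / dt)):          # simulate 1 s of activity
    y0, y1, y2, y3, y4, y5 = y
    dy = [y3, y4, y5,
          A * a * sigm(y1 - y2) - 2 * a * y3 - a * a * y0,
          A * a * (p + C2 * sigm(C1 * y0)) - 2 * a * y4 - a * a * y1,
          B * b * C4 * sigm(C3 * y0) - 2 * b * y5 - b * b * y2]
    y = [yi + dt * di for yi, di in zip(y, dy)]
    out.append(y[1] - y[2])   # EEG-like output of the pyramidal population
```

Tuning such a model means adjusting the dozen-plus constants above (and their coupling across columns) until `out` matches the recorded EEG features, which is exactly the difficulty the thesis discusses.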
Abstract:
In this thesis a mathematical model is derived that describes the charge and energy transport in semiconductor devices such as transistors, and numerical simulations of these physical processes are performed. To accomplish this, methods of theoretical physics, functional analysis, numerical mathematics and computer programming are applied. After an introduction to the status quo of semiconductor device simulation methods and a brief review of the relevant history, attention shifts to the construction of a model, which serves as the basis of the subsequent derivations in the thesis. The starting point is an important equation from the theory of dilute gases. From this equation the model equations are derived and specified by means of a series expansion method. This is done in a multi-stage derivation process, which is mainly taken from a scientific paper and does not constitute the focus of this thesis. In the following phase we specify the mathematical setting and make the model assumptions precise, making use of methods of functional analysis. Since the equations we deal with are coupled, we are concerned with a nonstandard problem; by contrast, the theory of scalar elliptic equations is by now well established. Subsequently, we turn to the numerical discretization of the equations. A special finite-element method is used for the discretization; this special approach is necessary to make the numerical results appropriate for practical application. By a series of transformations of the discrete model we derive a system of algebraic equations suitable for numerical evaluation. Using computer programs developed in-house, we solve the equations to obtain approximate solutions. These programs are based on new and specialized iteration procedures that were developed and thoroughly tested within the framework of this research.
Due to their importance and novelty, they are explained and demonstrated in detail. We compare these new iterations with a standard method, complemented by a feature to fit the current context. A further innovation is the computation of solutions in three-dimensional domains, which is still rare. Special attention is paid to the applicability of the 3D simulation tools; the programs are designed to have a justifiable working complexity. Simulation results for some models of contemporary semiconductor devices are shown and commented on in detail. Finally, we give an outlook on future development and enhancements of the models and of the algorithms that we used.
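The pipeline "finite-element discretization, then an algebraic system, then numerical solution" can be shown in its simplest form. The sketch below applies linear finite elements to the 1D model problem -u'' = f on (0,1) with homogeneous Dirichlet conditions; it is a minimal analogue only, not the coupled semiconductor equations or the specialized iterations of the thesis:

```python
import numpy as np

# 1D linear FEM for -u'' = f, u(0) = u(1) = 0, with a manufactured
# source so the exact solution is sin(pi x). Mesh size is illustrative.
n = 50                      # number of elements
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)

# stiffness matrix for hat functions: (1/h) * tridiag(-1, 2, -1)
main = np.full(n - 1, 2.0 / h)
off = np.full(n - 2, -1.0 / h)
K = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
F = h * f[1:-1]             # lumped (nodal) load vector

u = np.zeros(n + 1)
u[1:-1] = np.linalg.solve(K, F)   # solve the algebraic system

err = np.max(np.abs(u - np.sin(np.pi * x)))   # vs. exact solution
```

In the thesis the same transformation is carried out for coupled nonlinear equations in three dimensions, where the resulting algebraic systems are large enough that the choice of iteration procedure dominates the running time.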
Abstract:
This master's thesis describes the research done at the Medical Technology Laboratory (LTM) of the Rizzoli Orthopedic Institute (IOR, Bologna, Italy) from October 2012 to the present, which focused on the characterization of the elastic properties of trabecular bone tissue. The approach uses computed microtomography to characterize the architecture of trabecular bone specimens. With the information obtained from the scanner, specimen-specific models of trabecular bone are generated for solution with the Finite Element Method (FEM). Along with the FEM modelling, mechanical tests are performed on the same reconstructed bone portions. From the linear-elastic stage of the mechanical tests shown by the experimental results, it is possible to estimate the mechanical properties of the trabecular bone tissue. After a brief introduction to the biomechanics of trabecular bone (chapter 1) and to the characterization of the mechanics of its tissue using FEM models (chapter 2), the reliability analysis of an experimental procedure is explained (chapter 3), based on the highly scalable numerical solver ParFE. In chapter 4, sensitivity analyses on two different parameters for micro-FEM model reconstruction are presented. Once the reliability of the modeling strategy has been shown, a recent experimental test layout developed at LTM is presented (chapter 5), and the results of its application are discussed, with emphasis on the difficulties connected to it and observed during the tests. Finally, a prototype experimental layout for measuring deformations in trabecular bone specimens is presented (chapter 6). This procedure is based on the Digital Image Correlation method and is currently under development at LTM.
Abstract:
The aim of the present thesis was to investigate the influence of lower-limb joint models on musculoskeletal model predictions during gait. We started our analysis from a baseline model, i.e., the state-of-the-art lower-limb model (spherical joint at the hip and hinge joints at the knee and ankle), created from MRI of a healthy subject in the Medical Technology Laboratory of the Rizzoli Orthopaedic Institute. We varied the models of the knee and ankle joints, including: knee and ankle joints with mean instantaneous axis of rotation, universal joint at the ankle, scaled-generic-derived planar knee, subject-specific planar knee model, subject-specific planar ankle model, spherical knee, and spherical ankle. The joint model combinations, corresponding to 10 musculoskeletal models, were implemented in a typical inverse dynamics problem, including inverse kinematics, inverse dynamics, static optimization and joint reaction analysis algorithms, solved using the OpenSim software to calculate joint angles, joint moments, muscle forces and activations, and joint reaction forces during 5 walking trials. The predicted muscle activations were qualitatively compared to experimental EMG to evaluate the accuracy of model predictions. The planar joint at the knee, the universal joint at the ankle and the spherical joints at the knee and at the ankle produced appreciable variations in model predictions during gait trials. The planar knee joint model reduced the discrepancy between the predicted activation of the Rectus Femoris and the EMG (with respect to the baseline model), and its reduced peak knee reaction force was considered more accurate. The use of the universal joint, with the introduction of the subtalar joint, worsened the agreement of the muscle activations with the EMG, and increased ankle and knee reaction forces were predicted. The spherical joints, in particular at the knee, worsened the muscle activation agreement with the EMG.
A substantial increase of the joint reaction forces at all joints was predicted despite the good agreement of the joint kinematics with those of the baseline model. The introduction of the universal joint had a negative effect on the model predictions. The cause of this discrepancy is likely to be found in the definition of the subtalar joint and thus in the particular subject's anthropometry used to create the model and define the joint pose. We concluded that the implementation of complex joint models does not have marked effects on the joint reaction forces during gait. The computed results were similar in magnitude and pattern to those reported in the literature. Nonetheless, the introduction of the planar joint model at the knee had a positive effect on the predictions, while the use of a spherical joint at the knee and/or at the ankle is inadvisable, because it predicted unrealistic joint reaction forces.
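The inverse-dynamics step at the core of this pipeline can be illustrated on a drastically reduced system: a single planar limb segment with a prescribed joint angle trajectory, for which the joint moment follows from tau = I*theta_dd + m*g*lc*sin(theta). Segment mass, length, and motion are illustrative assumptions, not the subject-specific values of the thesis, and the full OpenSim workflow additionally distributes this moment over muscles via static optimization:

```python
import math

# Toy inverse dynamics for one planar segment swinging about a fixed
# joint (e.g. a shank-like pendulum). Parameters are hypothetical.
m, L = 3.5, 0.4          # segment mass (kg) and length (m)
lc = L / 2               # distance from joint to center of mass (m)
I = m * L**2 / 3         # moment of inertia about the joint (kg m^2)
g = 9.81                 # gravitational acceleration (m/s^2)

def moment(t, amp=0.3, w=2 * math.pi):
    # prescribed kinematics: theta(t) = amp * sin(w t)
    theta = amp * math.sin(w * t)
    theta_dd = -amp * w**2 * math.sin(w * t)
    # net joint moment required to produce this motion
    return I * theta_dd + m * g * lc * math.sin(theta)

tau = [moment(i * 0.01) for i in range(100)]   # one cycle, 10 ms steps
```

Changing the joint model (hinge vs. planar vs. spherical) changes the kinematic constraint, and therefore the accelerations fed into exactly this kind of equation, which is how the joint-model choice propagates into moments, muscle forces, and joint reaction forces.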
Abstract:
Over the last decades, considerable efforts have been undertaken in the development of animal models mimicking the pathogenesis of allergic diseases occurring in humans. The mouse has rapidly emerged as the animal model of choice, due to considerations of handling and costs and, importantly, due to the availability of a large and increasing arsenal of genetically modified mouse strains and molecular tools facilitating the analysis of complex disease models. Here, we review latest developments in allergy research that have arisen from in vivo experimentation in the mouse, with a focus on models of food allergy and allergic asthma, which constitute major health problems with increasing incidence in industrialized countries. We highlight recent novel findings and controversies in the field, most of which were obtained through the use of gene-deficient or germ-free mice, and discuss new potential therapeutic approaches that have emerged from animal studies and that aim at attenuating allergic reactions in human patients.
Abstract:
DNA sequence copy number has been shown to be associated with cancer development and progression. Array-based Comparative Genomic Hybridization (aCGH) is a recent development that seeks to identify the copy number ratio at large numbers of markers across the genome. Due to experimental and biological variations across chromosomes and across hybridizations, current methods are limited to analyses of single chromosomes. We propose a more powerful approach that borrows strength across chromosomes and across hybridizations. We assume a Gaussian mixture model, with a hidden Markov dependence structure, and with random effects to allow for intertumoral variation, as well as intratumoral clonal variation. For ease of computation, we base estimation on a pseudolikelihood function. The method produces quantitative assessments of the likelihood of genetic alterations at each clone, along with a graphical display for simple visual interpretation. We assess the characteristics of the method through simulation studies and through analysis of a brain tumor aCGH data set. We show that the pseudolikelihood approach is superior to existing methods both in detecting small regions of copy number alteration and in accurately classifying regions of change when intratumoral clonal variation is present.
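A stripped-down version of the hidden-Markov machinery described above can be sketched as a three-state (loss / neutral / gain) Gaussian HMM decoded with the Viterbi algorithm. This toy omits the paper's random effects, pseudolikelihood estimation, and cross-chromosome borrowing; the emission means, transition matrix, and simulated log2-ratio profile are all illustrative assumptions:

```python
import numpy as np

# Fixed model parameters (illustrative, not estimated)
means = np.array([-0.5, 0.0, 0.5])   # emission means: loss, neutral, gain
sd = 0.2                              # shared emission s.d.
log_trans = np.log(np.full((3, 3), 0.05) + np.eye(3) * 0.85)  # sticky states
log_init = np.log(np.ones(3) / 3)

# Toy log2-ratio profile: neutral, a short gained region, neutral again
obs = np.concatenate([np.zeros(10), np.full(5, 0.5), np.zeros(10)])
obs = obs + np.random.default_rng(0).normal(0.0, 0.1, obs.size)

def log_emis(x):
    # Gaussian log-density of observation x under each state
    return -0.5 * ((x - means) / sd) ** 2 - np.log(sd * np.sqrt(2 * np.pi))

# Viterbi decoding
T = obs.size
delta = log_init + log_emis(obs[0])
back = np.zeros((T, 3), dtype=int)
for t in range(1, T):
    scores = delta[:, None] + log_trans      # scores[i, j]: from i to j
    back[t] = scores.argmax(axis=0)
    delta = scores.max(axis=0) + log_emis(obs[t])

path = np.zeros(T, dtype=int)
path[-1] = int(delta.argmax())
for t in range(T - 1, 0, -1):
    path[t - 1] = back[t, path[t]]
# with these settings the gained region (positions 10..14) should
# decode mostly to state 2 (gain), the flanks to state 1 (neutral)
```

The paper's method replaces the fixed Gaussians with a mixture including tumor-specific random effects and estimates everything by pseudolikelihood, but the decoding of per-clone alteration states follows the same dynamic-programming logic.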