969 results for Geostatistics modeling techniques
Abstract:
This paper describes strategies and techniques for modeling and automatic mesh generation of the aorta and its tunics (the adventitia, media, and intima walls) using open-source codes. The models were constructed in the Blender package, and Python scripts were used to export the data necessary for mesh generation in TetGen. The proposed strategies can produce meshes of complicated, irregular volumes with a large number of elements (approximately 12,000,000 tetrahedra). These meshes can be used for computational simulations with the Finite Element Method (FEM).
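The abstract names the full pipeline (Blender model, Python export script, TetGen meshing) without showing it. Below is a minimal, hypothetical sketch of the export step, assuming it runs in Blender's Python console on an already triangulated mesh; the output follows TetGen's documented .poly layout, but the object and file names are placeholders, not the authors' actual script.

```python
# Hypothetical sketch: export the active Blender object's surface to a
# TetGen .poly file (node list + facet list + hole list). Run inside
# Blender's Python console; assumes the mesh is triangulated.
import bpy

mesh = bpy.context.active_object.data

with open("/tmp/aorta.poly", "w") as f:
    # Node list header: <# points> <dimension=3> <# attributes> <# boundary markers>
    f.write(f"{len(mesh.vertices)} 3 0 0\n")
    for i, v in enumerate(mesh.vertices, start=1):
        x, y, z = v.co
        f.write(f"{i} {x:.6f} {y:.6f} {z:.6f}\n")
    # Facet list header: <# facets> <# boundary markers>
    f.write(f"{len(mesh.polygons)} 0\n")
    for p in mesh.polygons:
        f.write("1\n")  # one polygon per facet, no holes in the facet
        idx = " ".join(str(i + 1) for i in p.vertices)  # 1-based indices
        f.write(f"{len(p.vertices)} {idx}\n")
    f.write("0\n")  # hole list: no holes in the volume
```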
Abstract:
The Finite Element Method (FEM) is a well-known technique that is extensively applied in different areas. FEM studies have targeted the improvement of cardiac ablation procedures. For such simulations, the finite element meshes should account for the size and histological features of the target structures. However, some methods and tools used to generate meshes of human body structures are still limited by oversimplified models, nontrivial preprocessing, or restrictive conditions of use. In this paper, alternatives are demonstrated for solid modeling and automatic generation of highly refined tetrahedral meshes, with quality compatible with other studies focused on mesh generation. The innovations presented here are strategies for integrating Open Source Software (OSS). The chosen techniques and strategies are presented and discussed, with cardiac structures as a first application context.
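Along the same lines as the export sketch under the previous entry, here is one plausible way to chain the OSS tools from Python; it assumes a tetgen binary on the PATH, and the quality flags and file name are illustrative rather than the settings used in the paper.

```python
# Sketch: drive TetGen from Python to build a quality tetrahedral mesh
# from an exported .poly file. Assumes the 'tetgen' binary is on PATH;
# the input file name is illustrative.
import subprocess

# -p       : tetrahedralize a piecewise linear complex (.poly input)
# -q1.414  : quality mesh, radius-edge ratio bound of 1.414
# -a0.1    : maximum tetrahedron volume constraint
subprocess.run(["tetgen", "-pq1.414a0.1", "/tmp/aorta.poly"], check=True)
# TetGen writes aorta.1.node / aorta.1.ele with the tetrahedral mesh.
```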
Abstract:
Mammalian natriuretic peptides (NPs) have been extensively investigated for use as therapeutic agents in the treatment of cardiovascular diseases. Here, we describe the isolation, sequencing, and three-dimensional homology modeling of the first C-type natriuretic peptide isolated from scorpion venom. In addition, its effects on the renal function of rats and on the mRNA expression of natriuretic peptide receptors in the kidneys are delineated. Fractionation of Tityus serrulatus venom using chromatographic techniques yielded a peptide with a molecular mass of 2190.64 Da, which exhibited the pattern of disulfide bridges characteristic of a C-type NP (TsNP, T. serrulatus Natriuretic Peptide). In the isolated perfused rat kidney assay, treatment with two concentrations of TsNP (0.03 and 0.1 μg/mL) increased the perfusion pressure, glomerular filtration rate, and urinary flow. After 60 min of treatment at both concentrations, the percentages of sodium, potassium, and chloride transport were decreased, and the urinary cGMP concentration was elevated. Natriuretic peptide receptor-A (NPR-A) mRNA expression was downregulated in kidneys treated with both concentrations of TsNP, whereas NPR-B, NPR-C, and GC-C mRNAs were upregulated at the 0.1 μg/mL concentration. In conclusion, this work describes the isolation and modeling of the first natriuretic peptide isolated from scorpion venom. In addition, examinations of the renal actions of TsNP indicate that its effects may be related to the activation of NPR-B, NPR-C, and GC-C.
Abstract:
Graduate Program in Cartographic Sciences - FCT
Abstract:
Many models for unsaturated soils have been developed in recent years, accompanying the development of experimental techniques for dealing with such soils. The benchmark among them is the Barcelona Basic Model (BBM), now incorporated in codes such as CODE_BRIGHT. Most of these models were validated against limited laboratory test results, and little validation is available for real field problems. This paper presents modeling results of field plate load tests performed under known suction on a lateritic unsaturated soil. The required input data were taken from laboratory tests performed under suction control. The modeling closely reproduces the field tests, making it possible to appreciate the influence of soil suction on the stress-settlement curve. In addition, wetting-induced (collapse) settlements were calculated from field tests and were closely matched by the numerical analysis.
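For readers unfamiliar with the BBM, its central ingredient is the loading-collapse (LC) yield curve, which lets the apparent preconsolidation pressure grow with suction and thereby predicts wetting-induced collapse. A standard statement from the original BBM literature (not reproduced from this paper) is:

```latex
% BBM loading-collapse (LC) yield curve: preconsolidation pressure p_0
% as a function of suction s
\[
  \frac{p_0(s)}{p^c} \;=\;
  \left( \frac{p_0^*}{p^c} \right)^{\frac{\lambda(0)-\kappa}{\lambda(s)-\kappa}},
  \qquad
  \lambda(s) \;=\; \lambda(0)\left[(1-r)\,e^{-\beta s} + r\right]
\]
```

Here p_0^* is the preconsolidation pressure at zero suction, p^c a reference stress, κ the elastic compressibility, and λ(s) the suction-dependent compressibility index.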
Abstract:
The adoption of ERP systems by small and medium-sized companies may not be feasible because of their cost. Moreover, when adapting an ERP to a company's particular needs, the user remains dependent on the system's vendors owing to lack of access to, and knowledge of, the underlying code. Free and open-source software can offer advantages to enterprises; however, its adoption requires the development of techniques and tools to facilitate deployment and code maintenance. This article emphasizes the importance of defining modeling architectures and reference models for the development and maintenance of open-source ERPs, in particular the ERP5 project.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The objective of this work was to evaluate extreme water table depths in a watershed using methods for geographical spatial data analysis. Groundwater spatio-temporal dynamics were evaluated in an outcrop of the Guarani Aquifer System. Water table depths were estimated from monitoring of water levels in 23 piezometers and from time-series modeling, with data available from April 2004 to April 2011. For the generation of spatial scenarios, geostatistical techniques were used that incorporated ancillary information on the geomorphological patterns of the watershed, derived from a digital elevation model. This procedure improved the estimates, owing to the high correlation between water levels and elevation, and added physical sense to the predictions. The scenarios revealed differences in the extreme levels, whether too deep or too shallow, and can support water planning, efficient water use, and sustainable water management in the watershed.
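A minimal sketch of the kind of elevation-aided interpolation described here, in the spirit of regression kriging; it assumes the PyKrige and scikit-learn packages, and every data array is a synthetic placeholder rather than the study's piezometer data.

```python
# Sketch: regression kriging of water-table levels with elevation as
# ancillary information. All data arrays are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from pykrige.ok import OrdinaryKriging

x = np.random.uniform(0, 5000, 23)                   # piezometer easting (m)
y = np.random.uniform(0, 5000, 23)                   # piezometer northing (m)
dem = 550 + 0.01 * x + np.random.normal(0, 2, 23)    # elevation at wells (m)
wt = 0.8 * dem - 430 + np.random.normal(0, 1, 23)    # water-table level (m)

# 1) Drift: linear regression of water level on elevation
drift = LinearRegression().fit(dem.reshape(-1, 1), wt)
residuals = wt - drift.predict(dem.reshape(-1, 1))

# 2) Ordinary kriging of the regression residuals
ok = OrdinaryKriging(x, y, residuals, variogram_model="spherical")
gridx = np.linspace(0, 5000, 50)
gridy = np.linspace(0, 5000, 50)
res_pred, _ = ok.execute("grid", gridx, gridy)

# 3) Final map = drift evaluated on the gridded DEM + kriged residuals
dem_grid = 550 + 0.01 * gridx[None, :] + np.zeros((50, 50))
wt_map = drift.predict(dem_grid.reshape(-1, 1)).reshape(50, 50) + res_pred
```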
Abstract:
Molecular modeling is growing as a research tool in Chemical Engineering studies, as can be seen from a simple search of the latest publications in the field. Molecular investigations retrieve information on properties often accessible only by expensive and time-consuming experimental techniques, such as those involved in the study of radical-based chain reactions. In this work, different quantum chemical techniques were used to study phenol oxidation by hydroxyl radicals in Advanced Oxidation Processes used for wastewater treatment. The results obtained with a DFT-based model showed good agreement with the available experimental values and provided qualitative insights into the mechanism of the overall reaction chain. Solvation models were also tried but were found to be limited for this reaction system at the considered theoretical level without further parameterization.
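As an illustration of the sort of calculation implied (not the paper's actual model chemistry), here is a minimal single-point DFT sketch assuming the Psi4 package; the B3LYP/6-31G* choice is arbitrary, and the PubChem geometry lookup requires internet access.

```python
# Sketch: single-point DFT energy of phenol with Psi4. Functional and
# basis set are illustrative; the paper's exact model is not reproduced.
import psi4

phenol = psi4.geometry("pubchem:phenol")  # fetches geometry from PubChem
psi4.set_options({"basis": "6-31G*"})
energy = psi4.energy("b3lyp")
print(f"B3LYP/6-31G* energy: {energy:.6f} Hartree")
```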
Abstract:
Stochastic methods based on time-series modeling combined with geostatistics can be useful tools for describing the variability of water-table levels in time and space and for accounting for uncertainty. Water-level monitoring networks can provide information about the dynamics of the aquifer domain in both dimensions. Time-series modeling is an elegant way to treat monitoring data without the complexity of physical mechanistic models. Time-series model predictions can be interpolated spatially, with the spatial differences in water-table dynamics determined by the spatial variation in the system properties and the temporal variation driven by the dynamics of the inputs into the system. An integration of stochastic methods is presented, based on time-series modeling and geostatistics, as a framework to predict water levels for decision making in groundwater management and land-use planning. The methodology is applied in a case study in a Guarani Aquifer System (GAS) outcrop area located in southeastern Brazil. Communication of results in a clear and understandable form, via simulated scenarios, is discussed as an alternative when translating scientific knowledge into applications of stochastic hydrogeology in large aquifers with limited monitoring network coverage, such as the GAS.
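A minimal sketch of the time-series half of this framework, assuming statsmodels; a plain ARIMA model stands in for whatever transfer-function/noise model the study actually used, and the series is synthetic. Its forecasts and confidence bands are what would then be interpolated spatially, e.g. with a kriging step like the one sketched under the earlier water-table entry.

```python
# Sketch: fit a simple time-series model per monitoring well; the
# forecasts would then feed a spatial interpolator. Data are synthetic.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = pd.date_range("2004-04-01", "2011-04-01", freq="MS")
levels = pd.Series(
    10 + 2 * np.sin(2 * np.pi * np.arange(len(rng)) / 12)
    + np.random.normal(0, 0.3, len(rng)),
    index=rng,
)

model = ARIMA(levels, order=(1, 0, 1)).fit()
forecast = model.get_forecast(steps=12)
print(forecast.predicted_mean.head())
print(forecast.conf_int().head())  # uncertainty band for scenario building
```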
Abstract:
Background: To understand the molecular mechanisms underlying important biological processes, a detailed description of the networks of gene products involved is required. To define and understand such molecular networks, several statistical methods have been proposed in the literature to estimate gene regulatory networks from time-series microarray data. However, several problems remain to be overcome. First, information flow needs to be inferred, in addition to the correlation between genes. Second, we usually try to identify large networks from a large number of genes (parameters) using a smaller number of microarray experiments (samples). In this situation, which is rather frequent in Bioinformatics, it is difficult to perform statistical tests with methods that model large gene-gene networks. In addition, most models rely on dimension reduction through clustering techniques, so the resulting network is not a gene-gene network but a module-module network. Here, we present the Sparse Vector Autoregressive (SVAR) model as a solution to these problems. Results: We applied the SVAR model to estimate gene regulatory networks from gene expression profiles obtained in time-series microarray experiments. Through extensive simulations applying the SVAR method to artificial regulatory networks, we show that SVAR can infer true positive edges even when the number of samples is smaller than the number of genes. Moreover, it is possible to control for false positives, a significant advantage over other methods described in the literature, which are based on ranks or score functions. By applying SVAR to actual HeLa cell-cycle gene expression data, we were able to identify well-known transcription factor targets. Conclusion: The proposed SVAR method can model gene regulatory networks in the frequent situation in which the number of samples is lower than the number of genes, making it possible to infer partial Granger causalities naturally, without any a priori information. In addition, we present a statistical test to control the false discovery rate, which was not previously possible with other gene regulatory network models.
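A generic sketch of the sparse-VAR idea (L1-penalized, gene-by-gene regression on lagged expression), assuming scikit-learn; the paper's actual estimator and its false-discovery-rate test are not reproduced here.

```python
# Sketch: sparse VAR(1) for a gene network. Each gene's expression at
# time t is regressed on all genes at t-1 with an L1 penalty, so most
# coefficients shrink to zero; nonzero A[i, j] suggests gene j -> gene i.
import numpy as np
from sklearn.linear_model import Lasso

n_genes, n_times = 50, 20               # more genes than samples, as in the paper
X = np.random.randn(n_times, n_genes)   # placeholder expression matrix

past, present = X[:-1], X[1:]
A = np.zeros((n_genes, n_genes))        # estimated transition (adjacency) matrix
for i in range(n_genes):
    lasso = Lasso(alpha=0.1).fit(past, present[:, i])
    A[i] = lasso.coef_

edges = np.argwhere(np.abs(A) > 1e-8)   # candidate regulatory edges (j -> i)
print(f"{len(edges)} candidate edges among {n_genes**2} possible")
```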
Abstract:
A systematic approach to modeling nonlinear systems using norm-bounded linear differential inclusions (NLDIs) is proposed in this paper. The resulting NLDI model is suitable for the application of linear control design techniques; therefore, it is possible to fulfill certain specifications for the underlying nonlinear system, within an operating region of interest in the state space, using a linear controller designed for this NLDI model. Hence, a procedure to design a dynamic output-feedback controller for the NLDI model is also proposed. One of the main contributions of the proposed modeling and control approach is the use of the mean-value theorem to represent the nonlinear system by a linear parameter-varying model, which is then mapped into a polytopic linear differential inclusion (PLDI) within the region of interest. To avoid the combinatorial problem inherent to polytopic models for medium- and large-sized systems, the PLDI is transformed into an NLDI, and the whole process is carried out while ensuring that all trajectories of the underlying nonlinear system are also trajectories of the resulting NLDI within the operating region of interest. Furthermore, a particular structure can be chosen for the NLDI parameters to reduce conservatism in the representation of the nonlinear system, which is another important contribution of this paper. Once the NLDI representation of the nonlinear system is obtained, the paper proposes the application of a linear control design method to this representation. The design is based on quadratic Lyapunov functions and formulated as a search problem over a set of bilinear matrix inequalities (BMIs), which is solved using a two-step separation procedure that maps the BMIs into a set of corresponding linear matrix inequalities. Two numerical examples demonstrate the effectiveness of the proposed approach.
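As a toy illustration of the final LMI step only (the paper's BMI separation procedure is not reproduced), the following sketch checks quadratic stability of a two-vertex polytopic LDI with CVXPY, assuming an SDP-capable solver such as SCS; the vertex matrices are made up.

```python
# Sketch: quadratic stability of a polytopic LDI via an LMI feasibility
# problem -- find P > 0 with Ai' P + P Ai < 0 at every vertex.
import cvxpy as cp
import numpy as np

vertices = [np.array([[0.0, 1.0], [-2.0, -1.0]]),
            np.array([[0.0, 1.0], [-3.0, -1.5]])]  # toy vertex matrices

n = 2
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n)]                 # P positive definite
constraints += [A.T @ P + P @ A << -eps * np.eye(n)  # decay at each vertex
                for A in vertices]

prob = cp.Problem(cp.Minimize(0), constraints)       # pure feasibility
prob.solve(solver=cp.SCS)
print("quadratically stable" if prob.status == "optimal" else prob.status)
```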
Abstract:
Micelles composed of amphiphilic copolymers linked to a radioactive element are used in nuclear medicine predominantly for diagnostic applications. A relevant advantage of polymeric micelles in aqueous solution is their resulting particle size, which can vary from 10 to 100 nm in diameter. This review discusses polymeric micelles labeled with radioisotopes, including technetium (99mTc) and indium (111In), and their clinical applications in several diagnostic techniques, such as single-photon emission computed tomography (SPECT), gamma scintigraphy, and nuclear magnetic resonance (NMR). Special attention is given to the use of micelles for the diagnosis of lymphatic ducts and sentinel lymph nodes. Notably, these diagnostic techniques can be considered significant tools for functionally exploring body systems and for investigating the molecular pathways involved in disease processes. The use of molecular modeling methodologies and computer-aided drug design strategies can also yield valuable information for the rational design and development of novel radiopharmaceuticals.
Abstract:
The discovery and development of a new drug are time-consuming, difficult, and expensive. This complex process has evolved from classical methods into an integration of modern technologies and innovative strategies aimed at the design of new chemical entities to treat a variety of diseases. The development of new drug candidates is often limited by initial compounds lacking reasonable chemical and biological properties for further lead optimization. Huge libraries of compounds are frequently selected for biological screening using a variety of techniques and standard models to assess potency, affinity, and selectivity. In this context, it is very important to study the pharmacokinetic profile of the compounds under investigation. Recent advances have been made in the collection of data and the development of models to assess and predict the pharmacokinetic properties (ADME: absorption, distribution, metabolism, and excretion) of bioactive compounds in the early stages of drug discovery projects. This paper provides a brief perspective on the evolution of in silico ADME tools, addressing challenges, limitations, and opportunities in medicinal chemistry.
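At the simplest end of the in silico ADME toolbox sit property filters such as Lipinski's rule of five. The sketch below computes it with RDKit; this is a crude early-stage screen, far simpler than modern ADME predictors, and the example molecule is aspirin.

```python
# Sketch: crude early-stage ADME screen -- Lipinski's rule of five
# computed with RDKit. A real ADME workflow would add permeability,
# metabolism, and clearance models on top of filters like this.
from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors, Lipinski

def rule_of_five(smiles: str) -> bool:
    mol = Chem.MolFromSmiles(smiles)
    violations = sum([
        Descriptors.MolWt(mol) > 500,      # molecular weight
        Crippen.MolLogP(mol) > 5,          # lipophilicity
        Lipinski.NumHDonors(mol) > 5,      # H-bond donors
        Lipinski.NumHAcceptors(mol) > 10,  # H-bond acceptors
    ])
    return violations <= 1  # one violation is commonly tolerated

print(rule_of_five("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin -> True
```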