11 results for Geostatistics modeling techniques
in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
Purpose - The aim of this study is to investigate whether knowledge management (KM) contributes to the development of strategic orientation and to enhanced innovativeness, and whether these three factors contribute to improved business performance. Design/methodology/approach - A sample of 241 Brazilian companies was surveyed using Web-based questionnaires with 54 questions, with ten-point scales measuring the degree of agreement on each item of each construct. Structural equation modeling techniques were applied for model assessment and analysis of the relationships among constructs: exploratory factor analysis, confirmatory factor analysis, and path analysis were applied to the data. Findings - Effective KM contributes positively to strategic orientation. Although there is no significant direct effect of KM on innovativeness, the relationship is significant when mediated by strategic orientation. Similarly, effective KM has no direct effect on business performance, but this relationship becomes statistically significant when mediated by strategic orientation and innovativeness. Research limitations/implications - The findings indicate that KM permeates all relationships among the constructs, corroborating the argument that knowledge is an essential organizational resource that leverages all value-creating activities. The results indicate that both KM and innovativeness produce significant impacts on performance when they are aligned with a strategic orientation that enables the organization to anticipate and respond to changing market conditions. Originality/value - There is a substantial body of research on several types of relationships involving KM, strategic orientation, innovativeness and performance. This study offers an original contribution by analyzing all of those constructs simultaneously using established scales, so that comparative studies are possible.
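For illustration only, the sketch below shows the kind of mediated relationships described in this abstract as a chained-regression path analysis in Python; it is not the authors' structural equation model, the data are synthetic, and the column names (km, so, inn, perf) are hypothetical.

```python
# Minimal path-analysis sketch (not the authors' SEM): chained OLS regressions
# probing whether strategic orientation and innovativeness mediate the effect
# of knowledge management (KM) on performance. Data and names are made up.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 241  # same sample size as the survey, for flavor only
km = rng.normal(size=n)
so = 0.6 * km + rng.normal(scale=0.8, size=n)            # strategic orientation
inn = 0.5 * so + rng.normal(scale=0.8, size=n)           # innovativeness
perf = 0.4 * so + 0.5 * inn + rng.normal(scale=0.8, size=n)  # performance
df = pd.DataFrame(dict(km=km, so=so, inn=inn, perf=perf))

def fit(y_col, x_cols):
    """OLS with an intercept; returns the fitted results object."""
    return sm.OLS(df[y_col], sm.add_constant(df[x_cols])).fit()

m_so = fit("so", ["km"])                    # KM -> strategic orientation
m_in = fit("inn", ["km", "so"])             # KM -> innovativeness, mediated by SO
m_pf = fit("perf", ["km", "so", "inn"])     # direct vs. mediated effects on performance

# A small, non-significant KM coefficient alongside significant mediator
# coefficients is the pattern consistent with full mediation.
print(m_pf.params)
print(m_pf.pvalues)
```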
Abstract:
Current scientific applications have been producing large amounts of data. The processing, handling and analysis of such data require large-scale computing infrastructures such as clusters and grids. In this area, studies aim at improving the performance of data-intensive applications by optimizing data accesses. In order to achieve this goal, distributed storage systems have adopted techniques of data replication, migration, distribution, and access parallelism. However, the main drawback of those studies is that they do not take application behavior into account when performing data access optimization. This limitation motivated this paper, which applies strategies to support the online prediction of application behavior in order to optimize data access operations on distributed systems, without requiring any information on past executions. To accomplish this goal, the approach organizes application behaviors as time series and then analyzes and classifies those series according to their properties. Based on these properties, the approach selects modeling techniques to represent the series and perform predictions, which are later used to optimize data access operations. This new approach was implemented and evaluated using the OptorSim simulator, sponsored by the LHC-CERN project and widely employed by the scientific community. Experiments confirm that this new approach reduces application execution time by about 50 percent, especially when handling large amounts of data.
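A minimal sketch of the series-based prediction idea follows, using a generic ARIMA model from statsmodels on a hypothetical series of observed request sizes; the paper's own series classification, model selection, and OptorSim integration are not reproduced, and the data and model order are assumptions.

```python
# Sketch: treat recent data-access observations as a time series and forecast the
# next values so a storage layer could prefetch or replicate ahead of demand.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

access_sizes = np.array([4.1, 4.3, 4.0, 4.6, 4.8, 4.7, 5.1, 5.0, 5.4, 5.6])  # MB read per step

model = ARIMA(access_sizes, order=(1, 1, 0))   # order would come from series classification
fitted = model.fit()

forecast = fitted.forecast(steps=3)            # predicted sizes of the next three accesses
print("predicted next accesses (MB):", forecast)
```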
Abstract:
Statistical methods have been widely employed to assess the capabilities of credit scoring classification models in order to reduce the risk of wrong decisions when granting credit facilities to clients. The predictive quality of a classification model can be evaluated based on measures such as sensitivity, specificity, predictive values, accuracy, correlation coefficients and information-theoretical measures, such as relative entropy and mutual information. In this paper we analyze the performance of a naive logistic regression model (Hosmer & Lemeshow, 1989) and a logistic regression with state-dependent sample selection model (Cramer, 2004) applied to simulated data. Also, as a case study, the methodology is illustrated on a data set extracted from a Brazilian bank portfolio. Our simulation results revealed that there is no statistically significant difference in terms of predictive capacity between the naive logistic regression models and the logistic regression with state-dependent sample selection models. However, there is a strong difference between the distributions of the estimated default probabilities from these two statistical modeling techniques, with the naive logistic regression models always underestimating such probabilities, particularly in the presence of balanced samples.
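For context, here is a minimal sketch of fitting a plain (naive) logistic regression scorecard and computing sensitivity, specificity and accuracy with scikit-learn on simulated data; the state-dependent sample-selection model of Cramer (2004) is not implemented, and the features are invented.

```python
# Sketch: naive logistic regression credit-scoring model and basic classification metrics.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                       # e.g. income, debt ratio, age (standardized)
p = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 1.2 * X[:, 1])))
y = rng.binomial(1, p)                               # 1 = default, 0 = non-default

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("accuracy:   ", (tp + tn) / (tp + tn + fp + fn))
```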
Abstract:
The objective of this work was to evaluate extreme water table depths in a watershed, using methods for geographical spatial data analysis. Groundwater spatio-temporal dynamics were evaluated in an outcrop of the Guarani Aquifer System. Water table depths were estimated from water levels monitored in 23 piezometers and from time-series modeling of the records available from April 2004 to April 2011. For the generation of spatial scenarios, geostatistical techniques were used that incorporated into the prediction ancillary information on the geomorphological patterns of the watershed, derived from a digital elevation model. This procedure improved the estimates, owing to the high correlation between water levels and elevation, and added physical meaning to the predictions. The scenarios showed differences regarding the extreme levels, whether too deep or too shallow, and can support water planning, efficient water use, and sustainable water management in the watershed.
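A simplified stand-in for the geostatistical approach described above is sketched below: regress water-table depth on elevation from a digital elevation model, krige the residuals with PyKrige, and add the two surfaces back together (regression kriging). Coordinates, depths, elevations and the variogram choice are illustrative assumptions, not the paper's data or model.

```python
# Sketch: regression kriging of water-table depth using elevation as ancillary data.
import numpy as np
from pykrige.ok import OrdinaryKriging

# Hypothetical piezometer data: x, y (km), elevation (m) and observed depth (m).
x = np.array([0.0, 1.0, 2.0, 3.0, 1.5, 2.5])
y = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 0.2])
elev = np.array([620.0, 635.0, 650.0, 660.0, 645.0, 655.0])
depth = np.array([3.0, 6.5, 10.0, 12.5, 8.0, 11.0])

# 1) Linear trend of depth on elevation (the physical correlation noted in the abstract).
b1, b0 = np.polyfit(elev, depth, 1)
residuals = depth - (b0 + b1 * elev)

# 2) Ordinary kriging of the trend residuals.
ok = OrdinaryKriging(x, y, residuals, variogram_model="spherical")
gridx = np.linspace(0.0, 3.0, 30)
gridy = np.linspace(0.0, 2.0, 20)
res_grid, _ = ok.execute("grid", gridx, gridy)

# 3) Add the trend back using a (hypothetical) elevation grid to get a depth scenario.
elev_grid = 620.0 + 15.0 * gridx[None, :] + 5.0 * gridy[:, None]
depth_grid = (b0 + b1 * elev_grid) + res_grid
print(depth_grid.shape)
```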
Abstract:
Molecular modeling is growing as a research tool in Chemical Engineering studies, as can be seen from a simple search of the latest publications in the field. Molecular investigations retrieve information on properties often accessible only through expensive and time-consuming experimental techniques, such as those involved in the study of radical-based chain reactions. In this work, different quantum chemical techniques were used to study phenol oxidation by hydroxyl radicals in Advanced Oxidation Processes used for wastewater treatment. The results obtained by applying a DFT-based model showed good agreement with the available experimental values and provided qualitative insights into the mechanism of the overall reaction chain. Solvation models were also tested but were found to be limited for this reaction system at the theoretical level considered, without further parameterization.
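To give a flavor of the kind of DFT calculation involved (not the authors' specific models, functionals or basis sets), the following PySCF sketch computes a B3LYP single-point energy for the hydroxyl radical, the attacking species in the oxidation chain; the geometry and basis set are illustrative assumptions.

```python
# Sketch: unrestricted Kohn-Sham DFT (B3LYP) single-point energy for the OH radical,
# the reactive species in hydroxyl-radical-driven Advanced Oxidation Processes.
from pyscf import gto, dft

mol = gto.M(
    atom="O 0.0 0.0 0.0; H 0.0 0.0 0.97",  # approximate O-H bond length in Angstrom
    basis="6-31g",                          # illustrative basis set
    spin=1,                                 # one unpaired electron (doublet)
)

mf = dft.UKS(mol)
mf.xc = "b3lyp"
energy = mf.kernel()                        # total electronic energy in Hartree
print("E(B3LYP/6-31G) =", energy, "Eh")
```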
Abstract:
Stochastic methods based on time-series modeling combined with geostatistics can be useful tools to describe the variability of water-table levels in time and space and to account for uncertainty. Water-level monitoring networks can give information about the dynamics of the aquifer domain in both dimensions. Time-series modeling is an elegant way to treat monitoring data without the complexity of physical mechanistic models. Time-series model predictions can be interpolated spatially, with the spatial differences in water-table dynamics determined by the spatial variation in the system properties and the temporal variation driven by the dynamics of the inputs into the system. An integration of stochastic methods is presented, based on time-series modeling and geostatistics, as a framework to predict water levels for decision making in groundwater management and land-use planning. The methodology is applied in a case study in a Guarani Aquifer System (GAS) outcrop area located in the southeastern part of Brazil. Communication of results in a clear and understandable form, via simulated scenarios, is discussed as an alternative when translating scientific knowledge into applications of stochastic hydrogeology in large aquifers with limited monitoring-network coverage, such as the GAS.
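To make the time-series-plus-interpolation workflow concrete, here is a minimal, hypothetical sketch: fit a simple trend-plus-annual-cycle model to each well by least squares, forecast one step ahead, and interpolate the forecasts spatially. The framework described above uses formal time-series models and kriging; this toy uses linear interpolation and invented data purely as a stand-in.

```python
# Sketch: per-well trend + annual-cycle fit, one-step forecast, then spatial interpolation
# of the forecasts. Wells, coordinates and monthly water levels are invented.
import numpy as np
from scipy.interpolate import griddata

def forecast_next(levels, t):
    """Least-squares fit of level ~ trend + annual harmonics; predict the next month."""
    w = 2 * np.pi / 12.0
    X = np.column_stack([np.ones_like(t), t, np.sin(w * t), np.cos(w * t)])
    beta, *_ = np.linalg.lstsq(X, levels, rcond=None)
    t_next = t[-1] + 1
    return beta @ np.array([1.0, t_next, np.sin(w * t_next), np.cos(w * t_next)])

rng = np.random.default_rng(0)
t = np.arange(36, dtype=float)                       # 36 months of records per well
wells_xy = np.array([[0.0, 0.0], [1.0, 0.3], [0.4, 1.2], [1.5, 1.0]])
levels = [10 + 0.02 * t + np.sin(2 * np.pi * t / 12) + 0.1 * rng.normal(size=36)
          for _ in wells_xy]

forecasts = np.array([forecast_next(lv, t) for lv in levels])

# Interpolate the per-well forecasts onto a regular grid (stand-in for kriging).
gx, gy = np.meshgrid(np.linspace(0, 1.5, 16), np.linspace(0, 1.2, 13))
surface = griddata(wells_xy, forecasts, (gx, gy), method="linear")
print(surface.shape)
```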
Abstract:
Background: To understand the molecular mechanisms underlying important biological processes, a detailed description of the networks of gene products involved is required. In order to define and understand such molecular networks, several statistical methods have been proposed in the literature to estimate gene regulatory networks from time-series microarray data. However, several problems still need to be overcome. Firstly, information flow needs to be inferred, in addition to the correlation between genes. Secondly, we usually try to identify large networks from a large number of genes (parameters) originating from a smaller number of microarray experiments (samples). Due to this situation, which is rather frequent in Bioinformatics, it is difficult to perform statistical tests using methods that model large gene-gene networks. In addition, most of the models are based on dimension reduction using clustering techniques; therefore, the resulting network is not a gene-gene network but a module-module network. Here, we present the Sparse Vector Autoregressive (SVAR) model as a solution to these problems. Results: We applied the SVAR model to estimate gene regulatory networks based on gene expression profiles obtained from time-series microarray experiments. Through extensive simulations, applying the SVAR method to artificial regulatory networks, we show that SVAR can infer true positive edges even under conditions in which the number of samples is smaller than the number of genes. Moreover, it is possible to control for false positives, a significant advantage when compared to other methods described in the literature, which are based on ranks or score functions. By applying SVAR to actual HeLa cell cycle gene expression data, we were able to identify well-known transcription factor targets. Conclusion: The proposed SVAR method is able to model gene regulatory networks in frequent situations in which the number of samples is lower than the number of genes, making it possible to naturally infer partial Granger causalities without any a priori information. In addition, we present a statistical test to control the false discovery rate, which was not previously possible using other gene regulatory network models.
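A minimal sketch of the sparse VAR idea on synthetic data follows: regress each gene's expression at time t on all genes at time t-1 with an L1 penalty and read directed edges from the nonzero coefficients. It uses scikit-learn's Lasso as a stand-in for the authors' SVAR estimator and does not include their false-discovery-rate test; the network, penalty and thresholds are assumptions.

```python
# Sketch: sparse first-order VAR for a small synthetic "gene network".
# Nonzero lagged coefficients are taken as directed (Granger-style) edges.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n_genes, n_time = 5, 20
A_true = np.zeros((n_genes, n_genes))
A_true[0, 1] = 0.8                            # gene 1 drives gene 0
A_true[2, 4] = -0.7                           # gene 4 represses gene 2

X = np.zeros((n_time, n_genes))
X[0] = rng.normal(size=n_genes)
for t in range(1, n_time):
    X[t] = X[t - 1] @ A_true.T + 0.1 * rng.normal(size=n_genes)

past, present = X[:-1], X[1:]
A_hat = np.zeros_like(A_true)
for g in range(n_genes):                      # one sparse regression per target gene
    model = Lasso(alpha=0.05).fit(past, present[:, g])
    A_hat[g] = model.coef_

edges = np.argwhere(np.abs(A_hat) > 1e-6)     # (target, source) pairs
print("recovered edges:", edges.tolist())
```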
Abstract:
A systematic approach to modeling nonlinear systems using norm-bounded linear differential inclusions (NLDIs) is proposed in this paper. The resulting NLDI model is suitable for the application of linear control design techniques; therefore, it is possible to fulfill certain specifications for the underlying nonlinear system, within an operating region of interest in the state space, using a linear controller designed for this NLDI model. Hence, a procedure to design a dynamic output feedback controller for the NLDI model is also proposed in this paper. One of the main contributions of the proposed modeling and control approach is the use of the mean-value theorem to represent the nonlinear system by a linear parameter-varying model, which is then mapped into a polytopic linear differential inclusion (PLDI) within the region of interest. To avoid the combinatorial problem inherent to polytopic models for medium- and large-sized systems, the PLDI is transformed into an NLDI, and the whole process is carried out ensuring that all trajectories of the underlying nonlinear system are also trajectories of the resulting NLDI within the operating region of interest. Furthermore, it is also possible to choose a particular structure for the NLDI parameters to reduce the conservatism in the representation of the nonlinear system by the NLDI model; this feature is another important contribution of this paper. Once the NLDI representation of the nonlinear system is obtained, the paper proposes the application of a linear control design method to this representation. The design is based on quadratic Lyapunov functions and is formulated as a search problem over a set of bilinear matrix inequalities (BMIs), which is solved using a two-step separation procedure that maps the BMIs into a set of corresponding linear matrix inequalities. Two numerical examples are given to demonstrate the effectiveness of the proposed approach.
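The full BMI-based output-feedback design is beyond a short example, but the flavor of the LMI step can be sketched: a quadratic-Lyapunov feasibility check for a fixed stable matrix A using CVXPY. The matrix and tolerance are illustrative, and this is only the linear-matrix-inequality half of the paper's two-step procedure, not the controller synthesis itself.

```python
# Sketch: find P > 0 with A^T P + P A < 0 (quadratic Lyapunov LMI) for a fixed A.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])                 # an arbitrary Hurwitz matrix for illustration
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-3
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()                                  # feasibility SDP solved by the default conic solver

print("status:", prob.status)
print("P =\n", P.value)
```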
Abstract:
Micelles composed of amphiphilic copolymers linked to a radioactive element are used in nuclear medicine predominantly for diagnostic applications. A relevant advantage of polymeric micelles in aqueous solution is their resulting particle size, which can vary from 10 to 100 nm in diameter. In this review, polymeric micelles labeled with radioisotopes, including technetium (99mTc) and indium (111In), and their clinical applications in several diagnostic techniques, such as single photon emission computed tomography (SPECT), gamma-scintigraphy, and nuclear magnetic resonance (NMR), are discussed. The use of micelles, primarily for the diagnosis of lymphatic ducts and sentinel lymph nodes, also receives special attention. Notably, the employment of these diagnostic techniques can be considered a significant tool for functionally exploring body systems as well as for investigating molecular pathways involved in the disease process. The use of molecular modeling methodologies and computer-aided drug design strategies can also yield valuable information for the rational design and development of novel radiopharmaceuticals.
Abstract:
The discovery and development of a new drug are time-consuming, difficult and expensive. This complex process has evolved from classical methods into an integration of modern technologies and innovative strategies aimed at the design of new chemical entities to treat a variety of diseases. The development of new drug candidates is often limited by initial compounds lacking reasonable chemical and biological properties for further lead optimization. Huge libraries of compounds are frequently selected for biological screening using a variety of techniques and standard models to assess potency, affinity and selectivity. In this context, it is very important to study the pharmacokinetic profile of the compounds under investigation. Recent advances have been made in the collection of data and the development of models to assess and predict the pharmacokinetic properties (ADME - absorption, distribution, metabolism and excretion) of bioactive compounds in the early stages of drug discovery projects. This paper provides a brief perspective on the evolution of in silico ADME tools, addressing challenges, limitations, and opportunities in medicinal chemistry.
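As a small illustration of the early in silico ADME-style filtering mentioned above (not any specific tool from the paper), the following RDKit sketch computes rule-of-five descriptors for two example molecules given as SMILES strings; the molecules and thresholds are standard but the example itself is an assumption added here.

```python
# Sketch: quick rule-of-five style screen with RDKit descriptors.
from rdkit import Chem
from rdkit.Chem import Descriptors, Crippen, Lipinski

smiles = {
    "aspirin": "CC(=O)Oc1ccccc1C(=O)O",
    "caffeine": "Cn1cnc2c1c(=O)n(C)c(=O)n2C",
}

for name, smi in smiles.items():
    mol = Chem.MolFromSmiles(smi)
    mw = Descriptors.MolWt(mol)
    logp = Crippen.MolLogP(mol)
    hbd = Lipinski.NumHDonors(mol)
    hba = Lipinski.NumHAcceptors(mol)
    violations = sum([mw > 500, logp > 5, hbd > 5, hba > 10])
    print(f"{name}: MW={mw:.1f}, cLogP={logp:.2f}, HBD={hbd}, HBA={hba}, "
          f"rule-of-five violations={violations}")
```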