15 results for Process Modeling, Collaboration, Distributed Modeling, Collaborative Technology
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
Over the last few years, Business Process Management (BPM) has achieved increasing popularity and dissemination. An analysis of the underlying assumptions of BPM shows that it pursues two apparently contradictory goals: on the one hand, it aims at formalising work practices into business process models; on the other hand, it intends to confer flexibility on the organisation, i.e. to maintain its ability to respond to new and unforeseen situations. This paper analyses the relationship between formalisation and flexibility in business process modelling by means of an empirical case study of a BPM project in an aircraft maintenance company. A qualitative approach based on Actor-Network Theory is adopted. The paper offers two major contributions: (a) it illustrates the sociotechnical complexity involved in BPM initiatives; (b) it points towards a multidimensional understanding of the relation between formalisation and flexibility in BPM projects.
Abstract:
The article aims to analyze the process of knowledge creation in Brazilian technology-based companies, against the background of the driving and restrictive factors found in this process. As the pillars of the discussion, the four main modes of knowledge conversion of the Japanese model were used: socialization, externalization, combination and internalization. A comparative case method based on qualitative research was carried out in nine technology-based enterprises that were incubated or had recently passed through the incubation stage (so-called graduated companies) in the Technology Park of São Carlos, state of São Paulo, Brazil. Among the main results, the combination of knowledge was identified as more conscious and structured in graduated companies than in incubated companies. In contrast, it was noted that incubated companies have an environment with greater opportunities for socialization, internalization and externalization of knowledge.
Abstract:
In this study, fluid bed granulation was applied to improve the dissolution of nimodipine and spironolactone, two very poorly water-soluble drugs. Granules were obtained with different amounts of sodium dodecyl sulfate and croscarmellose sodium and then compressed into tablets. The dissolution behavior of the tablets was studied by comparing their dissolution profiles and dissolution efficiency with those obtained from physical mixtures of the drug and excipients subjected to similar conditions. Statistical analysis of the results demonstrated that the fluid bed granulation process improves the dissolution efficiency of both nimodipine and spironolactone tablets. The addition of either the surfactant or the disintegrant employed in the study proved to have a lower impact on this improvement in dissolution than the fluid bed granulation process.
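The dissolution efficiency compared above is conventionally the area under the dissolution curve expressed as a percentage of the area corresponding to complete release over the same interval. Below is a minimal sketch of that calculation; the profile values are hypothetical, not data from this study:

```python
import numpy as np

def dissolution_efficiency(t, released, max_release=100.0):
    """Dissolution efficiency (DE, %): area under the dissolution curve,
    computed by the trapezoidal rule, relative to the rectangle described
    by 100% release over the same time interval."""
    auc = np.sum(0.5 * (released[1:] + released[:-1]) * np.diff(t))
    return 100.0 * auc / (max_release * (t[-1] - t[0]))

# hypothetical profile: time (min) versus % drug released
t = np.array([0, 5, 10, 15, 30, 45, 60], dtype=float)
released = np.array([0, 12, 25, 38, 61, 74, 82], dtype=float)
print(f"DE = {dissolution_efficiency(t, released):.1f}%")
```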
Abstract:
Forward modeling is commonly applied to gravity field data of impact structures to determine the main gravity anomaly sources. In this context, we have developed 2.5-D gravity models of the Serra da Cangalha impact structure for the purpose of investigating geological bodies/structures underneath the crater. Interpretation of the models was supported by ground magnetic data acquired along profiles, as well as by high resolution aeromagnetic data. Ground magnetic data reveal the presence of short-wavelength anomalies probably related to shallow magnetic sources that could have been emplaced during the cratering process. Aeromagnetic data show that the basement underneath the crater occurs at an average depth of about 1.9 km, whereas in the region beneath the central uplift it is raised to 0.51 km below the current surface. These depths are also supported by 2.5-D gravity models showing a gentle relief for the basement beneath the central uplift area. Geophysical data were used to provide further constraints for numerical modeling of crater formation, which yielded important information on the structural modification that affected the rocks underneath the crater, as well as on shock-induced modifications of the target rocks. The results showed that the morphology is consistent with the current observations of the crater and that Serra da Cangalha was formed by a meteorite of approximately 1.4 km diameter striking at 12 km/s.
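For readers unfamiliar with forward gravity modeling, the idea is to compute the anomaly a candidate body would produce and compare it with the observed profile. The sketch below uses the simplest possible body, a buried sphere of given density contrast; it illustrates the principle only, not the 2.5-D modeling performed in the paper, and all values are hypothetical:

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def sphere_anomaly_mgal(x, depth, radius, drho):
    """Vertical gravity anomaly (mGal) along a surface profile x (m) due to
    a buried sphere: centre depth (m), radius (m), density contrast (kg/m^3)."""
    dm = drho * (4.0 / 3.0) * np.pi * radius ** 3       # excess mass, kg
    gz = G * dm * depth / (x ** 2 + depth ** 2) ** 1.5  # m/s^2 on the profile
    return gz * 1e5                                     # 1 mGal = 1e-5 m/s^2

# hypothetical uplifted basement block approximated by a sphere
x = np.linspace(-5000.0, 5000.0, 201)
print(f"peak anomaly: {sphere_anomaly_mgal(x, 1000.0, 700.0, 150.0).max():.2f} mGal")
```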
Abstract:
The growth parameters (growth rate, μ, and lag time, λ) of three different strains each of Salmonella enterica and Listeria monocytogenes in minimally processed lettuce (MPL), and their changes as a function of temperature, were modeled. MPL was packed under modified atmosphere (5% O2, 15% CO2 and 80% N2) and stored at 7-30 °C, and samples collected at different time intervals were enumerated for S. enterica and L. monocytogenes. Growth curves, and equations describing the relationship between μ and λ as a function of temperature, were constructed using the DMFit Excel add-in and linear regression, respectively. The predicted growth parameters for the pathogens observed in this study were compared to ComBase, the Pathogen Modeling Program (PMP) and data from the literature. High R² values (0.97 and 0.93) were observed for the average growth curves of the different strains of the pathogens grown on MPL. Secondary models of μ and λ for both pathogens followed a linear trend with high R² values (>0.90). The root mean square error (RMSE) showed that the models obtained are accurate and suitable for modeling the growth of S. enterica and L. monocytogenes in MPL. The current study provides growth models for these foodborne pathogens that can be used in microbial risk assessment.
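As a rough illustration of the secondary-modeling step described above, the sketch below fits a square-root (Ratkowsky-type) secondary model to growth-rate estimates. The abstract reports linear trends for μ and λ; whether the linearity is in the parameter itself or in its square root is an assumption here, and the temperatures and rates are invented for the example:

```python
import numpy as np

# hypothetical primary-model outputs: growth rate mu (1/h) per temperature (°C)
T = np.array([7.0, 10.0, 15.0, 20.0, 25.0, 30.0])
mu = np.array([0.02, 0.05, 0.11, 0.18, 0.26, 0.33])

# square-root secondary model: sqrt(mu) = b * (T - Tmin)
b, c = np.polyfit(T, np.sqrt(mu), 1)  # slope b and intercept c of the line
Tmin = -c / b                         # notional minimum growth temperature

pred = (b * (T - Tmin)) ** 2
rmse = np.sqrt(np.mean((mu - pred) ** 2))  # the accuracy measure cited above
print(f"Tmin ≈ {Tmin:.1f} °C, RMSE = {rmse:.4f} 1/h")
```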
Abstract:
Building facilities have become important infrastructures for modern productive plants dedicated to services. In this context, the control systems of intelligent buildings have evolved and their reliability has clearly improved. However, the occurrence of faults is inevitable in systems conceived, constructed and operated by humans, so a practical approach to reducing the consequences of faults is very useful. Yet only a few publications address intelligent building modeling processes that take into consideration the occurrence of faults and how to manage their consequences. In light of the foregoing, a procedure is proposed for the modeling of intelligent building control systems, considering their functional specifications in normal operation and in the event of faults. The proposed procedure adopts the concepts of discrete event systems and holons, and explores Petri nets and their extensions to represent the structure and operation of control systems for intelligent buildings under normal and abnormal situations.
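The Petri net formalism mentioned above reduces to token bookkeeping: a transition is enabled when every input place holds enough tokens, and firing it moves tokens from input to output places. Below is a minimal sketch with a hypothetical two-place fragment (normal operation versus fault detected), not taken from the paper's models:

```python
import numpy as np

# places x transitions incidence matrices for a toy control-system fragment:
# place 0 = "normal operation", place 1 = "fault detected";
# transition 0 = "fault occurs", transition 1 = "fault handled"
pre  = np.array([[1, 0],
                 [0, 1]])
post = np.array([[0, 1],
                 [1, 0]])
marking = np.array([1, 0])  # one token in "normal operation"

def enabled(m, t):
    """A transition is enabled if every input place holds enough tokens."""
    return bool(np.all(m >= pre[:, t]))

def fire(m, t):
    """Firing consumes tokens from input places and produces in outputs."""
    assert enabled(m, t)
    return m - pre[:, t] + post[:, t]

marking = fire(marking, 0)   # the fault occurs
print(marking)               # -> [0 1]: the net is now in the fault state
```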
Abstract:
The classic conservative approach to thermal process design can lead to over-processing, especially in laminar flow, where a significant distribution of temperature and residence time occurs. In order to optimize quality retention, a more comprehensive model is required. A model comprising differential equations for mass and heat transfer is proposed for the simulation of the continuous thermal processing of a non-Newtonian food in a tubular system. The model takes into account the contribution of the heating and cooling sections, the heat exchange with the ambient air, and the effective diffusion associated with non-ideal laminar flow. The case study of soursop juice processing was used to test the model. Various simulations were performed to evaluate the effect of the model assumptions. A marked difference in the predicted lethality was observed between the classic approach and the proposed model. The main advantage of the model is its flexibility to represent different aspects with a small computational time, making it suitable for process evaluation and design.
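The lethality compared above is conventionally computed by integrating the lethal rate along the temperature history of a fluid element, F = ∫ 10^((T - Tref)/z) dt. A minimal sketch of that integral follows; the reference temperature, z-value and temperature history are illustrative placeholders, not values from the study:

```python
import numpy as np

def lethality(t, temp, t_ref=121.1, z=10.0):
    """Process lethality F = integral of 10**((T - Tref)/z) dt, evaluated
    with the trapezoidal rule; t in minutes, temp in °C."""
    rate = 10.0 ** ((temp - t_ref) / z)
    return np.sum(0.5 * (rate[1:] + rate[:-1]) * np.diff(t))

# hypothetical temperature history: heating, holding, cooling
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])               # min
temp = np.array([25.0, 80.0, 118.0, 121.0, 121.0, 90.0, 40.0])  # °C
print(f"F ≈ {lethality(t, temp):.2f} min")
```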
Abstract:
Transplantation brings hope for many patients. A multidisciplinary approach in this field aims at creating biologically functional tissues to be used as implants and prostheses. The freeze-drying process allows the fundamental properties of these materials to be preserved, making future manipulation and storage easier. Optimizing a freeze-drying cycle is of great importance, since it reduces the costs of this time- and energy-consuming process while increasing product quality. Mathematical modeling is a tool for better understanding the behavior of the process variables and consequently supports optimization studies. Freeze-drying microscopy, a technique usually applied to determine critical temperatures of liquid formulations, was used in this work to determine the sublimation rates during the freeze-drying of a biological tissue. The sublimation rates were measured from the speed of the moving interface between the dried and the frozen layers at 21.33, 42.66 and 63.99 Pa. The measured values were used in a theoretical model to simulate various temperature profiles of the freeze-drying process. Good agreement between the experimental and the simulated results was found.
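The conversion from measured interface speed to sublimation rate can be sketched as a mass balance: the mass of ice removed per unit area per unit time equals the density difference between frozen and dried layers times the interface velocity. The densities and speed below are hypothetical, and this is a reading of the general method rather than the authors' exact formulation:

```python
def sublimation_flux(rho_frozen, rho_dried, interface_speed):
    """Sublimation mass flux (kg m^-2 s^-1): the ice removed per unit area
    and time as the frozen/dried interface recedes at interface_speed (m/s)."""
    return (rho_frozen - rho_dried) * interface_speed

# hypothetical values for a frozen biological tissue slab
flux = sublimation_flux(rho_frozen=1030.0, rho_dried=220.0,
                        interface_speed=2.0e-7)
print(f"sublimation flux ≈ {flux:.2e} kg/(m^2 s)")
```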
Abstract:
A mathematical model and numerical simulations are presented to investigate the dynamics of gas, oil and water flow in a pipeline-riser system. The pipeline is modeled as a lumped parameter system with two switchable states: one in which the gas is able to penetrate the riser, and another in which a liquid accumulation front prevents the gas from penetrating the riser. The riser model considers a distributed parameter system, in which movable nodes are used to evaluate local conditions along the subsystem. Mass transfer effects are modeled by using a black oil approximation. The model predicts the liquid penetration length in the pipeline and the liquid level in the riser, so it is possible to determine which type of severe slugging occurs in the system. The method of characteristics is used to simplify the differentiation of the resulting hyperbolic system of equations. The equations are discretized and integrated using an implicit method with a predictor-corrector scheme for the treatment of the nonlinearities. Simulations corresponding to severe slugging conditions are presented and compared to results obtained with the OLGA computer code, showing very good agreement. A description of the types of severe slugging for the three-phase flow of gas, oil and water in a pipeline-riser system with mass transfer effects is presented, as well as a stability map.
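The time-integration strategy named above (an implicit method with a predictor-corrector treatment of the nonlinearities) can be illustrated on a scalar toy problem: an explicit step predicts the new state, and fixed-point corrector sweeps enforce the implicit equation. This is a schematic analogue, not the paper's discretization:

```python
def implicit_step(f, y, dt, corrector_sweeps=3):
    """One backward-Euler step y_new = y + dt*f(y_new): explicit Euler
    predicts, then fixed-point iterations correct for the nonlinearity."""
    y_new = y + dt * f(y)               # predictor
    for _ in range(corrector_sweeps):   # corrector
        y_new = y + dt * f(y_new)
    return y_new

# toy nonlinear ODE dy/dt = -y**2 with y(0) = 1; exact y(1) = 0.5
y, dt = 1.0, 0.1
for _ in range(10):
    y = implicit_step(lambda v: -v * v, y, dt)
print(f"y(1) ≈ {y:.4f}")
```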
Abstract:
Solar reactors can be attractive for photodegradation processes due to their lower electrical energy demand. The performance of a solar reactor under two flow configurations, i.e., plug flow and mixed flow, is compared based on experimental results with a pilot-scale solar reactor. Aqueous solutions of phenol were used as a model for industrial wastewater containing organic contaminants. Batch experiments were carried out under clear sky, resulting in removal rates in the range of 96-100%. The dissolved organic carbon removal rate was simulated by an empirical model based on neural networks, which was fitted to the experimental data, resulting in a correlation coefficient of 0.9856. This approach made it possible to estimate the effects of process variables that could not be evaluated from the experiments. Simulations with different reactor configurations indicated relevant aspects for the design of solar reactors.
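An empirical neural-network model of the kind described above is, in essence, a regression from process variables to the removal rate. Below is a minimal sketch with scikit-learn; the input variables, the synthetic training data and the network size are all assumptions for illustration, not the model fitted in the paper:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# hypothetical inputs: [irradiance (W/m^2), H2O2 dose (mM), initial DOC (mg/L)]
rng = np.random.default_rng(0)
X = rng.uniform([300.0, 1.0, 10.0], [1000.0, 20.0, 100.0], size=(200, 3))
# synthetic stand-in for the measured DOC removal fraction
y = 0.4 * X[:, 0] / 1000 + 0.3 * X[:, 1] / 20 - 0.1 * X[:, 2] / 100

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, y)
print(f"R^2 on training data: {model.score(X, y):.4f}")
```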
Abstract:
Background: To understand the molecular mechanisms underlying important biological processes, a detailed description of the networks of gene products involved is required. In order to define and understand such molecular networks, several statistical methods have been proposed in the literature to estimate gene regulatory networks from time-series microarray data. However, several problems still need to be overcome. Firstly, information flow needs to be inferred, in addition to the correlation between genes. Secondly, we usually try to identify large networks from a large number of genes (parameters) using a smaller number of microarray experiments (samples). Due to this situation, which is rather frequent in bioinformatics, it is difficult to perform statistical tests using methods that model large gene-gene networks. In addition, most models are based on dimension reduction using clustering techniques, so the resulting network is not a gene-gene network but a module-module network. Here, we present the Sparse Vector Autoregressive (SVAR) model as a solution to these problems. Results: We have applied the SVAR model to estimate gene regulatory networks based on gene expression profiles obtained from time-series microarray experiments. Through extensive simulations, applying the SVAR method to artificial regulatory networks, we show that SVAR can infer true positive edges even when the number of samples is smaller than the number of genes. Moreover, it is possible to control for false positives, a significant advantage over other methods described in the literature, which are based on ranks or score functions. By applying SVAR to actual HeLa cell-cycle gene expression data, we were able to identify well-known transcription factor targets. Conclusion: The proposed SVAR method is able to model gene regulatory networks in the frequent situation in which the number of samples is lower than the number of genes, making it possible to naturally infer partial Granger causalities without any a priori information. In addition, we present a statistical test to control the false discovery rate, which was not previously possible using other gene regulatory network models.
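The core of a sparse VAR estimate can be sketched as one L1-penalized regression per gene on the lagged expression of all genes: nonzero coefficients suggest directed, Granger-type influences, and the penalty keeps the model identifiable when samples are scarcer than genes. The sketch below uses the Lasso as the sparsity mechanism, which is an assumption; it is not the authors' estimator and omits their false-discovery-rate test:

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_var(X, alpha=0.05):
    """Fit a sparse VAR(1) model x_t ≈ A @ x_{t-1} by one L1-penalized
    regression per gene; A[i, j] != 0 suggests gene j influences gene i."""
    past, present = X[:-1], X[1:]
    n_genes = X.shape[1]
    A = np.zeros((n_genes, n_genes))
    for i in range(n_genes):
        A[i] = Lasso(alpha=alpha, max_iter=10000).fit(past, present[:, i]).coef_
    return A

# toy data: 12 time points, 30 "genes" (fewer samples than variables)
rng = np.random.default_rng(1)
X = rng.standard_normal((12, 30))
print(f"{int((sparse_var(X) != 0).sum())} edges inferred")
```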
Abstract:
A systematic approach to modeling nonlinear systems using norm-bounded linear differential inclusions (NLDIs) is proposed in this paper. The resulting NLDI model is suitable for the application of linear control design techniques; therefore, it is possible to fulfill certain specifications for the underlying nonlinear system, within an operating region of interest in the state-space, using a linear controller designed for this NLDI model. Hence, a procedure to design a dynamic output-feedback controller for the NLDI model is also proposed in this paper. One of the main contributions of the proposed modeling and control approach is the use of the mean-value theorem to represent the nonlinear system by a linear parameter-varying model, which is then mapped into a polytopic linear differential inclusion (PLDI) within the region of interest. To avoid the combinatorial problem that is inherent in polytopic models for medium- and large-sized systems, the PLDI is transformed into an NLDI, and the whole process is carried out ensuring that all trajectories of the underlying nonlinear system are also trajectories of the resulting NLDI within the operating region of interest. Furthermore, it is also possible to choose a particular structure for the NLDI parameters to reduce the conservatism in the representation of the nonlinear system by the NLDI model; this feature is another important contribution of this paper. Once the NLDI representation of the nonlinear system is obtained, the paper proposes the application of a linear control design method to this representation. The design is based on quadratic Lyapunov functions and formulated as a search problem over a set of bilinear matrix inequalities (BMIs), which is solved using a two-step separation procedure that maps the BMIs into a set of corresponding linear matrix inequalities (LMIs). Two numerical examples demonstrate the effectiveness of the proposed approach.
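The quadratic-Lyapunov building block behind such designs can be illustrated as an LMI feasibility problem: find P ≻ 0 with Aᵢᵀ P + P Aᵢ ≺ 0 at every vertex of the inclusion. The sketch below checks this for a hypothetical two-vertex PLDI using cvxpy; the full output-feedback BMI procedure of the paper is well beyond this fragment:

```python
import numpy as np
import cvxpy as cp

# vertices of a hypothetical polytopic linear differential inclusion
A1 = np.array([[0.0, 1.0], [-2.0, -1.0]])
A2 = np.array([[0.0, 1.0], [-3.0, -0.5]])

# common quadratic Lyapunov function V(x) = x' P x as an LMI feasibility test
P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(2)]
for A in (A1, A2):
    constraints.append(A.T @ P + P @ A << -eps * np.eye(2))

problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve()
print(problem.status)  # "optimal" means a common Lyapunov matrix exists
```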
Abstract:
Polynomial Chaos Expansion (PCE) is widely recognized as a flexible tool to represent different types of random variables and processes. However, applications to real, experimental data are still limited. In this article, PCE is used to represent the random time-evolution of metal corrosion growth in marine environments. The PCE coefficients are determined so as to represent the data of 45 corrosion coupons tested by Jeffrey and Melchers (2001) at Taylors Beach, Australia. The accuracy of the representation and the possibilities for model extrapolation are considered in the study. Results show that reasonably accurate smooth representations of the corrosion process can be obtained; the main limit on accuracy is that a smooth model is used to represent non-smooth corrosion data. Random corrosion leads to time-variant reliability problems, due to resistance degradation over time. Time-variant reliability problems are not trivial to solve, especially under random process loading. Two example problems are solved herein, showing how the developed PCE representations can be employed in the reliability analysis of structures subject to marine corrosion. Monte Carlo simulation is used to solve the resulting time-variant reliability problems; however, an accurate and more computationally efficient solution is also presented.
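A one-dimensional Hermite chaos fit conveys the flavor of the representation: map the empirical distribution of measured corrosion losses onto a standard normal germ and regress on the Hermite basis. The sketch below is a simplified, single-time-instant version under those assumptions; the coupon values are invented and the paper's time-dependent calibration is not reproduced:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander
from scipy.stats import norm

def fit_pce(samples, order=3):
    """Fit d ≈ sum_k c_k He_k(xi) with xi ~ N(0, 1): pair sorted data with
    standard-normal quantiles, then least-squares on the Hermite basis."""
    d = np.sort(np.asarray(samples, dtype=float))
    n = d.size
    xi = norm.ppf((np.arange(1, n + 1) - 0.5) / n)  # matched germ values
    coeffs, *_ = np.linalg.lstsq(hermevander(xi, order), d, rcond=None)
    return coeffs

# hypothetical corrosion losses (mm) of coupons after one exposure period
loss = [0.41, 0.45, 0.47, 0.50, 0.52, 0.55, 0.58, 0.63, 0.66, 0.74]
print(fit_pce(loss))  # coeffs[0] is the mean; the rest capture the scatter
```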
Abstract:
Micelles composed of amphiphilic copolymers linked to a radioactive element are used in nuclear medicine predominantly for diagnostic applications. A relevant advantage of polymeric micelles in aqueous solution is their resulting particle size, which can vary from 10 to 100 nm in diameter. In this review, polymeric micelles labeled with radioisotopes, including technetium (99mTc) and indium (111In), and their clinical applications in several diagnostic techniques, such as single photon emission computed tomography (SPECT), gamma-scintigraphy, and nuclear magnetic resonance (NMR), are discussed. Special attention is given to the use of micelles for the diagnosis of lymphatic ducts and sentinel lymph nodes. Notably, these diagnostic techniques can be considered a significant tool for functionally exploring body systems as well as investigating molecular pathways involved in disease processes. The use of molecular modeling methodologies and computer-aided drug design strategies can also yield valuable information for the rational design and development of novel radiopharmaceuticals.
Abstract:
The discovery and development of a new drug are time-consuming, difficult and expensive. This complex process has evolved from classical methods into an integration of modern technologies and innovative strategies aimed at the design of new chemical entities to treat a variety of diseases. The development of new drug candidates is often limited by initial compounds lacking reasonable chemical and biological properties for further lead optimization. Huge libraries of compounds are frequently selected for biological screening using a variety of techniques and standard models to assess potency, affinity and selectivity. In this context, it is very important to study the pharmacokinetic profile of the compounds under investigation. Recent advances have been made in the collection of data and the development of models to assess and predict the pharmacokinetic properties (ADME: absorption, distribution, metabolism and excretion) of bioactive compounds in the early stages of drug discovery projects. This paper provides a brief perspective on the evolution of in silico ADME tools, addressing challenges, limitations, and opportunities in medicinal chemistry.