966 results for Dynamic processes
Abstract:
Business environments have become exceedingly dynamic and competitive in recent times. This dynamism is manifested in the form of changing process requirements and time constraints. Workflow technology is currently one of the most promising fields of research in business process automation. However, workflow systems to date do not provide the flexibility necessary to support the dynamic nature of business processes. In this paper we primarily discuss the issues and challenges related to managing change and time in workflows representing dynamic business processes. We also present an analysis of workflow modifications and provide feasibility considerations for the automation of this process.
Abstract:
This thesis describes the design and implementation of an interactive dynamic simulator called DASPII. The starting point of this research was an existing dynamic simulation package, DASP. DASPII is written in standard FORTRAN 77 and is implemented on universally available IBM-PC or compatible machines. It provides a means for the analysis and design of chemical processes. Industrial interest in dynamic simulation has increased due to the recent increase in concern over plant operability, resiliency, and safety. DASPII is an equation-oriented simulation package which allows solution of dynamic and steady-state equations. The steady-state solution can be used to initialise the dynamic simulation. A robust nonlinear algebraic equation solver has been implemented for the steady-state solution, which has increased the general robustness of DASPII compared to DASP. A graphical front end is used to generate the process flowsheet topology from a user-constructed diagram of the process. A conversational interface, aided by a database, is used to interrogate the user to complete the topological information. An original modelling strategy implemented in DASPII provides a simple mechanism for parameter switching, which creates a more flexible simulation environment. The problem description is generated by a further conversational procedure using a database. The model format used allows the same model equations to be used for dynamic and steady-state solution. All the useful features of DASP are retained in DASPII. The program has been demonstrated and verified using a number of example problems, and significant improvements using the new NLAE solver have been shown. Topics requiring further research are described. The benefits of variable switching in models have been demonstrated with a literature problem.
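The core numerical ideas above, a Newton-type nonlinear algebraic equation (NLAE) solver for the steady state whose solution then initialises the dynamic integration, can be sketched in miniature. The toy first-order model and all parameter values below are illustrative assumptions, not DASPII's actual FORTRAN implementation:

```python
# Minimal sketch (not DASPII itself): the same model equations serve both
# steady-state solution and dynamic simulation, as the abstract describes.
# The first-order tank model and its parameters are invented for illustration.

def residual(x, inflow, k):
    """f(x) = dx/dt for a first-order process; f(x) = 0 at steady state."""
    return inflow - k * x

def newton_steady_state(f, x0, tol=1e-10, max_iter=50):
    """Solve f(x) = 0 with Newton's method and a numerical derivative."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        h = 1e-6 * max(abs(x), 1.0)
        dfdx = (f(x + h) - fx) / h
        x -= fx / dfdx
    return x

def simulate(x0, inflow, k, dt=0.01, t_end=5.0):
    """Explicit-Euler dynamic run reusing the same model equations."""
    x, t = x0, 0.0
    while t < t_end:
        x += dt * residual(x, inflow, k)
        t += dt
    return x

# Initialise the dynamic run from steady state, then apply a step in inflow.
xss = newton_steady_state(lambda x: residual(x, 2.0, 0.5), x0=1.0)
x_final = simulate(xss, inflow=3.0, k=0.5)   # relaxes toward 3.0/0.5 = 6.0
```

Initialising from a converged steady state, as DASPII does, avoids spurious start-up transients in the dynamic run.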
Abstract:
Purpose – The objective of this paper is to address whether and how firms can follow a standard management process to cope with emerging corporate social responsibility (CSR) challenges. Both researchers and practitioners have paid increasing attention to this question because of the rapidly evolving CSR expectations of stakeholders and the limited diffusion of CSR standardization. The question was addressed by developing a theoretical framework to explain how dynamic capabilities can contribute to effective CSR management. Design/methodology/approach – Based on 64 world-leading companies' contemporary CSR reports, we carried out a large-scale content analysis to identify and examine the common organizational processes involved in CSR management and the dynamic capabilities underpinning those management processes. Findings – Drawing on the dynamic capabilities perspective, we demonstrate how the deployment of three dynamic capabilities for CSR management, namely scanning, sensing, and reconfiguration capabilities, can help firms to meet emerging CSR requirements by following a set of common management processes. The findings demonstrate that what matters most in CSR standardization is the identification and development of the underlying dynamic capabilities and the related organizational processes and routines, rather than the detailed operational activities. Originality/value – Our study is an early attempt to examine the fundamental organizational capabilities and processes involved in CSR management from the dynamic capabilities perspective. Our findings contribute to the CSR standardization literature by providing a new theoretical perspective to better understand the capabilities enabling common CSR management processes.
Abstract:
Investigation of large, destructive earthquakes is challenged by their infrequent occurrence and the remote nature of geophysical observations. This thesis sheds light on the source processes of large earthquakes from two perspectives: robust and quantitative observational constraints through Bayesian inference for earthquake source models, and physical insights on the interconnections of seismic and aseismic fault behavior from elastodynamic modeling of earthquake ruptures and aseismic processes.
To constrain the shallow deformation during megathrust events, we develop semi-analytical and numerical Bayesian approaches to explore the maximum resolution of the tsunami data, with a focus on incorporating the uncertainty in the forward modeling. These methodologies are then applied to invert for the coseismic seafloor displacement field in the 2011 Mw 9.0 Tohoku-Oki earthquake using near-field tsunami waveforms and for the coseismic fault slip models in the 2010 Mw 8.8 Maule earthquake with complementary tsunami and geodetic observations. From posterior estimates of model parameters and their uncertainties, we are able to quantitatively constrain the near-trench profiles of seafloor displacement and fault slip. Similar characteristic patterns emerge during both events, featuring the peak of uplift near the edge of the accretionary wedge with a decay toward the trench axis, with implications for fault failure and tsunamigenic mechanisms of megathrust earthquakes.
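The Bayesian inversion strategy above can be illustrated with a deliberately tiny sketch: a single source parameter, a linear forward model, and forward-modeling uncertainty folded into the data covariance. Everything here (model, noise levels, data) is invented for illustration and is not the thesis's actual methodology:

```python
# Toy Bayesian inversion: posterior over one "slip" parameter on a grid,
# with forward-model uncertainty added to the data variance.
# All numbers below are illustrative assumptions.
import math

def posterior_grid(observations, predict, sigma_data, sigma_model, grid):
    """Normalised Gaussian posterior on a parameter grid (flat prior)."""
    sigma2 = sigma_data**2 + sigma_model**2   # combined uncertainty
    post = []
    for m in grid:
        misfit = sum((d - predict(m))**2 for d in observations)
        post.append(math.exp(-0.5 * misfit / sigma2))
    z = sum(post)
    return [p / z for p in post]

# Hypothetical linear forward model d = 2*m, true m near 1.5, noisy data:
data = [3.1, 2.9, 3.05]
grid = [i * 0.01 for i in range(0, 301)]          # m in [0, 3]
post = posterior_grid(data, lambda m: 2.0 * m, 0.1, 0.05, grid)
m_map = grid[post.index(max(post))]               # posterior mode
```

In the real problem the parameter is a full displacement or slip field and the forward model is a tsunami/geodetic Green's function, but the structure, likelihood plus forward-model covariance yielding a posterior with quantified uncertainty, is the same.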
To understand the behavior of earthquakes at the base of the seismogenic zone on continental strike-slip faults, we simulate the interactions of dynamic earthquake rupture, aseismic slip, and heterogeneity in rate-and-state fault models coupled with shear heating. Our study explains the long-standing enigma of seismic quiescence on major fault segments known to have hosted large earthquakes by deeper penetration of large earthquakes below the seismogenic zone, where mature faults have well-localized creeping extensions. This conclusion is supported by the simulated relationship between seismicity and large earthquakes as well as by observations from recent large events. We also use the modeling to connect the geodetic observables of fault locking with the behavior of seismicity in numerical models, investigating how a combination of interseismic geodetic and seismological estimates could constrain the locked-creeping transition of faults and potentially their co- and post-seismic behavior.
Abstract:
Background: Detailed analysis of the dynamic interactions among biological, environmental, social, and economic factors that favour the spread of certain diseases is extremely useful for designing effective control strategies. Diseases like tuberculosis, which kills someone every 15 seconds worldwide, require methods that take the disease dynamics into account to design truly efficient control and surveillance strategies. The usual and well-established statistical approaches provide insights into the cause-effect relationships that favour disease transmission, but they only estimate risk areas and spatial or temporal trends. Here we introduce a novel approach that reveals the dynamical behaviour of disease spreading. This information can subsequently be used to validate mathematical models of the dissemination process, from which the underlying mechanisms responsible for this spreading can be inferred. Methodology/Principal Findings: The method presented here is based on the analysis of the spread of tuberculosis in a Brazilian endemic city during five consecutive years. The detailed analysis of the spatio-temporal correlation of the yearly geo-referenced data, using different characteristic times of the disease evolution, allowed us to trace the temporal path of the aetiological agent, to locate the sources of infection, and to characterize the dynamics of disease spreading. Consequently, the method also allowed for the identification of socio-economic factors that influence the process. Conclusions/Significance: The information obtained can contribute to more effective budget allocation, drug distribution, and recruitment of skilled human resources, as well as guiding the design of vaccination programs. We propose that this novel strategy can also be applied to the evaluation of other diseases as well as other social processes.
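One simple way to quantify the kind of spatio-temporal correlation the abstract refers to is a Knox-type pair count: case pairs close in both space and time, in excess of chance, hint at a common, travelling source of infection. The statistic and the hypothetical case list below are illustrative only, not the paper's actual method or data:

```python
# Illustrative sketch (not the paper's exact method): count geo-referenced
# case pairs that are close in both space and time (a Knox-type statistic).
import math

def space_time_pairs(cases, r, tau):
    """Count case pairs within distance r (coordinate units)
    and within tau years of each other."""
    n = 0
    for i in range(len(cases)):
        xi, yi, ti = cases[i]
        for j in range(i + 1, len(cases)):
            xj, yj, tj = cases[j]
            if math.hypot(xi - xj, yi - yj) <= r and abs(ti - tj) <= tau:
                n += 1
    return n

# Hypothetical geo-referenced cases: (x_km, y_km, year)
cases = [(0.0, 0.0, 1), (0.2, 0.1, 1), (5.0, 5.0, 3), (5.1, 4.9, 3)]
close = space_time_pairs(cases, r=0.5, tau=0)
total = space_time_pairs(cases, r=10.0, tau=5)
```

Repeating the count for different distance radii and time lags (the "characteristic times" of the disease) is what lets the temporal path of spread be traced.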
Abstract:
The concept of constitutional dynamic chemistry (CDC), based on the control of non-covalent interactions in supramolecular structures, holds promise for a large impact on nanoscience and nanotechnology if adequate nanoscale manipulation methods are used. In this study, we demonstrate that the layer-by-layer (LbL) technique may be used to produce electroactive electrodes with ITO coated by tetrasulfonated nickel phthalocyanine (NiTsPc) alternated with poly(allylamine hydrochloride) (PAH) incorporating gold nanoparticles (AuNP), in which synergy has been achieved in the interaction between the nanoparticles and NiTsPc. The catalytic activity toward hydrogen peroxide (H(2)O(2)) in multilayer films was investigated using cyclic voltammetry. Oxidation of H(2)O(2) led to increased currents in the PAH-AuNP/NiTsPc films for the electrochemical processes associated with the phthalocyanine ring and nickel, at 0.52 and 0.81 V vs. SCE, respectively, while for PAH/NiTsPc films (without AuNP) only the first redox process was affected. In control experiments we found that the catalytic activity was not solely due to the presence of AuNP, but rather to the nanoparticles inducing NiTsPc supramolecular structures that favored access to their redox sites, thus yielding strong charge transfer. The combined effects of NiTsPc and AuNP, which could only be observed in nanostructured LbL films, point to another avenue to pursue within the CDC paradigm.
Abstract:
The extracellular hemoglobin of Glossoscolex paulistus (HbGp) is constituted of subunits containing heme groups, monomers and trimers, and nonheme structures called linkers, and the whole protein has a minimum molecular mass near 3.1 x 10(6) Da. This and other proteins of the same family are useful model systems for developing blood substitutes due to their extracellular nature, large size, and resistance to oxidation. HbGp samples were studied by dynamic light scattering (DLS). In the pH range 6.0-8.0, HbGp is stable and has a monodisperse size distribution with a z-average hydrodynamic diameter (D-h) of 27 +/- 1 nm. A more alkaline pH induced an irreversible dissociation process, resulting in a smaller D-h of 10 +/- 1 nm. The decrease in D-h suggests a complete hemoglobin dissociation. Gel filtration chromatography was used to show unequivocally the oligomeric dissociation observed at alkaline pH. At pH 9.0, the dissociation kinetics is slow, taking a minimum of 24 h to be completed. Dissociation rate constants progressively increase at higher pH, becoming, at pH 10.5, not detectable by DLS. Protein temperature stability was also pH-dependent. Melting curves for HbGp showed oligomeric dissociation and protein denaturation as a function of pH. Dissociation temperatures were lower at higher pH. Kinetic studies were also performed using ultraviolet-visible absorption at the Soret band. Optical absorption monitors the hemoglobin autoxidation, while DLS gives information regarding particle size changes in the process of protein dissociation. Absorption was analyzed at different pH values in the range 9.0-9.8 and at two temperatures, 25 degrees C and 38 degrees C. At 25 degrees C, for pH 9.0 and 9.3, the kinetics monitored by ultraviolet-visible absorption presents a monoexponential behavior, whereas for pH 9.6 and 9.8, a biexponential behavior was observed, consistent with heme heterogeneity at more alkaline pH.
The kinetics at 38 degrees C is faster than that at 25 degrees C and is biexponential over the whole pH range. DLS dissociation rates are faster than the autoxidation rates at 25 degrees C. Autoxidation and dissociation processes are intimately related, so that oligomeric protein dissociation promotes an increase of the autoxidation rate and vice versa. The effect of dissociation is to change the kinetic character of heme autoxidation from monoexponential to biexponential, whereas the reverse change is not as effective. This work shows that DLS can be used to follow, quantitatively and in real time, the kinetics of changes in the oligomerization of complex biological supramolecular systems. Such information is relevant for the development of mimetic systems to be used as blood substitutes.
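The mono- versus biexponential classification of the absorption decays can be made concrete with a small numerical sketch: synthetic biexponential data retain a clearly non-zero residual under the best single-exponential description. All amplitudes and rate constants below are invented for illustration, not fitted values from the study:

```python
# Sketch of the two kinetic models used to classify the Soret-band decays.
# Amplitudes and rate constants are illustrative assumptions.
import math

def mono(t, a, k):
    return a * math.exp(-k * t)

def biexp(t, a1, k1, a2, k2):
    return a1 * math.exp(-k1 * t) + a2 * math.exp(-k2 * t)

def sse_mono(data, a, k):
    """Sum of squared residuals of a monoexponential against the data."""
    return sum((y - mono(t, a, k))**2 for t, y in data)

# Synthetic biexponential decay sampled over time (two well-separated rates):
data = [(t, biexp(t, 0.6, 1.0, 0.4, 0.1)) for t in range(0, 20)]

# Best single exponential over a coarse grid of rate constants:
best = min(sse_mono(data, 1.0, k / 100) for k in range(1, 200))
# 'best' stays well above zero: one exponential cannot capture two phases.
# A true monoexponential decay, by contrast, is reproduced exactly.
sanity = sse_mono([(t, mono(t, 1.0, 0.3)) for t in range(10)], 1.0, 0.3)
```

In practice the discrimination is done by nonlinear least-squares fitting of both models and comparison of residuals, but the qualitative point is the same.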
Abstract:
Conventional threading operations involve two distinct machining processes: drilling and threading. They are therefore time consuming, since the tool must be changed and the workpiece has to be moved to another machine. This paper presents an analysis of the combined process (drilling followed by threading) using a single tool for both operations: the tap-milling tool. Before presenting the methodology used to evaluate this hybrid tool, the basics of ODS (operating deflection shapes) are briefly described. ODS and finite element modeling (FEM) were used during this research to optimize the process, aiming to achieve more stable machining conditions and longer tool life. Both methods allowed the determination of the natural frequencies and displacements of the machining center and the optimization of the workpiece fixture system. The results showed an excellent correlation between the dynamic stability of the machining center-tool holder and the tool life, avoiding premature catastrophic tool failure. Nevertheless, evidence showed that the tool is very sensitive to working conditions. Undoubtedly, the use of ODS and FEM eliminates empirical decisions concerning the optimization of machining conditions and drastically increases tool life. After the ODS and FEM studies, it was possible to optimize the process and the workpiece fixture system and to machine more than 30,000 threaded holes without reaching the tool life limit or catastrophic failure.
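The modal quantities that ODS and FEM provide, natural frequencies of the machining center and tool-holder assembly, can be sketched with a two-degree-of-freedom mass-spring idealisation. The masses and stiffnesses below are illustrative placeholders, not values from the study:

```python
# Minimal sketch of a modal calculation: natural frequencies of a 2-DOF
# mass-spring idealisation (e.g. spindle/holder + tool). Parameter values
# are illustrative assumptions, not the paper's measured data.
import math

def natural_frequencies_2dof(m1, m2, k1, k2):
    """Natural frequencies (Hz) from the eigenvalues of M^-1 K for two
    masses in series: K = [[k1+k2, -k2], [-k2, k2]], M = diag(m1, m2)."""
    a = (k1 + k2) / m1
    b = -k2 / m1
    c = -k2 / m2
    d = k2 / m2
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4.0 * det)
    lams = [(tr - disc) / 2.0, (tr + disc) / 2.0]   # eigenvalues = omega^2
    return [math.sqrt(lam) / (2.0 * math.pi) for lam in lams]

f1, f2 = natural_frequencies_2dof(m1=2.0, m2=0.5, k1=8.0e5, k2=2.0e5)
# Keeping excitation frequencies away from f1 and f2 avoids resonance,
# which is the practical point of knowing the modal picture.
```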
Abstract:
Coastal wetlands are dynamic and include the freshwater-intertidal interface. In many parts of the world such wetlands are under pressure from increasing human populations and from predicted sea-level rise. Their complexity and the limited knowledge of processes operating in these systems combine to make them a management challenge. Adaptive management is advocated for complex ecosystem management (Hackney 2000; Meretsky et al. 2000; Thom 2000; National Research Council 2003). Adaptive management identifies management aims, makes an inventory/environmental assessment, plans management actions, implements these, assesses outcomes, and provides feedback to iterate the process (Holling 1978; Walters and Holling 1990). This allows for a dynamic management system that is responsive to change. In the area of wetland management, recent adaptive approaches are exemplified by Natuhara et al. (2004) for wild bird management, Bunch and Dudycha (2004) for a river system, Thom (2000) for restoration, and Quinn and Hanna (2003) for seasonal wetlands in California. There are many wetland habitats for which we currently have only rudimentary knowledge (Hackney 2000), emphasizing the need for good information as a prerequisite for effective management. The management framework must also provide a way to incorporate the best available science into management decisions and to use management outcomes as opportunities to improve scientific understanding and provide feedback to the decision system. Figure 9.1 shows a model developed by Anorov (2004), based on the process-response model of Maltby et al. (1994), that forms a framework for the science underlying an adaptive management system in the wetland context.
Abstract:
Wet agglomeration processes have traditionally been considered an empirical art, with great difficulties in predicting and explaining observed behaviour. Industry has faced a range of problems including large recycle ratios, poor product quality control, surging and even the total failure of scale up from laboratory to full scale production. However, in recent years there has been a rapid advancement in our understanding of the fundamental processes that control granulation behaviour and product properties. This review critically evaluates the current understanding of the three key areas of wet granulation processes: wetting and nucleation, consolidation and growth, and breakage and attrition. Particular emphasis is placed on the fact that there now exist theoretical models which predict or explain the majority of experimentally observed behaviour. Provided that the correct material properties and operating parameters are known, it is now possible to make useful predictions about how a material will granulate. The challenge that now faces us is to transfer these theoretical developments into industrial practice. Standard, reliable methods need to be developed to measure the formulation properties that control granulation behaviour, such as contact angle and dynamic yield strength. There also needs to be a better understanding of the flow patterns, mixing behaviour and impact velocities in different types of granulation equipment.
Abstract:
We are witnessing an enormous growth in biological nitrogen removal from wastewater, which presents specific challenges beyond traditional COD (carbon) removal. One possibility for optimised process design is the use of biomass-supporting media. In this paper, attached growth processes (AGP) are evaluated using dynamic simulations. The advantages of these systems, previously described only qualitatively, are validated quantitatively based on a simulation benchmark for activated sludge treatment systems. This simulation benchmark is extended with a biofilm model that allows for fast and accurate simulation of the conversion of different substrates in a biofilm. The economic feasibility of the system is evaluated using the data generated with the benchmark simulations. Capital savings due to volume reduction and reduced sludge production are weighed against increased aeration costs. Effluent quality is integrated in this evaluation as well.
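The kind of dynamic simulation such benchmarks rely on can be sketched with a minimal Monod-kinetics chemostat model integrated by explicit Euler. This is an illustrative toy, not the benchmark's actual activated sludge or biofilm model, and all parameter values are invented:

```python
# Toy sketch of a dynamic wastewater-treatment simulation: Monod growth of
# biomass X on a single substrate S in a completely mixed reactor.
# All parameters (dilution rate D, yield Y, etc.) are illustrative.
def step(S, X, dt, D, S_in, mu_max, Ks, Y):
    """One explicit-Euler step of the substrate and biomass balances."""
    mu = mu_max * S / (Ks + S)              # Monod specific growth rate
    dS = D * (S_in - S) - (mu / Y) * X      # substrate balance
    dX = (mu - D) * X                       # biomass balance
    return S + dt * dS, X + dt * dX

S, X = 5.0, 100.0                            # initial substrate and biomass
for _ in range(100000):                      # integrate to steady state
    S, X = step(S, X, dt=0.001, D=0.1, S_in=200.0,
                mu_max=4.0, Ks=10.0, Y=0.67)
# At steady state mu = D, so S -> Ks*D/(mu_max - D) and X -> Y*(S_in - S).
```

Layering aeration energy and sludge-production costs on top of such state trajectories is what turns the simulation into the economic evaluation the abstract describes.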
Abstract:
This theoretical note describes an expansion of the behavioral prediction equation, in line with the greater complexity encountered in models of structured learning theory (R. B. Cattell, 1996a). This provides learning theory with a vector substitute for the simpler scalar quantities by which traditional Pavlovian-Skinnerian models have hitherto been represented. Structured learning can be demonstrated by vector changes across a range of intrapersonal psychological variables (ability, personality, motivation, and state constructs). Its use with motivational dynamic trait measures (R. B. Cattell, 1985) should reveal new theoretical possibilities for scientifically monitoring change processes (dynamic calculus model; R. B. Cattell, 1996b), such as those encountered within psychotherapeutic settings (R. B. Cattell, 1987). The enhanced behavioral prediction equation suggests that static conceptualizations of personality structure such as the Big Five model are less than optimal.
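The scalar-to-vector expansion the note describes can be sketched schematically: the behavioral prediction equation becomes a weighted sum over a vector of trait and state scores, and structured learning appears as a vector change in those scores. The weights and scores below are invented for illustration, not Cattell's published loadings:

```python
# Schematic sketch of a vectorised behavioral prediction equation:
# predicted response = sum of behavioral-index weights times scores on
# ability, personality, motivation, and state constructs (all invented).
def predict_response(weights, traits):
    """Weighted sum over a vector of trait/state scores."""
    return sum(w * t for w, t in zip(weights, traits))

weights = [0.4, 0.3, 0.2, 0.1]               # behavioral indices (illustrative)
before  = predict_response(weights, [1.0, 0.5, 0.2, 0.0])
after   = predict_response(weights, [1.0, 0.7, 0.6, 0.3])
change  = after - before   # structured learning as a vector change in traits
```

The point of the vector form is that learning shows up as coordinated change across several constructs at once, not as a single scalar increment.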
Abstract:
A research program on atmospheric boundary layer processes and local wind regimes in complex terrain was conducted in the vicinity of Lake Tekapo in the southern Alps of New Zealand, during two 1-month field campaigns in 1997 and 1999. The effects of the interaction of thermal and dynamic forcing were of specific interest, with a particular focus on the interaction of thermal forcing of differing scales. The rationale and objectives of the field and modeling program are described, along with the methodology used to achieve them. Specific research aims include improved knowledge of the role of surface forcing associated with varying energy balances across heterogeneous terrain, thermal influences on boundary layer and local wind development, and dynamic influences of the terrain through channeling effects. Data were collected using a network of surface meteorological and energy balance stations, radiosonde and pilot balloon soundings, tethered balloon and kite-based systems, sodar, and an instrumented light aircraft. These data are being used to investigate the energetics of surface heat fluxes, the effects of localized heating/cooling and advective processes on atmospheric boundary layer development, and dynamic channeling. A complementary program of numerical modeling includes application of the Regional Atmospheric Modeling System (RAMS) to case studies characterizing typical boundary layer structures and airflow patterns observed around Lake Tekapo. Some initial results derived from the special observation periods are used to illustrate progress made to date. In spite of the difficulties involved in obtaining good data and undertaking modeling experiments in such complex terrain, initial results show that surface thermal heterogeneity has a significant influence on local atmospheric structure and wind fields in the vicinity of the lake. This influence occurs particularly in the morning. 
However, dynamic channeling effects and the larger-scale thermal effect of the mountain region frequently override these more local features later in the day.
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa in fulfilment of the requirements for the degree of Master in Engenharia Informática (Informatics Engineering)