992 results for direct operational calculus


Relevance:

40.00%

Publisher:

Abstract:

This work evaluated the influence of specific operational conditions on the performance of a spiral-wound ultrafiltration pilot plant for direct drinking water treatment, installed at the Guarapiranga reservoir in the São Paulo Metropolitan Region. Operational tests showed that the volume of permeate produced when periodic relaxation was combined with flushing and chlorine dosing was 49% higher than the volume obtained without these procedures. Two years of continuous operation demonstrated that the pilot plant performed better during the fall and winter seasons, with higher permeate production and a reduced frequency of chemical cleanings. This behavior appears to be associated with algal bloom events in the reservoir, which are more frequent during spring and summer, as confirmed by chlorophyll-a analyses. Concentrate clarification using ferric chloride was quite effective in removing natural organic matter (NOM) and turbidity, allowing the concentrate to be recirculated to the ultrafiltration feed tank. This procedure made it possible to reach almost 99% water recovery over a single 54-hour recirculation cycle. Water quality monitoring showed the pilot plant to be highly efficient: removals of potentially pathogenic organisms (Escherichia coli and total coliforms), turbidity, and apparent color were 100%, 95.1%, and 91.5%, respectively. © 2012 Elsevier B.V. All rights reserved.
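
The recovery figure quoted above is simple arithmetic: water recovery is the permeate volume divided by the raw feed volume. A minimal sketch, with made-up volumes rather than data from the study:

# Hedged sketch: water recovery in a recirculating ultrafiltration system.
# The volumes below are illustrative placeholders, not values from the study.

def water_recovery(permeate_volume_m3: float, feed_volume_m3: float) -> float:
    """Fraction of the raw feed that ends up as permeate."""
    return permeate_volume_m3 / feed_volume_m3

# Over a hypothetical 54-hour recirculation cycle in which the concentrate
# is clarified and returned to the feed tank, nearly all feed becomes permeate.
feed_m3 = 100.0       # raw water fed to the plant (assumed)
permeate_m3 = 98.9    # permeate produced (assumed)
print(f"Recovery: {water_recovery(permeate_m3, feed_m3):.1%}")  # ~98.9%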

Relevance:

30.00%

Publisher:

Abstract:

Purpose – The purpose of this paper is to examine the role of three strategies – organisational, business and information system – in the post-implementation of technological innovations. The findings reported in the paper are that improvements in operational performance can only be achieved by aligning technological innovation effectiveness with operational effectiveness.
Design/methodology/approach – A combination of qualitative and quantitative methods was used in a two-stage methodological approach. Unstructured and semi-structured interviews, informed by the findings of the literature, were used to identify the key factors on which the survey instrument was based. Confirmatory factor analysis (CFA) was used to examine structural relationships between the set of observed variables and the set of continuous latent variables.
Findings – Initial findings suggest that organisations seeking improvements in operational performance through the adoption of technological innovations need to align those innovations with the operational strategies of the firm. Operational effectiveness and technological innovation effectiveness are directly and significantly related to improved operational performance. Perceived increases in operational effectiveness are positively and significantly correlated with improved operational performance, and technological innovation effectiveness is likewise positively correlated with improved operational performance. However, the study found no direct influence of the strategies – organisational, business and information systems (IS) – on improvement in operational performance. Improved operational performance results from interactions between the implementation of strategies and the related outcomes of both technological innovation and operational effectiveness.
Practical implications – Some organisations use technological innovations such as enterprise information systems to innovate through improvements in operational performance. However, they often focus strategically only on the effectiveness of the technological innovation or on operational effectiveness. Such a narrow focus is detrimental to the enterprise in the long term. This research demonstrates that maximum returns from technological innovations cannot be achieved unless the dimensions of operational effectiveness are aligned with those innovations.
Originality/value – No single technological innovation implementation can deliver a sustained competitive advantage; rather, an advantage is obtained through the capacity of an organisation to exploit technological innovations' functionality on a continuous basis. To achieve sustainable results, technology strategy must be aligned with organisational and operational strategies. This research proposes the key performance objectives and dimensions on which organisations should focus to achieve strategic alignment.
Research limitations/implications – The principal limitation of this study is that the findings are based on a small sample. The influence of scale should be explored before generalising the results of this study.
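
A hedged sketch of the kind of confirmatory factor analysis described above, using the semopy package with lavaan-style model syntax. The construct names, indicator items (q1..q10) and file name are hypothetical illustrations, not the authors' actual instrument or data:

# Hedged CFA sketch; constructs and indicators are invented examples.
# Requires: pip install semopy pandas
import pandas as pd
import semopy

# Measurement model: each latent construct is measured by survey items,
# plus one structural path from innovation effectiveness to performance.
MODEL_DESC = """
OrgStrategy            =~ q1 + q2
BizStrategy            =~ q3 + q4
ISStrategy             =~ q5 + q6
TechInnovEffectiveness =~ q7 + q8
OperationalPerformance =~ q9 + q10
OperationalPerformance ~ TechInnovEffectiveness
"""

data = pd.read_csv("survey_responses.csv")  # hypothetical survey data
model = semopy.Model(MODEL_DESC)
model.fit(data)
print(model.inspect())  # factor loadings, structural paths, p-values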

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this article is to examine the role of the alignment between technological innovation effectiveness and operational effectiveness after the implementation of enterprise information systems, and the impact of this alignment on improvement in operational performance. Confirmatory factor analysis was used to examine structural relationships between the set of observed variables and the set of continuous latent variables. The findings suggest that the dimensions of technological innovation effectiveness (system quality, information quality, service quality and user satisfaction) and the performance objectives of operational effectiveness (cost, quality, reliability, flexibility and speed) are important and significantly correlated factors. These factors promote the alignment between technological innovation effectiveness and operational effectiveness, and should be the focus for managers seeking effective implementation of technological innovations. In addition, this alignment has a significant and direct influence on the improvement of operational performance. The principal limitation of this study is that the findings are based on a small sample.

Relevance:

30.00%

Publisher:

Abstract:

A direct borohydride fuel cell (DBFC) employing a poly(vinyl alcohol) hydrogel membrane electrolyte (PHME) is reported. The DBFC employs an AB5 misch metal alloy as anode and a gold-plated stainless steel mesh as cathode, in conjunction with an aqueous alkaline solution of sodium borohydride as fuel and an aqueous acidified solution of hydrogen peroxide as oxidant. Room-temperature performance of the PHME-based DBFC in respect of peak power output; ex-situ crossover of oxidant, fuel, anolyte and catholyte across the membrane electrolyte; utilization efficiencies of fuel and oxidant; and cell durability is compared with a similar DBFC employing a Nafion®-117 membrane electrolyte (NME). Peak power densities of ~30 and ~40 mW cm⁻² are observed for the DBFCs with PHME and NME, respectively. The crossover of NaBH4 across both membranes is found to be very low. The utilization efficiencies of NaBH4 and H2O2 are ~24% and ~59%, respectively, for the PHME-based DBFC, and ~18% and ~62%, respectively, for the NME-based DBFC. The PHME- and NME-based DBFCs exhibit operational cell potentials of ~1.2 and ~1.4 V, respectively, at a load current density of 10 mA cm⁻² for ~100 h.
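
The utilization efficiencies reported above can be read as faradaic efficiencies: the charge actually drawn from the cell divided by the theoretical charge stored in the fuel fed to it. A minimal sketch, assuming the direct eight-electron oxidation of borohydride and using placeholder operating values:

# Hedged sketch: faradaic fuel-utilization estimate for a borohydride cell.
# Operating values below are illustrative placeholders, not the paper's data.
F = 96485.0       # Faraday constant, C/mol
N_ELECTRONS = 8   # BH4- + 8OH- -> BO2- + 6H2O + 8e- (direct 8-electron path)

def fuel_utilization(current_a: float, hours: float, moles_nabh4: float) -> float:
    """Charge actually drawn divided by the theoretical charge in the fuel fed."""
    charge_drawn = current_a * hours * 3600.0          # coulombs
    charge_available = N_ELECTRONS * F * moles_nabh4   # coulombs
    return charge_drawn / charge_available

# Example with assumed numbers: 0.5 A drawn for 10 h from 0.1 mol of NaBH4.
print(f"Utilization: {fuel_utilization(0.5, 10.0, 0.1):.1%}")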

Relevance:

30.00%

Publisher:

Abstract:

The problem of denoising damage indicator signals for improved operational health monitoring of systems is addressed by applying soft computing methods to design filters. Since measured data in operational settings are contaminated with noise and outliers, pattern recognition algorithms for fault detection and isolation can give false alarms. A direct approach to improving fault detection and isolation is to remove noise and outliers from the time series of measured data or damage indicators before performing fault detection and isolation. Many popular signal-processing approaches do not work well with damage indicator signals, which can contain sudden changes due to abrupt faults as well as non-Gaussian outliers. Signal-processing algorithms based on radial basis function (RBF) neural networks and weighted recursive median (WRM) filters are explored for denoising simulated time series. The RBF neural network filter is developed using a K-means clustering algorithm and is much less computationally expensive to develop than feedforward neural networks trained using backpropagation. The nonlinear, multimodal integer-programming problem of selecting optimal integer weights for the WRM filter is solved using a genetic algorithm. Numerical results are obtained for helicopter rotor structural damage indicators based on simulated frequencies. The test signals use low-order polynomial growth of the damage indicators with time to simulate gradual or incipient faults, and step changes in the signal to simulate abrupt faults; noise and outliers are then added. The WRM and RBF filters achieve noise reductions of 54-71% and 59-73%, respectively, for the test signals considered in this study. Their performance is much better than that of the moving-average FIR filter, which causes significant feature distortion and has poor outlier-removal capability. These results show the potential of soft computing methods for such signal-processing applications.
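
A minimal sketch of a weighted recursive median filter of the kind explored above. The window length and integer weights are illustrative defaults, not the GA-optimized weights from the study:

# Hedged sketch: causal weighted recursive median (WRM) filtering.
# Window length and integer weights are illustrative, not optimized values.
import statistics

def wrm_filter(x, weights=(1, 2, 3, 2, 1)):
    """Weighted recursive median: past window positions reuse already-filtered
    outputs (the recursion), which strengthens outlier rejection; an integer
    weight w means the sample is replicated w times before taking the median."""
    n = len(weights)
    half = n // 2
    y = list(x)
    for t in range(len(x)):
        pool = []
        for j, w in enumerate(weights):
            k = t - half + j
            if 0 <= k < len(x):
                v = y[k] if k < t else x[k]  # filtered past, raw present/future
                pool.extend([v] * w)
        y[t] = statistics.median(pool)
    return y

# Example: the outlier spike (9) is removed while the abrupt step to 5
# (a simulated abrupt fault) is preserved.
signal = [0, 0, 0, 9, 0, 5, 5, 5, 5]
print(wrm_filter(signal))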

Relevance:

30.00%

Publisher:

Abstract:

Results are reported on the performance of a 25 cm² liquid-feed solid-polymer-electrolyte direct methanol fuel cell (SPE-DMFC) operating under near-ambient conditions. The SPE-DMFC yields a maximum power density of ca. 200 mW cm⁻² at 90 °C when operating with 1 M aqueous methanol and oxygen at ambient pressure. When operating under similar conditions with air, a maximum power density of ca. 100 mW cm⁻² is achieved. Analysis of the electrode-kinetics parameters for the methanol electrode suggests that the reaction mechanism for methanol oxidation remains invariant with temperature. Durability data for the SPE-DMFC at an operational current density of 100 mA cm⁻² have also been obtained.

Relevance:

30.00%

Publisher:

Abstract:

In this thesis we propose a new approach to deduction methods for temporal logic. Our proposal is based on an inductive definition of eventualities that differs from the usual one. On the basis of this non-customary inductive definition, we first provide dual systems of tableaux and sequents for Propositional Linear-time Temporal Logic (PLTL). Then, we adapt the deductive approach introduced by these dual tableau and sequent systems to the resolution framework and present a clausal temporal resolution method for PLTL. Finally, we use this new clausal temporal resolution method to establish logical foundations for declarative temporal logic programming languages.

The key issue in deduction systems for temporal logic is dealing with eventualities and with the hidden invariants that may prevent their fulfillment. Different ways of addressing this issue can be found in the literature. Traditional tableau systems for temporal logic generate an auxiliary graph in a first pass; in a second pass, unsatisfiable nodes are pruned, and it is this second pass that must check whether the eventualities are fulfilled. The one-pass tableau calculus introduced by S. Schwendimann requires additional bookkeeping to detect cyclic branches that contain unfulfilled eventualities. In traditional sequent calculi for temporal logic, eventualities and hidden invariants are handled through inference rules (mainly invariant-based rules or infinitary rules) that complicate automation. A remarkable consequence of using either a two-pass approach based on auxiliary graphs or a one-pass approach that requires additional bookkeeping in the tableau framework, and either invariant-based rules or infinitary rules in the sequent framework, is that temporal logic fails to preserve the classical correspondence between tableaux and sequents.

In this thesis, we first provide a one-pass tableau method, TTM, which builds a cyclic tree instead of a graph to decide whether a set of PLTL formulas is satisfiable. TTM tableaux are classical in style: for unsatisfiable sets of formulas, TTM produces tableaux whose leaves contain a formula and its negation; for satisfiable sets of formulas, it builds tableaux in which each fully expanded open branch characterizes a collection of models for the formulas in the root. TTM is complete and yields a decision procedure for PLTL. This tableau method is directly associated with a one-sided sequent calculus called TTC. Since TTM is free of all the structural rules that hinder the mechanization of deduction (e.g. weakening and contraction), the resulting sequent calculus TTC is also free of such rules; in particular, TTC is free of any kind of cut, including invariant-based cut. From TTC we obtain a two-sided sequent calculus, GTC, that preserves all these freeness properties and is finitary, sound and complete for PLTL. We thereby show that the classical correspondence between tableaux and sequent calculi can be extended to temporal logic. The most fruitful approach in the literature on resolution methods for temporal logic, initiated by the seminal paper of M. Fisher, deals with PLTL and requires generating invariants in order to perform resolution on eventualities.
In this thesis, we present a new approach to resolution for PLTL. The main novelty of our approach is that we do not generate invariants to perform resolution on eventualities. Our method is based on the dual tableau and sequent methods for PLTL described above. It involves translation into a clausal normal form that is a direct extension of classical CNF. We first show that any PLTL formula can be transformed into this clausal normal form. We then present our temporal resolution method, called TRS-resolution, which extends classical propositional resolution, and we prove that TRS-resolution is sound and complete. In fact, it terminates for any input formula, deciding its satisfiability, and hence gives rise to a new decision procedure for PLTL.

In the field of temporal logic programming, the declarative proposals that provide a completeness result do not allow eventualities, whereas the proposals that follow the imperative-future approach either restrict the use of eventualities or handle them by calculating an upper bound based on the small model property of PLTL. In the latter case, when the length of a derivation reaches the upper bound, the derivation is abandoned and backtracking is used to try another possible derivation. In this thesis we present a declarative propositional temporal logic programming language, called TeDiLog, that combines the temporal and disjunctive paradigms in logic programming. We establish the logical foundations of our proposal by formally defining operational and logical semantics for TeDiLog and by proving their equivalence. Since TeDiLog is, syntactically, a sublanguage of PLTL, the logical semantics of TeDiLog is supported by PLTL logical consequence; its operational semantics is based on TRS-resolution. TeDiLog allows both eventualities and always-formulas to occur in clause heads as well as in clause bodies. To the best of our knowledge, TeDiLog is the first declarative temporal logic programming language that achieves this degree of expressiveness. Since the tableau method presented in this thesis can detect that the fulfillment of an eventuality is prevented by a hidden invariant without checking for it in an extra pass, since our finitary sequent calculi do not include invariant-based rules, and since our resolution method dispenses with invariant generation, we call our deduction methods invariant-free.
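
As a minimal illustration of how deduction systems expand eventualities, the sketch below applies the standard PLTL fixpoint identity p U q ≡ q ∨ (p ∧ X(p U q)). This is the textbook unfolding rule, not the TTM calculus or the thesis's non-customary inductive definition:

# Hedged sketch: one unfolding step for a PLTL eventuality, using the
# standard identity  p U q  ==  q or (p and X(p U q)).
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class And:
    left: object
    right: object

@dataclass(frozen=True)
class Or:
    left: object
    right: object

@dataclass(frozen=True)
class Next:
    sub: object

@dataclass(frozen=True)
class Until:
    left: object
    right: object

def unfold(f):
    """Rewrite p U q into q | (p & X(p U q)); leave other formulas alone."""
    if isinstance(f, Until):
        return Or(f.right, And(f.left, Next(f)))
    return f

print(unfold(Until(Var("p"), Var("q"))))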

Relevance:

30.00%

Publisher:

Abstract:

A technique enabling 10 Gbps data to be directly modulated onto a monolithic sub-THz dual-laser transmitter is proposed. As a result of laser chirp, the logical zeros of the resultant sub-THz signal have a different peak frequency from that of the logical ones. The signal extinction ratio is therefore enhanced by suppressing the logical zeros with a filter stage at the receiver. With the aid of this chirp-enhanced filtering, an improved extinction ratio can be achieved at moderate modulation current; hence a 10 GHz modulation bandwidth is predicted for the transmitter without the need for external modulators. In this paper, we demonstrate the operating principle by generating an error-free (bit error rate below 10⁻⁹) 100 Mbps Manchester-encoded signal with a centre frequency of 12 GHz, within the bandwidth of an envelope detector, while direct modulation of a 100 GHz signal at data rates of up to 10 Gbps is simulated using a transmission-line model. This work could be a key technique for enabling monolithic sub-THz transmitters to be readily used in high-speed wireless links. © 2013 IEEE.
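
A minimal sketch of Manchester encoding as used in the demonstration above, assuming the IEEE 802.3 convention (0 encoded as a high-to-low transition, 1 as low-to-high); the paper does not state which convention it adopts:

# Hedged sketch: Manchester encoding of a bit stream (IEEE 802.3
# convention assumed: 0 -> high-to-low, 1 -> low-to-high).
def manchester_encode(bits):
    """Each bit becomes two half-symbols with a guaranteed mid-bit
    transition, keeping the signal DC-balanced and self-clocking."""
    levels = []
    for b in bits:
        levels.extend([1, 0] if b == 0 else [0, 1])
    return levels

print(manchester_encode([1, 0, 1, 1, 0]))
# -> [0, 1, 1, 0, 0, 1, 0, 1, 1, 0]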

Relevance:

30.00%

Publisher:

Abstract:

Electrical circuit designers seldom create genuinely new topologies or use old ones in a novel way. Most designs are known combinations of common configurations tailored for the particular problem at hand. In this thesis I show that much of the behavior of a designer engaged in such ordinary design can be modelled by a clearly defined computational mechanism executing a set of stylized rules, each of which embodies a particular piece of the designer's knowledge. A circuit is represented as a hierarchy of abstract objects, each composed of other objects; the leaves of this tree represent the physical devices from which circuits are fabricated. By analogy with context-free languages, a class of circuits is generated by a phrase-structure grammar, each rule of which describes how one type of abstract object can be expanded into a combination of more concrete parts. Circuits are designed by first postulating an abstract object that meets the particular design requirements; this object is then expanded into a concrete circuit by successive refinement using the rules of my grammar. In general, many rules can be used to expand a given abstract component, so analysis must be done at each level of the expansion to constrain the search to a reasonable set. The rules of my circuit grammar thus provide constraints that allow approximate qualitative analysis of partially instantiated circuits. Later, more careful analysis in terms of more concrete components may lead to the rejection of a line of expansion which at first looked promising; I provide special failure rules to direct the repair in this case.
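
A minimal sketch of the grammar-driven refinement idea: rules rewrite an abstract object into more concrete parts until only leaves remain. The rule set below is an invented toy example, not the thesis's actual grammar:

# Hedged sketch: phrase-structure-style expansion of abstract circuit
# objects into more concrete parts. Rules and part names are invented.
RULES = {
    "amplifier": [["input-stage", "gain-stage", "output-stage"]],
    "input-stage": [["differential-pair"]],
    "gain-stage": [["common-emitter"], ["common-source"]],  # alternatives
    "output-stage": [["emitter-follower"]],
}

def expand(component, choose=lambda options: options[0]):
    """Recursively rewrite an abstract component until only leaves
    (concrete parts) remain. `choose` stands in for the per-level
    analysis that constrains which alternative rule to apply."""
    if component not in RULES:
        return component  # leaf: a concrete part / physical device
    parts = choose(RULES[component])
    return {component: [expand(p, choose) for p in parts]}

print(expand("amplifier"))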

Relevance:

30.00%

Publisher:

Abstract:

Direct chill (DC) casting is a core primary process in the production of aluminum ingots. Its operational optimization is still under investigation with regard to a number of features, one of which is the curvature that develops at the base of the ingot. Analysis of these features requires a computational model of the process that accounts for fluid flow, heat transfer, the solidification phase change, and thermomechanical behavior. This article describes an integrated approach to modeling all of these phenomena and their interactions.
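
As a hedged illustration of one ingredient of such a model, the sketch below takes explicit finite-difference steps of 1-D heat conduction with the solidification phase change handled by an apparent-heat-capacity method. It is a generic textbook scheme with placeholder aluminum-like properties, not the article's integrated model:

# Hedged sketch: 1-D transient conduction with solidification latent heat
# smeared over the freezing range (apparent heat capacity). All material
# values are illustrative placeholders.
import numpy as np

def step(T, dx, dt, k=200.0, rho=2700.0, cp=900.0,
         L=4.0e5, T_sol=850.0, T_liq=925.0):
    """One explicit (FTCS) step; latent heat L (J/kg) inflates the heat
    capacity between solidus T_sol and liquidus T_liq (in K)."""
    cp_app = np.where((T > T_sol) & (T < T_liq),
                      cp + L / (T_liq - T_sol), cp)
    alpha = k / (rho * cp_app)  # local thermal diffusivity
    T_new = T.copy()
    T_new[1:-1] = T[1:-1] + alpha[1:-1] * dt / dx**2 * (
        T[2:] - 2.0 * T[1:-1] + T[:-2])
    return T_new  # boundary nodes held fixed (chilled surfaces)

# Example: a molten bar cooled from both ends gradually solidifies.
T = np.full(51, 950.0)
T[0] = T[-1] = 300.0
for _ in range(1000):
    T = step(T, dx=0.01, dt=0.05)
print(T.min(), T.max())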

Relevance:

30.00%

Publisher:

Abstract:

Substantive evidence implicates the vitamin D receptor (VDR) and its natural ligand 1α,25-(OH)2D3 in the modulation of tumor growth. However, both human and animal studies indicate tissue specificity of effect. Epidemiological studies show both inverse and direct relationships between serum 25(OH)D levels and common solid cancers, and VDR ablation affects carcinogen-induced tumorigenesis in a tissue-specific manner in model systems. A better understanding of the tissue specificity of vitamin D-dependent molecular networks may provide insight into selective growth control by the seco-steroid 1α,25-(OH)2D3. This commentary considers the complex factors that may influence the cell- or tissue-specificity of 1α,25-(OH)2D3/VDR growth effects, including local synthesis, metabolism and transport of vitamin D and its metabolites; VDR expression and ligand interactions; 1α,25-(OH)2D3 genomic and non-genomic actions; Ca2+ flux; kinase activation; VDR interactions with activating and inhibitory vitamin D responsive elements (VDREs) within target gene promoters; VDR coregulator recruitment; and differential effects on key downstream growth regulatory genes. We highlight some differences of VDR growth control relevant to colonic, esophageal, prostate, pancreatic and other cancers, and assess the potential for developing selective prevention or treatment strategies.

Relevance:

30.00%

Publisher:

Abstract:

We study problems of the calculus of variations and optimal control in the context of time scales. Specifically, we obtain Euler–Lagrange-type necessary optimality conditions, both for Lagrangians depending on higher-order delta derivatives and for isoperimetric problems. We also develop some direct methods that allow certain classes of variational problems to be solved through inequalities on time scales. In the last chapter we present fractional difference operators and propose a new discrete-time fractional calculus of variations. We obtain the corresponding Euler–Lagrange and Legendre necessary conditions, and then illustrate the theory with some examples.
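
For reference, a minimal statement of the kind of first-order condition obtained (the delta Euler–Lagrange equation on a time scale in its basic first-order form, as it appears in the standard time-scales literature; stated here from general knowledge, not quoted from the thesis):

% Delta Euler-Lagrange equation on a time scale T (standard first-order
% form; stated from general knowledge, not quoted from the thesis).
% For the functional  J[x] = \int_a^b L\bigl(t, x^{\sigma}(t), x^{\Delta}(t)\bigr)\,\Delta t,
% a weak local extremizer x satisfies, for all t in [a,b]^{\kappa}:
\[
  \frac{\Delta}{\Delta t}\,
  \frac{\partial L}{\partial x^{\Delta}}\bigl(t, x^{\sigma}(t), x^{\Delta}(t)\bigr)
  =
  \frac{\partial L}{\partial x^{\sigma}}\bigl(t, x^{\sigma}(t), x^{\Delta}(t)\bigr).
\]
% This reduces to the classical Euler-Lagrange equation when T = R, and
% to its finite-difference analogue when T = Z.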

Relevance:

30.00%

Publisher:

Abstract:

The fractional calculus of variations and fractional optimal control are generalizations of the corresponding classical theories that allow problems to be modeled and formulated with derivatives and integrals of arbitrary order. Because analytic methods for solving such fractional problems are lacking, numerical techniques are developed. Here we mainly investigate the approximation of fractional operators by means of series of integer-order derivatives and by generalized finite differences. We give upper bounds for the error of the proposed approximations and study their efficiency. Direct and indirect methods for solving fractional variational problems are studied in detail. Furthermore, optimality conditions are discussed for different types of unconstrained and constrained variational problems and for fractional optimal control problems. The introduced numerical methods are employed to solve some illustrative examples.
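
A concrete instance of the generalized finite differences mentioned above is the Grünwald–Letnikov approximation of a fractional derivative. A minimal sketch, with an illustrative test function and step size:

# Hedged sketch: Grunwald-Letnikov finite-difference approximation of a
# fractional derivative of order alpha, checked against a known closed form.
import math

def gl_derivative(f, t, alpha, h=1e-3):
    """D^alpha f(t) ~ h^(-alpha) * sum_k (-1)^k C(alpha, k) f(t - k h)."""
    n = int(t / h)
    coeff = 1.0  # (-1)^0 * C(alpha, 0)
    total = coeff * f(t)
    for k in range(1, n + 1):
        # Recurrence c_k = c_{k-1} * (k - 1 - alpha) / k generates
        # (-1)^k * C(alpha, k) without factorials.
        coeff *= (k - 1 - alpha) / k
        total += coeff * f(t - k * h)
    return total / h**alpha

# Check against the exact half-derivative of f(t) = t:  D^{1/2} t = 2 sqrt(t/pi).
t = 1.0
print(gl_derivative(lambda s: s, t, alpha=0.5))  # numerical estimate
print(2.0 * math.sqrt(t / math.pi))              # exact value, ~1.1284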

Relevance:

30.00%

Publisher:

Abstract:

The formulation and performance of the Met Office visibility analysis and prediction system are described. The visibility diagnostic within the limited-area Unified Model is a function of humidity and a prognostic aerosol content. The aerosol model includes advection, industrial and general urban sources, boundary-layer mixing, and removal by rain. The assimilation is a three-dimensional variational scheme in which the visibility observation operator is a strongly nonlinear function of humidity, aerosol and temperature; a quality-control scheme for visibility data is included. Visibility observations can give rise to humidity increments of significant magnitude compared with the direct impact of humidity observations. We present the results of sensitivity studies showing the contribution of different components of the system to improved skill in visibility forecasts. Visibility assimilation is most important within the first 6-12 hours of the forecast and for visibilities below 1 km, while modelling of aerosol sources and advection is important for slightly higher visibilities (1-5 km) and remains significant at longer forecast times.
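
As a hedged illustration of what a visibility observation operator looks like, the toy sketch below combines the classical Koschmieder relation with an assumed power-law growth of aerosol extinction with relative humidity. The coefficients are placeholders; the Unified Model's actual diagnostic differs in detail:

# Hedged sketch: toy visibility operator. Koschmieder's law V = -ln(c)/beta
# is standard; the humidity growth law and coefficients below are assumed
# illustrations, not the operational Met Office formulation.
import math

def visibility_m(aerosol_kg_per_kg, rel_humidity, contrast=0.02,
                 beta_per_kgkg=4.0e4, rh_exponent=-0.5):
    """Visibility (m) from aerosol mass mixing ratio and relative humidity:
    extinction beta grows as hygroscopic aerosol swells at high humidity."""
    rh = min(rel_humidity, 0.99)  # cap to avoid the deliquescent limit
    beta = beta_per_kgkg * aerosol_kg_per_kg * (1.0 - rh) ** rh_exponent
    return -math.log(contrast) / beta  # -ln(0.02) ~ 3.912

# Same aerosol load, rising humidity -> falling visibility (nonlinearly).
for rh in (0.50, 0.90, 0.99):
    print(rh, round(visibility_m(5.0e-8, rh)), "m")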