982 results for dimension reduction
Abstract:
The silver-catalyzed selective catalytic reduction (SCR) of nitrogen oxides (NOx) by CH4 is shown to be a structure-sensitive reaction. Pretreatment has a great effect on catalytic performance. Upon thermal treatment in an inert gas stream, thermally induced changes in silver morphology lead to the formation of reduced silver species as clusters and particles. Catalysis over this catalyst shows an initially higher activity but lower selectivity for the CH4-SCR of NOx. Reaction-induced restructuring of silver results in the formation of ill-defined silver oxides. This, in turn, impacts the adsorption properties and diffusivity of oxygen over the silver catalyst, resulting in a decrease in activity but an increase in selectivity of the Ag-H-ZSM-5 catalyst for the CH4-SCR of NO. (c) 2004 Elsevier B.V. All rights reserved.
Abstract:
R. Jensen and Q. Shen, 'Semantics-Preserving Dimensionality Reduction: Rough and Fuzzy-Rough Based Approaches,' IEEE Transactions on Knowledge and Data Engineering, vol. 16, no. 12, pp. 1457-1471, 2004.
Abstract:
R. Jensen and Q. Shen, 'Fuzzy-Rough Data Reduction with Ant Colony Optimization,' Fuzzy Sets and Systems, vol. 149, no. 1, pp. 5-20, 2005.
Abstract:
R. Jensen and Q. Shen, 'Fuzzy-Rough Attribute Reduction with Application to Web Categorization,' Fuzzy Sets and Systems, vol. 141, no. 3, pp. 469-485, 2004.
Abstract:
R. Jensen and Q. Shen, 'Data Reduction with Rough Sets,' in Encyclopedia of Data Warehousing and Mining, 2nd ed., vol. II, 2008.
Abstract:
The aim of the present article is to analyse the Apology in its temporal aspect. When defending himself against the charges, Socrates appeals to the past, the present and the future. Furthermore, the philosopher stresses the significance of the duration of time. Thus, he seems to suggest that all truly important activities demand a long time to bear fruit, since they are almost invariably connected with greater effort. While the dialogue thereby proves to be an ethical one, the various time expressions also gain an ethical dimension.
Abstract:
To evaluate critical exposure levels and the reversibility of lead neurotoxicity, a group of lead-exposed foundry workers and an unexposed reference population were followed up for three years. During this period, tests designed to monitor neurobehavioural function and lead dose were administered. Evaluations of 160 workers during the first year showed dose-dependent decrements in mood, visual/motor performance, memory, and verbal concept formation. Subsequently, an improvement in the hygienic conditions at the plant resulted in striking reductions in blood lead concentrations over the following two years. Attendant improvement in indices of tension (20% reduction), anger (18%), depression (26%), fatigue (27%), and confusion (13%) was observed. Performance on neurobehavioural testing generally correlated best with integrated dose estimates derived from blood lead concentrations measured periodically over the study period; zinc protoporphyrin levels were less well correlated with function. This investigation confirms the importance of compliance with workplace standards designed to lower exposures to ensure that individual blood lead concentrations remain below 50 micrograms/dl.
Abstract:
Two new notions of reduction for terms of the λ-calculus are introduced and the question of whether a λ-term is beta-strongly normalizing is reduced to the question of whether a λ-term is merely normalizing under one of the new notions of reduction. This leads to a new way to prove beta-strong normalization for typed λ-calculi. Instead of the usual semantic proof style based on Girard's "candidats de réductibilité'', termination can be proved using a decreasing metric over a well-founded ordering in a style more common in the field of term rewriting. This new proof method is applied to the simply-typed λ-calculus and the system of intersection types.
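The report's two new notions of reduction are not reproduced in the abstract; as background only, the following sketch shows the relation whose strong normalization is at issue, namely ordinary beta-reduction, implemented for de Bruijn-indexed terms with normal-order evaluation. The tuple encoding of terms is an assumption made for illustration.

```python
# Illustrative sketch only (not the report's new reduction notions):
# plain beta-reduction to normal form for de Bruijn-indexed lambda-terms.
# Term encoding (an assumption): ('var', i) | ('lam', body) | ('app', fun, arg).

def shift(t, d, cutoff=0):
    """Add d to every free variable index >= cutoff."""
    if t[0] == 'var':
        return ('var', t[1] + d) if t[1] >= cutoff else t
    if t[0] == 'lam':
        return ('lam', shift(t[1], d, cutoff + 1))
    return ('app', shift(t[1], d, cutoff), shift(t[2], d, cutoff))

def subst(body, arg, j=0):
    """Replace index j in body with arg; decrement free indices above j."""
    if body[0] == 'var':
        if body[1] == j:
            return shift(arg, j)
        return ('var', body[1] - 1) if body[1] > j else body
    if body[0] == 'lam':
        return ('lam', subst(body[1], arg, j + 1))
    return ('app', subst(body[1], arg, j), subst(body[2], arg, j))

def whnf(t):
    """Reduce to weak head normal form (leftmost-outermost strategy)."""
    while t[0] == 'app':
        f = whnf(t[1])
        if f[0] == 'lam':
            t = subst(f[1], t[2])      # beta step
        else:
            return ('app', f, t[2])
    return t

def nf(t):
    """Normal-order reduction to beta-normal form.
    Terminates exactly on beta-normalizing terms; loops otherwise."""
    t = whnf(t)
    if t[0] == 'lam':
        return ('lam', nf(t[1]))
    if t[0] == 'app':
        return ('app', nf(t[1]), nf(t[2]))
    return t
```

For example, with S = λxyz.xz(yz), K = λxy.x and I = λx.x, `nf` reduces the combinator term S K K to I, as expected.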
Abstract:
A simple experiment to demonstrate nucleophilic addition to a carbonyl is described. Sodium borohydride-mediated reduction of fluorenone is a fast and high-yielding reaction that is suitable for beginning students. Students isolate their fluorenol product by recrystallization and characterize it by NMR and IR.
Abstract:
This is an addendum to our technical report BUCS TR-94-014 of December 19, 1994. It clarifies some statements, adds information on some related research, includes a comparison with research by de Groote, and fixes two minor mistakes in a proof.
Abstract:
We define a unification problem ^UP with the property that, given a pure lambda-term M, we can derive an instance Gamma(M) of ^UP from M such that Gamma(M) has a solution if and only if M is beta-strongly normalizable. There is a type discipline for pure lambda-terms that characterizes beta-strong normalization; this is the system of intersection types (without a "top" type that can be assigned to every lambda-term). In this report, we use a lean version LAMBDA of the usual system of intersection types. Hence, ^UP is also an appropriate unification problem to characterize typability of lambda-terms in LAMBDA. It also follows that ^UP is an undecidable problem, which can in turn be related to semi-unification and second-order unification (both known to be undecidable).
Abstract:
To provide real-time service or engineer constraint-based paths, networks require the underlying routing algorithm to be able to find low-cost paths that satisfy given Quality-of-Service (QoS) constraints. However, the problem of constrained shortest (least-cost) path routing is known to be NP-hard, and some heuristics have been proposed to find a near-optimal solution. These heuristics, however, either impose relationships among the link metrics to reduce the complexity of the problem, which may limit their general applicability, or are too costly in terms of execution time to be applicable to large networks. In this paper, we focus on solving the delay-constrained minimum-cost path problem, and present a fast algorithm to find a near-optimal solution. This algorithm, called DCCR (for Delay-Cost-Constrained Routing), is a variant of the k-shortest path algorithm. DCCR uses a new adaptive path weight function together with an additional constraint imposed on the path cost to restrict the search space. Thus, DCCR can return a near-optimal solution in a very short time. Furthermore, we use the method proposed by Blokh and Gutin to further reduce the search space by using a tighter bound on path cost. This makes our algorithm more accurate and even faster. We call this improved algorithm SSR+DCCR (for Search Space Reduction+DCCR). Through extensive simulations, we confirm that SSR+DCCR performs very well compared to the optimal but very expensive solution.
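The abstract does not spell out DCCR's adaptive weight function, so the following is only a brute-force reference solution to the underlying NP-hard problem that DCCR approximates: among all simple paths from a source to a destination whose total delay satisfies the constraint, find the one of minimum cost. The graph encoding and function name are assumptions made for illustration.

```python
# Hedged sketch: exhaustive solution to the delay-constrained minimum-cost
# path problem (the problem DCCR approximates), for small graphs only.
# Graph format (an assumption): node -> list of (neighbor, cost, delay).

def min_cost_delay_constrained(graph, src, dst, max_delay):
    """Return (cost, path) of the cheapest simple path from src to dst
    with total delay <= max_delay, or None if no feasible path exists."""
    best = [None]

    def dfs(node, cost, delay, path):
        if delay > max_delay:
            return                      # prune: delay constraint violated
        if node == dst:
            if best[0] is None or cost < best[0][0]:
                best[0] = (cost, path)
            return
        for nxt, c, d in graph.get(node, []):
            if nxt not in path:         # restrict to simple paths
                dfs(nxt, cost + c, delay + d, path + [nxt])

    dfs(src, 0, 0, [src])
    return best[0]
```

A tight delay budget can force the expensive direct link over a cheap but slow detour, which is exactly the trade-off a constrained routing heuristic must navigate; DCCR's contribution is finding such paths quickly without this exhaustive search.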
Abstract:
Two classes of techniques have been developed to whiten the quantization noise in digital delta-sigma modulators (DDSMs): deterministic and stochastic. In this two-part paper, a design methodology for reduced-complexity DDSMs is presented. The design methodology is based on error masking. Rules for selecting the word lengths of the stages in multistage architectures are presented. We show that the hardware requirement can be reduced by up to 20% compared with a conventional design, without sacrificing performance. Simulation and experimental results confirm theoretical predictions. Part I addresses MultistAge noise SHaping (MASH) DDSMs; Part II focuses on single-quantizer DDSMs.
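The paper's error-masking rules and word-length selection are not given in the abstract; as context for the MASH architecture it analyzes, the following is a minimal behavioral sketch of a conventional MASH 1-1-1 DDSM, assuming first-order accumulator stages, each re-modulating the quantization residue of the previous stage, combined through the standard noise-cancellation network.

```python
# Minimal behavioral sketch of a MASH 1-1-1 digital delta-sigma modulator,
# under the stated assumptions (not the paper's reduced-complexity design).

def mash_111(x, n_bits=8, n_samples=4096):
    """Simulate a MASH 1-1-1 DDSM for a constant input word 0 <= x < 2**n_bits.
    Returns the output sequence; its long-run mean approaches x / 2**n_bits."""
    M = 1 << n_bits              # accumulator modulus
    s1 = s2 = s3 = 0             # accumulator states (quantization residues)
    c2_d = 0                     # delay elements of the cancellation network
    c3_d = c3_dd = 0
    out = []
    for _ in range(n_samples):
        s1 += x;  c1 = s1 // M;  s1 %= M   # stage 1: carry-out is the 1-bit output
        s2 += s1; c2 = s2 // M;  s2 %= M   # stage 2 re-modulates stage-1 residue
        s3 += s2; c3 = s3 // M;  s3 %= M   # stage 3 re-modulates stage-2 residue
        # noise cancellation: y = c1 + (1 - z^-1) c2 + (1 - z^-1)^2 c3
        y = c1 + (c2 - c2_d) + (c3 - 2 * c3_d + c3_dd)
        c2_d, c3_dd, c3_d = c2, c3_d, c3
        out.append(y)
    return out
```

The differentiator terms telescope to a bounded remainder, so only the first stage's mean survives: the average output equals the input fraction x/2^n_bits, while the later stages push the requantization noise to high frequencies.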
Abstract:
Selective isoelectric whey protein precipitation and aggregation is carried out at laboratory scale in a standard-configuration batch agitation vessel. Geometric scale-up of this operation is implemented on the basis of constant impeller power input per unit volume, and subsequent clarification is achieved by high-speed disc-stack centrifugation. Particle size and fractal geometry are important in achieving efficient separation, while aggregates need to be strong enough to resist the more extreme levels of shear that are encountered during processing, for example through pumps, valves and at the centrifuge inlet zone. This study investigates how impeller agitation intensity and ageing time affect aggregate size, strength, fractal dimension and hindered settling rate at laboratory scale, in order to determine conditions conducive to improved separation. Particle strength is measured by observing the effects of subjecting aggregates to moderate and high levels of process shear in a capillary rig and through a partially open ball-valve, respectively. The protein precipitate yield is also investigated with respect to ageing time and impeller agitation intensity. A pilot scale study is undertaken to investigate scale-up and how agitation vessel shear affects centrifugal separation efficiency. Laboratory scale studies show that precipitates subjected to higher impeller shear-rates during the addition of the precipitation agent are smaller but more compact than those subjected to lower impeller agitation, and are better able to resist turbulent breakage. They are thus more likely to provide a better feed for more efficient centrifugal separation. Protein precipitation yield improves significantly with ageing, and 50 minutes of ageing is required to obtain a 70-80% yield of α-lactalbumin.
Geometric scale-up of the agitation vessel at constant power per unit volume results in aggregates of broadly similar size exhibiting similar trends, but with some differences arising from the absence of dynamic similarity, owing to longer circulation times and higher tip speed in the larger vessel. Disc-stack centrifuge clarification efficiency curves show that aggregates formed at higher shear-rates separate more efficiently, in accordance with laboratory scale projections. Exposure of aggregates to highly turbulent conditions, even for short exposure times, can lead to a large reduction in particle size. Improved separation efficiencies can thus be achieved by identifying high-shear zones in a centrifugal process and subsequently eliminating or ameliorating them.