973 results for extended Hildebrand solubility approach
Abstract:
A common operation in wireless ad hoc networks is the flooding of broadcast messages to establish network topologies and routing tables. The flooding of broadcast messages is, however, a resource-consuming process. It might require the retransmission of messages by most network nodes. It is, therefore, very important to optimize this operation. In this paper, we first analyze the multipoint relaying (MPR) flooding mechanism used by the Optimized Link State Routing (OLSR) protocol to distribute topology control (TC) messages among all the system nodes. We then propose a new flooding method based on the fusion of two key concepts: distance-enabled multipoint relaying and connected dominating set (CDS) flooding. We present experimental simulations that show that our approach improves on the performance of previously existing proposals.
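The MPR mechanism this abstract analyzes is commonly implemented as a greedy set-cover heuristic: each node picks, among its one-hop neighbours, a small relay set that covers all of its two-hop neighbours. A minimal sketch, with a hypothetical topology (node names and link sets are illustrative, not from the paper):

```python
# Greedy multipoint relay (MPR) selection, as in OLSR-style flooding:
# choose a small set of one-hop neighbours covering every two-hop neighbour.

def select_mprs(one_hop, two_hop_via):
    """one_hop: set of one-hop neighbours.
    two_hop_via: dict mapping each one-hop neighbour to the set of
    strict two-hop neighbours reachable through it."""
    uncovered = set().union(*two_hop_via.values())
    mprs = set()
    # Step 1: neighbours that are the only route to some two-hop node
    # must be relays.
    for n2 in list(uncovered):
        covers = [n1 for n1 in one_hop if n2 in two_hop_via[n1]]
        if len(covers) == 1:
            mprs.add(covers[0])
    uncovered -= set().union(*(two_hop_via[m] for m in mprs))
    # Step 2: greedily add the neighbour covering the most uncovered nodes
    # (assumes every two-hop node is reachable through some neighbour).
    while uncovered:
        best = max(one_hop - mprs,
                   key=lambda n1: len(two_hop_via[n1] & uncovered))
        mprs.add(best)
        uncovered -= two_hop_via[best]
    return mprs

# Hypothetical example: A reaches {X, Y}, B reaches {Y}, C reaches {Z}.
two_hop_via = {"A": {"X", "Y"}, "B": {"Y"}, "C": {"Z"}}
relays = select_mprs({"A", "B", "C"}, two_hop_via)
```

In the example, A and C are forced relays (sole covers of X and Z), and together they already cover Y, so B is not selected; only relays retransmit the broadcast.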
Abstract:
Fuzzy set theory and fuzzy logic are studied from a mathematical point of view. The main goal is to investigate common mathematical structures in various fuzzy logical inference systems and to establish a general mathematical basis for fuzzy logic when considered as multi-valued logic. The study is composed of six distinct publications. The first paper deals with Mattila's LPC+Ch Calculus. This fuzzy inference system is an attempt to introduce linguistic objects into mathematical logic without defining these objects mathematically. LPC+Ch Calculus is analyzed from an algebraic point of view, and it is demonstrated that a suitable factorization of the set of well-formed formulae (in fact, the Lindenbaum algebra) leads to a structure called an ET-algebra, introduced at the beginning of the paper. On its basis, all the theorems presented by Mattila, and many others, can be proved in a simple way, as demonstrated in Lemmas 1 and 2 and Propositions 1-3. The conclusion critically discusses some other issues of LPC+Ch Calculus, especially that no formal semantics is given for it. In the second paper, Sanchez's characterization of the solvability of the relational equation RoX=T, where R, X, T are fuzzy relations, X is the unknown, and o is the composition induced by the minimum, is extended to compositions induced by more general products in a general value lattice. Moreover, the procedure also applies to systems of equations. In the third publication, common features of various fuzzy logical systems are investigated. It turns out that adjoint couples and residuated lattices are very often present, though not always explicitly expressed. Some minor new results are also proved. The fourth study concerns Novak's paper, in which Novak introduced first-order fuzzy logic and proved, among other things, the semantico-syntactical completeness of this logic. He also demonstrated that the algebra of his logic is a generalized residuated lattice. 
It is proved that the examination of Novak's logic can be reduced to the examination of locally finite MV-algebras. In the fifth paper, a multi-valued sentential logic with values of truth in an injective MV-algebra is introduced and the axiomatizability of this logic is proved. The paper develops some ideas of Goguen and generalizes the results of Pavelka on the unit interval. Our proof of completeness is purely algebraic. A corollary of the Completeness Theorem is that fuzzy logic on the unit interval is semantically complete if, and only if, the algebra of the values of truth is a complete MV-algebra. The Compactness Theorem holds in our well-defined fuzzy sentential logic, while the Deduction Theorem and the Finiteness Theorem do not. Because of its generality and good behaviour, MV-valued logic can be regarded as a mathematical basis of fuzzy reasoning. The last paper is a continuation of the fifth study. The semantics and syntax of fuzzy predicate logic with values of truth in an injective MV-algebra are introduced, and a list of universally valid sentences is established. The system is proved to be semantically complete. This proof is based on an idea utilizing some elementary properties of injective MV-algebras and MV-homomorphisms, and is purely algebraic.
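For the special case of the sup-min composition on [0,1], the solvability test behind Sanchez's result mentioned in the second paper is constructive: build the greatest candidate solution from the Gödel implication and check whether it actually solves RoX=T. A small numpy sketch of that classical case (the matrices are illustrative, and this shows only the baseline the paper generalizes, not the lattice-valued extension):

```python
import numpy as np

def godel_alpha(a, b):
    # Goedel implication: a -> b = 1 if a <= b, else b.
    return np.where(a <= b, 1.0, b)

def sup_min(R, X):
    # Sup-min composition: (R o X)[u, w] = max_v min(R[u, v], X[v, w]).
    return np.max(np.minimum(R[:, :, None], X[None, :, :]), axis=1)

def greatest_solution(R, T):
    """Sanchez's greatest candidate X*[v, w] = min_u (R[u, v] -> T[u, w]).
    The equation R o X = T is solvable iff X* itself solves it."""
    X = np.min(godel_alpha(R[:, :, None], T[:, None, :]), axis=0)
    return X, bool(np.allclose(sup_min(R, X), T))
```

When the returned flag is true, every solution of the equation lies componentwise below the returned X*, which is why checking this single candidate decides solvability.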
Abstract:
The abandonment of agricultural land in mountainous areas has been an outstanding problem over the last century and has captured the attention of scientists, technicians and administrations, owing to the dramatic consequences that have sometimes occurred as a result of soil instability, steep slopes, rainfall regimes and wildfires. Hydromorphological and pedological alterations causing exceptional floods and accelerated erosion processes have therefore been studied, identifying the cause in the loss of landscape heterogeneity. With the disappearance of agricultural works and drainage maintenance, slope stability has been severely affected. The mechanization of agriculture has caused the displacement of vine, olive and cork tree cultivation in terraced areas along the Mediterranean catchment towards more economically suitable areas. On the one hand, land use and management changes have implied sociological changes as well, transforming areas inhabited by agricultural communities into deserted areas where the colonization of disorganized spontaneous vegetation has buried a valuable rural patrimony. On the other hand, the lack of planning and management of the abandoned areas has produced badlands and infertile soils due to wildfire and high erosion rates, strongly degrading whole ecosystems. In other cases, after land abandonment, a process of soil regeneration has been recorded. Investigations were conducted in a part of NE Spain where extended areas of previously cultivated terraced soils have been abandoned over the last century. The selected environments were semi-abandoned vineyards, semi-abandoned olive groves, abandoned stands of cork trees, abandoned stands of pine trees, scrubland of Cistaceae, scrubland of Ericaceae, and pasture. 
The research work focused on the study of the most relevant physical, chemical and biological soil properties, as well as runoff and erosion under soils with different plant cover, in order to establish the effect of abandonment on soil quality, given the peculiarity and vulnerability of these soils, which have a much reduced depth. Observations were carried out from autumn 2009 to autumn 2010. The sediment concentration of soil erosion under vines was recorded as 34.52 g/l, while under pasture it was 4.66 g/l. In addition, the soil under vines showed the least organic matter, 12 times lower than in all the other soil environments. The carbon dioxide (CO2) and total glomalin (TG) ratios to soil organic carbon (SOC) in this soil were 0.11 and 0.31, respectively. However, the soil under pasture contained a higher amount of organic matter, with CO2 and TG ratios to SOC of 0.02 and 0.11, respectively, indicating that the soil under pasture better preserves the soil carbon pool. A similar trend was found in the intermediate soils in the sequence of land use change and abandonment. Soil structural stability increased in the two soil fractions investigated (0.25-2.00 mm, 2.0-5.6 mm), especially in those soils that did not undergo periodic perturbations such as wildfires. Soil quality indexes were obtained using relevant physical and chemical soil parameters. Factor analysis carried out to study the relationships between all soil parameters made it possible to relate variables and environments and to identify the areas that contribute better to soil quality, as against those that may need more attention to avoid further degradation processes.
Abstract:
Global warming mitigation has recently become a priority worldwide. A large body of literature dealing with energy-related problems has focused on reducing greenhouse gas emissions at an engineering scale. In contrast, the minimization of climate change at a wider macroeconomic level has so far received much less attention. We investigate here the issue of how to mitigate global warming by performing changes in an economy. To this end, we make use of a systematic tool that combines three methods: linear programming, environmentally extended input-output models, and life cycle assessment principles. The problem of identifying key economic sectors that contribute significantly to global warming is posed in mathematical terms as a bi-criteria linear program that seeks to optimize simultaneously the total economic output and the total life cycle CO2 emissions. We have applied this approach to the European Union economy, finding that significant reductions in global warming potential can be attained by regulating specific economic sectors. Our tool is intended to aid policymakers in the design of more effective public policies for achieving the environmental and economic targets sought.
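A bi-criteria linear program of the kind described is typically handled by weighted-sum scalarization: sweep a trade-off weight between the two objectives and solve an ordinary LP at each weight to trace the Pareto frontier. A minimal sketch with a hypothetical three-sector economy (all coefficients are illustrative, not EU data):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 3-sector economy: per-unit economic output o and per-unit
# life-cycle CO2 emissions e, with a shared capacity constraint A x <= b.
o = np.array([3.0, 2.0, 1.0])    # output per unit of sector activity
e = np.array([4.0, 1.5, 0.8])    # tCO2 per unit of sector activity
A = np.array([[1.0, 1.0, 1.0]])  # total activity capped at 10 units
b = np.array([10.0])

def pareto_point(lam):
    """Weighted-sum scalarization: minimize lam*(e.x) - (1-lam)*(o.x),
    i.e. lam = 0 maximizes output only, lam = 1 minimizes emissions only."""
    c = lam * e - (1 - lam) * o
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 3)
    x = res.x
    return o @ x, e @ x  # (total output, total CO2)

# Sweeping lam traces (output, emissions) trade-off points.
frontier = [pareto_point(lam) for lam in (0.0, 0.5, 1.0)]
```

Each frontier point tells a policymaker how much total output must be given up to reach a given emissions level; the real study works with an environmentally extended input-output table instead of these toy coefficients.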
Abstract:
The synthesis of layered double hydroxides (LDHs) by hydrothermal-LDH reconstruction and coprecipitation methods is reviewed using a thermodynamic approach. A mixture model was used for the estimation of the thermodynamics of formation of LDHs. The synthesis and solubility of LDHs are discussed in terms of standard molar Gibbs free energy change of reaction. Data for numerous divalent and trivalent metals as well as for some monovalent and tetravalent metals that may be part of the LDH structure have been compiled. Good agreement is found between theoretical and experimental data. Diagrams and tables for the prediction of possible new LDH materials are provided.
Abstract:
This work evaluated the use of the Hildebrand/Hansen solubility parameters for the selection of solvents for the extraction of the organochlorine pesticides p,p'-DDT, p,p'-DDE, Aldrin and α-Endosulfan from soil using columns packed with Al2O3. The mixtures hexane:dichloromethane (7:3; v/v), hexane:acetonitrile (1:1; v/v) and hexane:acetone (1:1; v/v), as well as pure hexane, were chosen as extracting solutions. In the addition and recovery tests, the different extraction solutions provided high recovery percentages (>75%) with coefficients of variation below 15%. The recoveries are in agreement with the Hildebrand/Hansen parameters, demonstrating their applicability in the selection of extracting solutions and in the replacement of toxic solvents such as dichloromethane.
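Solvent ranking with Hansen parameters reduces to a distance calculation: the closer the solvent's (dispersion, polar, hydrogen-bonding) parameters are to the solute's, the better the expected affinity, and mixture parameters are volume-weighted averages of the components. A sketch using commonly tabulated Hansen values in MPa^0.5 (treat the numbers as illustrative; they are not taken from this paper):

```python
import math

def hansen_distance(s1, s2):
    """Hansen solubility distance:
    Ra^2 = 4*(dD1-dD2)^2 + (dP1-dP2)^2 + (dH1-dH2)^2,
    where each s is (dispersion, polar, hydrogen-bonding) in MPa^0.5."""
    dd, dp, dh = (a - b for a, b in zip(s1, s2))
    return math.sqrt(4 * dd**2 + dp**2 + dh**2)

# Commonly tabulated Hansen parameters (dD, dP, dH), MPa^0.5:
hexane = (14.9, 0.0, 0.0)
acetone = (15.5, 10.4, 7.0)

# A 1:1 (v/v) mixture's parameters are volume-weighted averages:
mix = tuple((a + b) / 2 for a, b in zip(hexane, acetone))
```

Ranking candidate extracting solutions by their distance to the target pesticide's parameters is the selection logic the abstract credits for replacing dichloromethane with less toxic mixtures.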
Abstract:
The Dutch disease is a major market failure originating in the existence of cheap and abundant natural or human resources that keep the currency of a country overvalued for an undetermined period of time, thus making the production of tradable goods using state-of-the-art technology unprofitable. It is an obstacle to growth on the demand side, because it limits investment opportunities. The severity of the Dutch disease varies according to the extent of the Ricardian rents involved, i.e., according to the difference between two exchange rate equilibriums: the current or market rate and the industrial rate - the one that makes efficient tradable industries viable. Its main symptoms, besides an overvalued currency, are low rates of growth of the manufacturing industry, artificially high real wages, and unemployment. Its neutralization requires managing the exchange rate. The principal instrument for that is a sales or export tax on the commodities that give origin to the Dutch disease. In order to neutralize it, policymakers face major political obstacles, since it involves taxing exports and reducing wages. Finally, this paper argues that there is an extended concept of Dutch disease: besides having its origin in natural resources, it may arise from cheap labor, provided that the wage spread in the developing country is considerably larger than in the developed one - a condition that is usually present.
Abstract:
The experience of a strong sense of community developed while participating in extended wilderness expeditions is one of the most significant and meaningful experiences associated with taking part in this form of outdoor recreation. The experience of returning to a home community from an extended wilderness expedition is explored through the impacts associated with psychological sense of community (McMillan & Chavis, 1986; McMillan, 1996). A phenomenological approach was used to investigate the re-entry experiences of six individuals through the use of semi-structured interviews. Twelve main themes and seventeen subthemes emerged within the findings and illustrate a lack of preparation for the difficulties associated with re-entry, negative impacts associated with the experience of sense of community, and problems transferring aspects of a wilderness community into participants' post-expedition lives.
Abstract:
This thesis describes work towards the total synthesis of a 7-aza analogue of the Amaryllidaceae alkaloid narciclasine, a potent anticancer compound which suffers from a poor solubility profile. A key strategy in the formation of the C-ring is the biotransformation of bromobenzene by E. coli JM109. The densely substituted heterocyclic A-ring is obtained by sequential directed ortho-metalation, and the fragment union is accomplished with an amide coupling and a subsequent intramolecular Heck reaction.
Abstract:
The recently developed variational Wigner-Kirkwood approach is extended to the relativistic mean field theory for finite nuclei. A numerical application to the calculation of the surface energy coefficient in semi-infinite nuclear matter is presented. The new method is contrasted with the standard density functional theory and the fully quantal approach.
Abstract:
In this paper we propose a generalization of density functional theory. The theory leads to single-particle equations of motion with a quasilocal mean-field operator, which contains a quasiparticle position-dependent effective mass and a spin-orbit potential. The energy density functional is constructed using the extended Thomas-Fermi approximation, and the ground-state properties of doubly magic nuclei are considered within the framework of this approach. Calculations were performed using the finite-range Gogny D1S forces, and the results are compared with the exact Hartree-Fock calculations.
Abstract:
We use a microscopic theory to describe the dynamics of the valence electrons in divalent-metal clusters. The theory is based on a many-body model Hamiltonian H which takes into account, on the same electronic level, the van der Waals and the covalent bonding. In order to study the ground-state properties of H we have developed an extended slave-boson method. We have studied the bonding character and the degree of electronic delocalization in Hg_n clusters as a function of cluster size. Results show that, for increasing cluster size, an abrupt change occurs in the bond character from van der Waals to covalent bonding at a critical cluster size n_c ~ 10-20. This change also involves a transition from localized to delocalized valence electrons, as a consequence of the competition between both bonding mechanisms.
Abstract:
We solve an initial-boundary problem for the Klein-Gordon equation on the half line using the Riemann-Hilbert approach to solving linear boundary value problems advocated by Fokas. The approach we present can also be used to solve more complicated boundary value problems for this equation, such as problems posed on time-dependent domains. Furthermore, it can be extended to treat integrable nonlinearisations of the Klein-Gordon equation. In this respect, we briefly discuss how our results could motivate a novel treatment of the sine-Gordon equation.
Abstract:
A new primary model based on a thermodynamically consistent first-order kinetic approach was constructed to describe non-log-linear inactivation kinetics of pressure-treated bacteria. The model assumes a first-order process in which the specific inactivation rate changes inversely with the square root of time. The model gave reasonable fits to experimental data over six to seven orders of magnitude. It was also tested on 138 published data sets and provided good fits in about 70% of cases in which the shape of the curve followed the typical convex upward form. In the remainder of the published examples, curves contained additional shoulder regions or extended tail regions. Curves with shoulders could be accommodated by including an additional time delay parameter, and curves with tails could be accommodated by omitting points in the tail beyond the point at which survival levels remained more or less constant. The model parameters varied regularly with pressure, which may reflect a genuine mechanistic basis for the model. This property also allowed the calculation of (a) parameters analogous to the decimal reduction time D and to z, the temperature increase needed to change the D value by a factor of 10 in thermal processing, and hence the processing conditions needed to attain a desired level of inactivation; and (b) the apparent thermodynamic volumes of activation associated with the lethal events. The hypothesis that inactivation rates changed as a function of the square root of time would be consistent with a diffusion-limited process.
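The stated assumption, a first-order process whose specific rate varies inversely with the square root of time, integrates in closed form: dS/dt = -(b/sqrt(t)) S gives ln S(t) = -2 b sqrt(t), so log10 survivors fall linearly in sqrt(t), which produces exactly the convex-upward curves the abstract describes. A minimal sketch (the rate parameter b is hypothetical; the paper's fitted, pressure-dependent values are not reproduced here):

```python
import math

# First-order inactivation with a specific rate proportional to 1/sqrt(t):
#   dS/dt = -(b / sqrt(t)) * S   =>   ln S(t) = -2 * b * sqrt(t),
# where S = N/N0 is the surviving fraction and b a rate parameter.

def log10_survivors(t, b):
    """log10(N/N0) after treatment time t (t in the units implied by b)."""
    return -2.0 * b * math.sqrt(t) / math.log(10)

# Convex-upward behaviour: quadrupling the time only doubles the log kill,
# so inactivation is fast early and slows progressively.
curve = [log10_survivors(t, 0.5) for t in (0, 1, 4, 9)]
```

Because log10(N/N0) is linear in sqrt(t), fitting the model to data reduces to a straight-line regression of log survivors against the square root of time, consistent with the diffusion-limited interpretation mentioned at the end of the abstract.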
Abstract:
This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-base matrix subspace, to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by the derivation of an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality experimental design criterion. The A-optimality experimental design criterion of the weighting matrices of fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. 
This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, whereby it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse-of-dimensionality problem. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
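The conventional building block the authors extend is the Gram-Schmidt orthogonal decomposition of the regression matrix, P = W A with orthogonal columns W and unit upper-triangular A, which turns least-squares estimation into a diagonal solve plus back-substitution. A minimal sketch of that baseline only (the paper's rule-base subspace extension is not reproduced; the data are random and illustrative):

```python
import numpy as np

def gram_schmidt(P):
    """Modified Gram-Schmidt decomposition P = W A, where W has mutually
    orthogonal columns and A is unit upper triangular (P assumed full rank)."""
    n, m = P.shape
    W = P.astype(float).copy()
    A = np.eye(m)
    for k in range(m):
        for j in range(k + 1, m):
            # Projection coefficient of column j onto orthogonalized column k.
            A[k, j] = (W[:, k] @ W[:, j]) / (W[:, k] @ W[:, k])
            W[:, j] -= A[k, j] * W[:, k]
    return W, A

def ols_fit(P, y):
    """Least squares y ~ P @ theta via the orthogonal decomposition:
    g_k = w_k.y / w_k.w_k on the orthogonal basis, then solve A theta = g."""
    W, A = gram_schmidt(P)
    g = (W.T @ y) / np.sum(W * W, axis=0)
    return np.linalg.solve(A, g)
```

Because each orthogonal column's contribution g_k is computed independently, the energy each regressor (or, in the paper's setting, each fuzzy rule's subspace) adds to the output can be read off term by term, which is the transparency property the extended algorithm exploits.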