903 results for Algebraic path formulation
Abstract:
Pd supported on WO3-ZrO2 (W/Zr atomic ratio = 0.2) calcined at 1073 K was found to be highly active and selective for the gas-phase oxidation of ethylene to acetic acid in the presence of water at 423 K and 0.6 MPa. The contact-time dependence demonstrated that acetic acid is formed via acetaldehyde produced by a Wacker-type reaction, not via ethanol formed by hydration of ethylene.
Abstract:
Concentrating solar power is an important way of providing renewable energy. Model simulation approaches play a fundamental role in the development of this technology and, for this, accurate validation of the models is crucial. This work presents the validation of the heat loss model of the absorber tube of a parabolic trough plant by comparing the model heat loss estimates with real measurements in a specialized testing laboratory. The study focuses on implementing in the model a physically meaningful and widely valid formulation of the absorber total emissivity as a function of the surface temperature. For this purpose, the spectral emissivity of several absorber samples is measured and, from these data, the absorber total emissivity curve is obtained according to the Planck function. This physically meaningful formulation is used as an input parameter in the heat loss model and a successful validation of the model is performed. Since measuring the spectral emissivity of the absorber surface can be complex and is sample-destructive, a new methodology for characterizing the absorber emissivity is proposed. This methodology provides an estimate of the absorber total emissivity, retaining its physical meaning and widely valid formulation according to the Planck function, with no need for direct spectral measurements. This proposed method is also successfully validated and the results are shown in the present paper.
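The total emissivity curve mentioned above is typically obtained by weighting the measured spectral emissivity with the blackbody (Planck) spectrum at each surface temperature. The following is a minimal sketch of that weighting, assuming an illustrative wavelength grid and spectral emissivity curve (not data from the paper):

```python
import numpy as np

# Physical constants (SI units)
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s
kB = 1.380649e-23    # Boltzmann constant, J/K

def planck_spectral_radiance(wavelength_m, T):
    """Blackbody spectral radiance B(lambda, T) from Planck's law."""
    a = 2.0 * h * c**2 / wavelength_m**5
    b = np.expm1(h * c / (wavelength_m * kB * T))
    return a / b

def total_emissivity(wavelength_m, eps_spectral, T):
    """Planck-weighted average of a measured spectral emissivity curve."""
    B = planck_spectral_radiance(wavelength_m, T)
    return np.trapz(eps_spectral * B, wavelength_m) / np.trapz(B, wavelength_m)

# Illustrative data only: a low emissivity rising slightly with wavelength.
lam = np.linspace(0.3e-6, 50e-6, 2000)      # 0.3-50 um wavelength grid
eps = 0.06 + 0.02 * (lam / lam.max())       # hypothetical spectral emissivity
for T in (400.0, 500.0, 600.0):             # example absorber surface temperatures, K
    print(T, total_emissivity(lam, eps, T))
```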
Abstract:
Accurate measurement of network bandwidth is crucial for flexible Internet applications and protocols which actively manage and dynamically adapt to changing utilization of network resources. These applications must do so to perform tasks such as distributing and delivering high-bandwidth media, scheduling service requests and performing admission control. Extensive work has focused on two approaches to measuring bandwidth: measuring it hop-by-hop, and measuring it end-to-end along a path. Unfortunately, best-practice techniques for the former are inefficient and techniques for the latter are only able to observe bottlenecks visible at end-to-end scope. In this paper, we develop and simulate end-to-end probing methods which can measure bottleneck bandwidth along arbitrary, targeted subpaths of a path in the network, including subpaths shared by a set of flows. As another important contribution, we describe a number of practical applications which we foresee as standing to benefit from solutions to this problem, especially in emerging, flexible network architectures such as overlay networks, ad-hoc networks, peer-to-peer architectures and massively accessed content servers.
Abstract:
System F is a type system that can be seen as both a proof system for second-order propositional logic and as a polymorphic programming language. In this work we explore several extensions of System F by types which express subtyping constraints. These systems include terms which represent proofs of subtyping relationships between types. Given a proof that one type is a subtype of another, one may use a coercion term constructor to coerce terms from the first type to the second. The ability to manipulate type constraints as first-class entities gives these systems a lot of expressive power, including the ability to encode generalized algebraic data types and intensional type analysis. The main contributions of this work are in the formulation of constraint types and a proof of strong normalization for an extension of System F with constraint types.
Abstract:
We study the problem of preprocessing a large graph so that point-to-point shortest-path queries can be answered very fast. Computing shortest paths is a well studied problem, but exact algorithms do not scale to huge graphs encountered on the web, social networks, and other applications. In this paper we focus on approximate methods for distance estimation, in particular using landmark-based distance indexing. This approach involves selecting a subset of nodes as landmarks and computing (offline) the distances from each node in the graph to those landmarks. At runtime, when the distance between a pair of nodes is needed, we can estimate it quickly by combining the precomputed distances of the two nodes to the landmarks. We prove that selecting the optimal set of landmarks is an NP-hard problem, and thus heuristic solutions need to be employed. Given a budget of memory for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different results in terms of accuracy. A number of simple methods that scale well to large graphs are therefore developed and experimentally compared. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the suggested techniques is tested experimentally using five different real world graphs with millions of edges; for a given accuracy, they require as much as 250 times less space than the current approach in the literature which considers selecting landmarks at random. Finally, we study applications of our method in two problems arising naturally in large-scale networks, namely, social search and community detection.
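As a minimal sketch of the landmark-indexing idea described above (using random landmark selection, the baseline the abstract compares against, rather than the authors' centrality-based heuristics), the offline phase precomputes per-landmark distances and the online phase combines them via the triangle inequality:

```python
from collections import deque
import random

def bfs_distances(graph, source):
    """Unweighted shortest-path distances from source (graph: node -> iterable of neighbours)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def build_landmark_index(graph, num_landmarks, seed=0):
    """Offline phase: pick landmarks and precompute distances from every node to each landmark."""
    rng = random.Random(seed)
    landmarks = rng.sample(list(graph), num_landmarks)
    return [bfs_distances(graph, l) for l in landmarks]

def estimate_distance(index, u, v):
    """Online phase: upper-bound estimate d(u,v) <= min_l d(u,l) + d(l,v) (triangle inequality)."""
    return min(d[u] + d[v] for d in index if u in d and v in d)

# Toy usage on a small undirected path graph.
g = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
idx = build_landmark_index(g, num_landmarks=2)
print(estimate_distance(idx, 1, 5))
```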
Abstract:
In this PhD study, mathematical modelling and optimisation of granola production have been carried out. Granola is an aggregated food product used in breakfast cereals and cereal bars. It is a baked, crispy food product typically incorporating oats, other cereals and nuts bound together with a binder, such as honey, water and oil, to form a structured unit aggregate. In this work, the design and operation of two parallel processes to produce aggregated granola products were investigated: i) a high shear mixing granulation stage (in a designated granulator) followed by drying/toasting in an oven; ii) a continuous fluidised bed granulation stage followed by drying/toasting in an oven. In addition, the particle breakage of granola during pneumatic conveying, for product from both the high shear granulator (HSG) and fluidised bed granulator (FBG) processes, was examined. Products were pneumatically conveyed in a purpose-built conveying rig designed to mimic product conveying and packaging. Three different conveying rig configurations were employed: a straight pipe, a rig consisting of two 45° bends, and one with a 90° bend. It was observed that the least breakage occurred in the straight pipe while the most breakage occurred in the 90° bend pipe. Moreover, lower levels of breakage were observed in the two 45° bend pipe than in the 90° bend pipe configuration. In general, increasing the impact angle increases the degree of breakage. Additionally, for the granules produced in the HSG, those produced at 300 rpm have the lowest breakage rates while the granules produced at 150 rpm have the highest breakage rates. This clearly demonstrates the importance of shear history (during granule production) on breakage rates during subsequent processing. In terms of the FBG, no single operating parameter was deemed to have a significant effect on breakage during subsequent conveying. A population balance model was developed to analyse the particle breakage occurring during pneumatic conveying. The population balance equations that govern this breakage process were solved using discretisation, with the Markov chain method used for the solution of the PBEs for this process. This study found that increasing the air velocity (by increasing the air pressure to the rig) results in increased breakage among granola aggregates. Furthermore, the analysis carried out in this work shows that a greater degree of breakage of granola aggregates occurs as the bend angle increases.
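To illustrate the population-balance approach mentioned above, the following is a minimal sketch of a discretized breakage balance on a geometric size grid with equal-volume binary breakage, stepped with explicit Euler integration rather than the Markov chain method used in the thesis; the size classes and selection (breakage) rates are hypothetical:

```python
import numpy as np

def simulate_breakage(N0, S, dt, steps):
    """
    Discretized breakage population balance on a geometric size grid (v_i = 2 * v_{i-1}).
    A granule in class i breaks into two equal fragments in class i-1, so
        dN_i/dt = -S_i * N_i + 2 * S_{i+1} * N_{i+1},
    which conserves total particle volume. Solved with explicit Euler time stepping.
    """
    N = np.array(N0, dtype=float)
    S = np.asarray(S, dtype=float)
    for _ in range(steps):
        dN = -S * N
        dN[:-1] += 2.0 * S[1:] * N[1:]
        N = N + dt * dN
    return N

# Hypothetical example: 5 size classes, breakage faster for larger granules.
N0 = [0.0, 0.0, 0.0, 0.0, 100.0]    # start with only the largest class populated
S = [0.0, 0.01, 0.02, 0.05, 0.10]   # selection (breakage) rates per unit time
print(simulate_breakage(N0, S, dt=1.0, steps=200))
```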
Abstract:
Cream liqueurs manufactured by a one-step process, where alcohol was added before homogenisation, were more stable than those processed by a two-step process, which involved addition of alcohol after homogenisation. Using the one-step process, it was possible to produce creaming-stable liqueurs with one pass through a homogeniser (27.6 MPa) equipped with "liquid whirl" valves. Test procedures to characterise cream liqueurs and to predict shelf life were studied in detail. A turbidity test proved simple, rapid and sensitive for characterising particle size and homogenisation efficiency. Prediction of age thickening/gelation in cream liqueurs during incubation at 45 °C depended on the age of the sample when incubated. Samples that gelled at 45 °C may not do so at ambient temperature. Commercial cream liqueurs were similar in gross chemical composition and, unlike experimentally produced liqueurs, did not exhibit age-gelation at either ambient or elevated temperatures. Solutions of commercial sodium caseinates from different sources varied in their calcium sensitivity. When incorporated into cream liqueurs, caseinates influenced the rate of viscosity increase, coalescence and, possibly, gelation during incubated storage. Mild heat and alcohol treatment modified the properties of caseinate used to stabilise non-alcoholic emulsions, while the presence of alcohol in emulsions was important in preventing clustering of globules. The response to added trisodium citrate varied; in many cases, addition of the recommended level (0.18%) did not prevent gelation. Addition of small amounts of NaOH with 0.18% trisodium citrate before homogenisation was beneficial. The stage at which citrate was added during processing was critical to the degree of viscosity increase (as opposed to gelation) in the product during 45 °C incubation. The component responsible for age-gelation was present in the milk-solids-non-fat portion of the cream, and variations in the creams used were important in the age-gelation phenomenon. Results indicated that, in addition to possibly Ca++, the micellar casein portion of the serum may play a role in gelation. The role of the low molecular weight surfactants, sodium stearoyl lactylate and monodiglycerides, in preventing gelation was influenced by the presence of trisodium citrate. Clustering of fat globules and age-gelation were inhibited when 0.18% citrate was included. Inclusion of sodium stearoyl lactylate, but not monodiglycerides, reduced the extent of viscosity increase at 45 °C in citrate-containing liqueurs.
Abstract:
Ireland experienced two critical junctures when its economic survival was threatened: 1958/9 and 1986/7. Common to both crises was the supplanting of long-established practices, which had become an integral part of the political culture of the state, by new ideas that ensured eventual economic recovery. In their adoption and implementation these ideas also fundamentally changed the institutions of state – how politics was done, how it was organised and regulated. The end result was the transformation of the Irish state. The main hypothesis of this thesis is that at those critical junctures the political and administrative elites who enabled economic recovery were not just making pragmatic decisions; their actions were influenced by ideas. Systematic content analysis of the published works of the main ideational actors, together with primary interviews with those actors still alive, reveals how their ideas were formed, what influenced them, and how they set about implementing their ideas. As the hypothesis assumes institutional change over time, historical institutionalism serves as the theoretical framework. Central to this theory is the idea that choices made when a policy is being initiated or an institution formed will have a continuing influence long into the future. Institutions of state become ‘path dependent’ and impervious to change – the forces of inertia take over. That path dependency is broken at critical junctures. At those moments ideas play a major role as they offer a set of ready-made solutions. Historical institutionalism serves as a robust framework for showing that, in the transformation of Ireland, the role of ideas in punctuating institutional path dependency at critical junctures was central.
Abstract:
In this work we introduce a new mathematical tool for optimization of routes, topology design, and energy efficiency in wireless sensor networks. We introduce a vector field formulation that models communication in the network, and routing is performed in the direction of this vector field at every location of the network. The magnitude of the vector field at every location represents the density of data being transited through that location. We define the total communication cost in the network as the integral of a quadratic form of the vector field over the network area. With the above formulation, we introduce a mathematical machinery based on partial differential equations very similar to Maxwell's equations in electrostatics. We show that, in order to minimize the cost, the routes should be found based on the solution of these partial differential equations. In our formulation, the sensors are sources of information and are similar to the positive charges in electrostatics, the destinations are sinks of information and are similar to negative charges, and the network is similar to a non-homogeneous dielectric medium with a variable dielectric constant (or permittivity coefficient). In one of the applications of our mathematical model based on vector fields, we offer a scheme for energy-efficient routing. Our routing scheme is based on changing the permittivity coefficient to a higher value in the places of the network where nodes have high residual energy, and setting it to a low value in the places of the network where the nodes do not have much energy left. Our simulations show that our method gives a significant increase in the network lifetime compared to the shortest path and weighted shortest path schemes. Our initial focus is on the case where there is only one destination in the network; later we extend our approach to the case where there are multiple destinations. In the case of multiple destinations, we need to partition the network into several areas known as regions of attraction of the destinations. Each destination is responsible for collecting all messages generated in its region of attraction. The complexity of the optimization problem in this case lies in how to define the regions of attraction of the destinations and how much communication load to assign to each destination to optimize the performance of the network. We use our vector field model to solve the optimization problem for this case. We define a vector field which is conservative, and hence can be written as the gradient of a scalar field (also known as a potential field). We then show that, in the optimal assignment of the communication load of the network to the destinations, the value of that potential field should be equal at the locations of all the destinations. Another application of our vector field model is to find the optimal locations of the destinations in the network. We show that the vector field gives the gradient of the cost function with respect to the locations of the destinations. Based on this fact, we suggest an algorithm to be applied during the design phase of a network to relocate the destinations so as to reduce the communication cost function. The performance of our proposed schemes is confirmed by several examples and simulation experiments.
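As a sketch of how such an electrostatics-style formulation typically reads (illustrative notation, not the thesis's exact equations), the routing problem can be written as a constrained minimization whose optimality condition is a Poisson-type equation:

```latex
% Illustrative sketch of the electrostatics-style routing formulation.
\[
  \min_{\mathbf{D}} \; J = \frac{1}{2}\int_{A} \frac{\lvert \mathbf{D}(x)\rvert^{2}}{\varepsilon(x)}\, dA
  \qquad \text{subject to} \qquad \nabla \cdot \mathbf{D}(x) = \rho(x),
\]
% Here $\mathbf{D}$ is the information flux (the vector field along which traffic is routed),
% $\rho > 0$ at sensors (sources), $\rho < 0$ at destinations (sinks), and $\varepsilon(x)$ plays
% the role of a spatially varying permittivity (set high where residual energy is high).
% As in electrostatics, the minimizer is a gradient field:
\[
  \mathbf{D} = -\varepsilon \nabla \phi, \qquad \nabla \cdot \big(\varepsilon \nabla \phi\big) = -\rho ,
\]
% so the optimal routes follow the solution of this Poisson-type partial differential equation.
```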
In another part of this work we focus on the notions of responsiveness and conformance of TCP traffic in communication networks. We introduce the notion of responsiveness for TCP aggregates and define it as the degree to which a TCP aggregate reduces its sending rate to the network as a response to packet drops. We define metrics that describe the responsiveness of TCP aggregates, and suggest two methods for determining the values of these quantities. The first method is based on a test in which we drop a few packets from the aggregate intentionally and measure the resulting rate decrease of that aggregate. This kind of test is not robust to multiple simultaneous tests performed at different routers. We make the test robust to multiple simultaneous tests by using ideas from the CDMA approach to multiple-access channels in communication theory. Based on this approach, we introduce a test of responsiveness for aggregates and call it the CDMA-based Aggregate Perturbation Method (CAPM). We use CAPM to perform congestion control. A distinguishing feature of our congestion control scheme is that it maintains a degree of fairness among different aggregates. In the next step we modify CAPM to offer methods for estimating the proportion of an aggregate of TCP traffic that does not conform to protocol specifications, and hence may belong to a DDoS attack. Our methods work by intentionally perturbing the aggregate, dropping a very small number of packets from it and observing the response of the aggregate. We offer two methods for conformance testing. In the first method, we apply the perturbation tests to SYN packets being sent at the start of the TCP three-way handshake, and we use the fact that the rate of ACK packets being exchanged in the handshake should follow the rate of perturbations. In the second method, we apply the perturbation tests to the TCP data packets and use the fact that the rate of retransmitted data packets should follow the rate of perturbations. In both methods, we use signature-based perturbations, which means packet drops are performed at a rate given by a function of time. We use the analogy of our problem with multiple-access communication to find the signatures. Specifically, we assign orthogonal CDMA-based signatures to different routers in a distributed implementation of our methods. As a result of orthogonality, the performance does not degrade because of cross-interference from simultaneously testing routers. We have shown the efficacy of our methods through mathematical analysis and extensive simulation experiments.
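To illustrate the orthogonal-signature idea behind CAPM (a toy sketch only, not the authors' full procedure; the signature length, number of routers, and responsiveness values are hypothetical), each router perturbs the aggregate with a ±1 Walsh-Hadamard code and recovers its own response by correlation, so simultaneous tests at other routers cancel out:

```python
import numpy as np

def hadamard(n):
    """Walsh-Hadamard matrix of order n (n a power of two); rows are orthogonal +/-1 signatures."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# Hypothetical setup: 4 routers, each assigned one signature of length 8 time slots.
n_slots = 8
H = hadamard(n_slots)
signatures = H[1:5]                      # skip the all-ones row; 4 orthogonal signatures

# True (unknown) responsiveness of the aggregate to each router's perturbation.
true_response = np.array([0.9, 0.1, 0.5, 0.0])

# Observed aggregate rate change: superposition of all routers' perturbations plus noise.
rng = np.random.default_rng(0)
observed = signatures.T @ true_response + 0.05 * rng.standard_normal(n_slots)

# Each router recovers its own coefficient by correlating with its signature;
# orthogonality means the other routers' simultaneous tests average out.
estimates = signatures @ observed / n_slots
print(np.round(estimates, 2))
```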
Abstract:
We study the response of dry granular materials to external stress using experiment, simulation, and theory. We derive a Ginzburg-Landau functional that enforces mechanical stability and positivity of contact forces. In this framework, the elastic moduli depend only on the applied stress. A combination of this feature and the positivity constraint leads to stress correlations whose shape and magnitude are extremely sensitive to the nature of the applied stress. The predictions from the theory describe the stress correlations for both simulations and experiments semiquantitatively. © 2009 The American Physical Society.
Abstract:
In this exploratory research we analyze the structure sense evidenced by 33 secondary students (16-18 years old) in tasks requiring them to reproduce the structure of given algebraic expressions. The expressions used were algebraic fractions related to algebraic identities. There were large differences in the students' performance, which allowed us to distinguish levels in students' structure sense. Questions and conjectures to be addressed in future research are presented.
Abstract:
The formulation of the carrier-phase momentum and enthalpy source terms in mixed Lagrangian-Eulerian models of particle-laden flows is frequently reported inaccurately. Under certain circumstances, this can lead to erroneous implementations, which violate physical laws. A particle- rather than carrier-based approach is suggested for a consistent treatment of these terms.
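As a minimal sketch of the particle-based treatment suggested above (illustrative names and data structures, not the paper's formulation), the carrier-phase momentum source in each cell is accumulated directly from the forces exerted on the particles occupying that cell, so interphase momentum exchange balances by construction:

```python
import numpy as np

def carrier_momentum_sources(particles, n_cells, cell_volume):
    """
    Particle-based accumulation of the carrier-phase momentum source (per unit volume).
    Each particle contributes minus the force the carrier exerts on it to the cell it
    occupies, so interphase momentum exchange is conserved by construction.
    Each particle is a dict with 'cell' (int) and 'force' (force on the particle, N).
    """
    S = np.zeros((n_cells, 3))
    for p in particles:
        S[p['cell']] -= p['force'] / cell_volume
    return S

# Hypothetical example: two particles in cell 0, one in cell 2.
particles = [
    {'cell': 0, 'force': np.array([1e-6, 0.0, 0.0])},
    {'cell': 0, 'force': np.array([2e-6, 0.0, 0.0])},
    {'cell': 2, 'force': np.array([0.0, -1e-6, 0.0])},
]
print(carrier_momentum_sources(particles, n_cells=4, cell_volume=1e-6))
```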
Abstract:
The first phase in the design, development and implementation of a comprehensive computational model of a copper stockpile leach process is presented. The model accounts for transport phenomena through the stockpile, reaction kinetics for the important mineral species, oxygen and bacterial effects on the leach reactions, plus heat, energy and acid balances for the overall leach process. The paper describes the formulation of the leach process model and its implementation in PHYSICA+, a computational fluid dynamics (CFD) software environment. The model draws on a number of phenomena to represent the competing physical and chemical features active in the process. The phenomena are essentially represented by a three-phase (solid-liquid-gas) multi-component transport system; novel algorithms and procedures are required to solve the model equations, including a methodology for dealing with multiple chemical species with different reaction rates in ore represented by multiple particle size fractions. Some initial validation results and application simulations are shown to illustrate the potential of the model.
Abstract:
The purpose of the present study was to use attenuated total reflectance-Fourier transform infrared spectroscopy (ATR-FTIR) and target factor analysis (TFA) to investigate the permeation of model drugs and formulation components through Carbosil® membrane and human skin. Diffusion studies of saturated solutions in 50:50 water/ethanol of methyl paraben (MP), ibuprofen (IBU) and caffeine (CF) were performed on Carbosil® membrane. The spectroscopic data were analysed by target factor analysis, and evolution profiles of the signal for each component (i.e. the drug, water, ethanol and membrane) over time were obtained. Results showed that the data were successfully deconvoluted, as correlations between factors from the data and reference spectra of the components were above 0.8 in all cases. Good reproducibility of the evolution profiles was obtained over three runs. From the evolution profiles it was observed that water diffused better through the Carbosil® membrane than ethanol, confirming the hydrophilic properties of the Carbosil® membrane used. IBU diffused more slowly than MP and CF. The evolution profile of CF was very similar to that of water, probably because of the high solubility of CF in water, indicating that both compounds diffuse concurrently. The second part of the work involved a study of the evolution profiles of the components of a commercial topical gel containing 5% (w/w) ibuprofen as it permeated through human skin. Although this system was much more complex, the data were still successfully deconvoluted and the different components of the formulation were identified, except for benzyl alcohol; this may be attributed to the low concentrations of benzyl alcohol used in topical formulations. (C) 2009 Elsevier B.V. All rights reserved.
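As a minimal sketch of the deconvolution idea described above (an assumed workflow, not the paper's exact TFA implementation; the array names and synthetic demo data are illustrative), the time-resolved spectra are factored by SVD, each candidate reference spectrum is tested by projecting it onto the retained factor space and checking the correlation against a threshold such as 0.8, and evolution profiles are then obtained by least squares against the accepted targets:

```python
import numpy as np

def target_factor_analysis(A, targets, n_factors, corr_threshold=0.8):
    """
    A        : (n_times, n_wavenumbers) matrix of spectra collected over time.
    targets  : (n_components, n_wavenumbers) reference spectra of candidate components.
    Returns the indices of accepted targets and their evolution profiles over time.
    """
    # Abstract factor space: leading right singular vectors of the data matrix.
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    V = Vt[:n_factors]                       # (n_factors, n_wavenumbers)

    accepted = []
    for k, t in enumerate(targets):
        t_hat = V.T @ (V @ t)                # projection of the target into the factor space
        corr = np.corrcoef(t, t_hat)[0, 1]
        if corr >= corr_threshold:           # target is consistent with the data
            accepted.append(k)

    T = targets[accepted]                    # (n_accepted, n_wavenumbers)
    profiles = A @ np.linalg.pinv(T)         # least-squares evolution profiles, (n_times, n_accepted)
    return accepted, profiles

# Tiny synthetic demo: two Gaussian "spectra" mixed with time-varying weights.
wn = np.linspace(0, 1, 200)
s1 = np.exp(-((wn - 0.3) / 0.05) ** 2)
s2 = np.exp(-((wn - 0.7) / 0.05) ** 2)
C = np.column_stack([np.linspace(1, 0, 50), np.linspace(0, 1, 50)])
A = C @ np.vstack([s1, s2])
print(target_factor_analysis(A, np.vstack([s1, s2]), n_factors=2)[0])
```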