934 results for Millionaire Problem, Efficiency, Verifiability, Zero Test, Batch Equation
Abstract:
The goal of this paper is to analyze the character of the first Hopf bifurcation (subcritical versus supercritical) that appears in a one-dimensional reaction-diffusion equation with nonlinear boundary conditions of logistic type with delay. We showed in previous work [Arrieta et al., 2010] that if the delay is small, the unique non-negative equilibrium solution is asymptotically stable. We also showed that, as the delay increases and crosses a certain critical value, this equilibrium becomes unstable and undergoes a Hopf bifurcation. This bifurcation is the first of a cascade occurring as the delay goes to infinity. The structure of this cascade depends on the parameters appearing in the equation. In this paper, we show that the first bifurcation that occurs is supercritical; that is, when the delay is larger than the bifurcation value, stable periodic orbits branch off from the constant equilibrium.
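To fix ideas, a schematic problem of the type described above is written out below. This is only an illustrative sketch, with the delay $\tau$ entering through a logistic-type boundary nonlinearity; the precise equation and parameters are those of the paper and of [Arrieta et al., 2010], which may differ from this form.

```latex
% Schematic only: a 1-D diffusion equation whose nonlinearity, of logistic
% type with delay tau, acts through the boundary condition.
\begin{aligned}
  u_t(x,t) &= u_{xx}(x,t), && x \in (0,1),\ t > 0,\\
  \partial_\nu u(x,t) &= \lambda\, u(x,t)\bigl(1 - u(x,t-\tau)\bigr), && x \in \{0,1\},\ t > 0.
\end{aligned}
```

In this notation, the result stated in the abstract is that when $\tau$ crosses its first critical value, the bifurcating periodic orbits are stable (a supercritical Hopf bifurcation).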
Abstract:
In this work we show that the eigenvalues of the Dirichlet problem for the biharmonic operator are generically simple in the set of Z_2-symmetric regions of R^n, n >= 2, with a suitable topology. To accomplish this, we combine Baire's lemma, a generalised version of the transversality theorem due to Henry [Perturbation of the Boundary in Boundary Value Problems of PDEs, London Mathematical Society Lecture Note Series 318 (Cambridge University Press, 2005)], and the method of rapidly oscillating functions developed in [A. L. Pereira and M. C. Pereira, Mat. Contemp. 27 (2004) 225-241].
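For reference, the eigenvalue problem referred to above, the Dirichlet problem for the biharmonic operator on a bounded region $\Omega \subset \mathbb{R}^n$, is usually written as

```latex
\begin{aligned}
  \Delta^2 u &= \lambda\, u && \text{in } \Omega,\\
  u = \frac{\partial u}{\partial \nu} &= 0 && \text{on } \partial\Omega,
\end{aligned}
```

and genericity of simple eigenvalues means that, for regions in a residual (dense G-delta) subset of the considered class of Z_2-symmetric regions, every eigenvalue $\lambda$ has a one-dimensional eigenspace.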
Abstract:
We study the existence and stability of periodic travelling-wave solutions for generalized Benjamin-Bona-Mahony and Camassa-Holm equations. To prove orbital stability, we use the abstract results of Grillakis-Shatah-Strauss and the Floquet theory for periodic eigenvalue problems.
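For context, the classical (ungeneralized) forms of the two model equations are recalled below; the paper treats generalized versions, so the nonlinearities studied there may differ.

```latex
% Benjamin-Bona-Mahony (BBM):
u_t + u_x + u\,u_x - u_{xxt} = 0,
% Camassa-Holm (CH, with zero dispersion parameter):
u_t - u_{xxt} + 3\,u\,u_x = 2\,u_x\,u_{xx} + u\,u_{xxx}.
```

The orbital stability argument mentioned above consists in verifying the spectral and convexity conditions of the Grillakis-Shatah-Strauss theory for the periodic travelling waves, with the spectral information supplied by Floquet theory for the associated periodic eigenvalue problems.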
Abstract:
The peritrophic membrane (PM) is an anatomical structure surrounding the food bolus in most insects. Rejecting the idea that the PM evolved from the coating mucus and plays the same protective role, novel functions were proposed and experimentally tested. The theoretical principles underlying the digestive enzyme recycling mechanism were described and used to develop an algorithm to calculate enzyme distributions along the midgut and to infer secretory and absorptive sites. The activity of a Spodoptera frugiperda microvillar aminopeptidase decreases by 50% if placed in the presence of midgut contents. S. frugiperda trypsin preparations placed into dialysis bags in stirred and unstirred media have activities of 210% and 160%, respectively, over the activities of samples in a test tube. The ectoperitrophic fluid (EF) present in the midgut caeca of Rhynchosciara americana may be collected. If the enzymes restricted to this fluid are assayed in the presence of PM contents (PMC), their activities decrease by at least 58%. The lack of a PM caused by calcofluor feeding impairs growth due to an increase in the metabolic cost associated with the conversion of food into body mass. This probably results from an increase in digestive enzyme excretion and a futile homeostatic attempt to re-establish the destroyed midgut gradients. The experimental models support the view that the PM enhances digestive efficiency by: (a) preventing non-specific binding of undigested material onto the cell surface; (b) preventing enzyme excretion by allowing enzyme recycling powered by an ectoperitrophic counterflux of fluid; (c) removing from inside the PM the oligomeric molecules that may inhibit the enzymes involved in initial digestion; (d) restricting oligomer hydrolases to the ectoperitrophic space (ECS) to avoid probable partial inhibition by non-dispersed undigested food. Finally, PM functions are discussed regarding insects feeding on any diet. (C) 2008 Elsevier Ltd. All rights reserved.
Abstract:
Carra sawdust pretreated with formaldehyde was used to adsorb RR239 (a reactive azo dye) at varying pH and zerovalent iron (ZVI) dosages. Modeling of the kinetic results shows that the sorption process is best described by the pseudo-second-order model. Batch experiments suggest that the decolorization efficiency was strongly enhanced by the presence of ZVI and a low solution pH. The kinetics of dye sorption by the mixed sorbent (5 g of sawdust and 180 mg of ZVI) at pH 2.0 was rapid, reaching more than 90% of the total decolorization in three minutes.
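For reference, the pseudo-second-order model mentioned above is conventionally written as below, where $q_t$ and $q_e$ are the amounts of dye sorbed per unit mass of sorbent at time $t$ and at equilibrium, and $k_2$ is the rate constant; the linearized form is the one usually fitted to batch data.

```latex
\frac{dq_t}{dt} = k_2\,\bigl(q_e - q_t\bigr)^{2}
\qquad\Longrightarrow\qquad
\frac{t}{q_t} = \frac{1}{k_2\,q_e^{2}} + \frac{t}{q_e}.
```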
Abstract:
The acylation of three cellulose samples by acetic anhydride (Ac2O) in the solvent system LiCl/N,N-dimethylacetamide (DMAc) (4 h, 110 °C) has been revisited in order to investigate the dependence of the reaction efficiency on the structural characteristics of cellulose and on its aggregation in solution. The cellulose samples employed included microcrystalline cellulose, MCC; mercerized cotton linters, M-cotton; and mercerized sisal, M-sisal. The reaction efficiency expresses the relationship between the degree of substitution, DS, of the ester obtained and the molar ratio Ac2O/AGU (anhydroglucose unit of the biopolymer); 100% efficiency means obtaining DS = 3 at Ac2O/AGU = 3. For all celluloses, the dependence of DS on Ac2O/AGU is described by an exponential decay equation, DS = DS0 - A·exp[-(Ac2O/AGU)/B], where A and B are regression coefficients and DS0 is the calculated maximum degree of substitution achieved under the conditions of each experiment. Values of B are clearly dependent on the cellulose employed, B(M-cotton) > B(M-sisal) > B(MCC); they correlate qualitatively with the degree of polymerization of cellulose, and linearly with the aggregation number, Nagg, of the dissolved biopolymer, as calculated from static light scattering measurements: B = 1.709 + 0.034 Nagg. To our knowledge, this is the first report of the latter correlation; it shows the importance of the physical state of dissolved cellulose and serves to explain, in part, the need to use distinct reaction conditions for MCC and fibrous celluloses, in particular Ac2O/AGU, time and temperature.
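Written out, the two fitted relations quoted above are

```latex
\mathrm{DS} \;=\; \mathrm{DS}_{0} \;-\; A\,\exp\!\left[-\,\frac{\mathrm{Ac_2O/AGU}}{B}\right],
\qquad
B \;=\; 1.709 \;+\; 0.034\,N_{\mathrm{agg}},
```

so that DS0 is the plateau (maximum) degree of substitution and B sets the characteristic Ac2O/AGU scale over which that plateau is approached.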
Abstract:
Different compositions of visible-light-curable triethylene glycol dimethacrylate/bisglycidyl methacrylate copolymers used in dental resin formulations were prepared through copolymerization photoinitiated by a camphorquinone/ethyl 4-dimethylaminobenzoate system irradiated with an Ultrablue IS light-emitting diode. The obtained copolymers were evaluated with differential scanning calorimetry. From the data for the heat of polymerization, before and after light exposure, obtained from exothermic differential scanning calorimetry curves, the light polymerization efficiency, or degree of conversion of double bonds, was calculated. The glass-transition temperature was also determined before and after photopolymerization. After the photopolymerization, the glass-transition temperature was not well defined because of the breadth of the transition region associated with the properties of the photocured dimethacrylate. The glass-transition temperature after photopolymerization was determined experimentally and compared with the values determined with the Fox equation. In all mixtures, the experimental value was lower than the calculated value. Scanning electron microscopy was used to analyze the morphological differences in the prepared copolymer structures. (C) 2007 Wiley Periodicals, Inc.
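For reference, the Fox equation used for the comparison above predicts the copolymer glass-transition temperature $T_g$ from the weight fractions $w_1, w_2$ and the glass-transition temperatures $T_{g,1}, T_{g,2}$ of the two component homopolymers (all temperatures in kelvin):

```latex
\frac{1}{T_g} \;=\; \frac{w_1}{T_{g,1}} \;+\; \frac{w_2}{T_{g,2}}.
```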
Abstract:
This thesis work concentrates on a very interesting problem, the Vehicle Routing Problem (VRP). In this problem, customers or cities have to be visited and packages have to be transported to each of them, starting from a base point on the map. The goal is to solve the transportation problem: to deliver the packages on time, to deliver enough packages to each customer, to use only the available resources and, of course, to be as efficient as possible.

Although this problem may seem easy to solve for a small number of cities or customers, it is not. The algorithm has to deal with several constraints, for example opening hours, package delivery times, truck capacities, etc. This makes the problem a so-called Multi Constraint Optimization Problem (MCOP). What is more, the problem is intractable with the amount of computational power available to most of us. As the number of customers grows, the amount of calculation grows exponentially, because all constraints have to be checked for each customer, and it should not be forgotten that the goal is to find a solution that is good enough before the time for the calculation is up. The first chapter introduces the problem from its basics, the Traveling Salesman Problem, and uses some theoretical and mathematical background to show why it is so hard to optimize and why, although it is so hard and no best algorithm is known for a large number of customers, it is still worth dealing with. Just think about a huge transportation company with tens of thousands of trucks and millions of customers: how much money could be saved if we knew the optimal path for all our packages?

Although no best algorithm is known for this kind of optimization problem, the second and third chapters try to give an acceptable solution by describing two algorithms: the Genetic Algorithm and Simulated Annealing. Both of them are based on imitating processes from nature and materials science. These algorithms will hardly ever find the best solution for the problem, but they are able to give a very good solution in special cases within an acceptable calculation time. In these chapters (2nd and 3rd) the Genetic Algorithm and Simulated Annealing are described in detail, from their basis in the "real world" through their terminology to a basic implementation of them. The work puts a stress on the limits of these algorithms, their advantages and disadvantages, and also compares them to each other.

Finally, after the theory is presented, a simulation is executed in an artificial VRP environment with both Simulated Annealing and the Genetic Algorithm. They solve the same problem in the same environment and are compared to each other. The environment and the implementation are also described, as are the test results obtained. Finally, possible improvements of these algorithms are discussed, and the work tries to answer the "big" question, "Which algorithm is better?", if this question even exists.
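As a concrete illustration of one of the two metaheuristics discussed in the thesis, the sketch below is a minimal simulated-annealing loop for a TSP-like routing instance. It is a toy, not the thesis's implementation: the 2-opt neighbourhood, the geometric cooling schedule and all parameter values are assumptions made here for the example.

```python
"""Minimal simulated-annealing sketch for a TSP-like routing instance.

Illustrative only: the 2-opt neighbourhood, the geometric cooling schedule
and the parameter values are assumptions, not the thesis's implementation.
"""
import math
import random


def tour_length(tour, dist):
    """Total length of the closed tour over a symmetric distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))


def simulated_annealing(dist, t_start=100.0, t_end=1e-3, alpha=0.99, moves_per_temp=100):
    """Anneal a random tour using 2-opt (segment reversal) moves."""
    n = len(dist)
    tour = list(range(n))
    random.shuffle(tour)
    best, best_len = tour[:], tour_length(tour, dist)
    cur_len = best_len
    t = t_start
    while t > t_end:
        for _ in range(moves_per_temp):
            i, j = sorted(random.sample(range(n), 2))
            candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
            cand_len = tour_length(candidate, dist)
            # Always accept improvements; accept worsenings with Boltzmann probability.
            if cand_len < cur_len or random.random() < math.exp((cur_len - cand_len) / t):
                tour, cur_len = candidate, cand_len
                if cur_len < best_len:
                    best, best_len = tour[:], cur_len
        t *= alpha  # geometric cooling
    return best, best_len


if __name__ == "__main__":
    random.seed(0)
    points = [(random.random(), random.random()) for _ in range(20)]
    dist = [[math.dist(a, b) for b in points] for a in points]
    _, length = simulated_annealing(dist)
    print(f"best tour length found: {length:.3f}")
```

A genetic algorithm for the same instance would replace the single annealed tour with a population of tours evolved by selection, crossover and mutation; the two approaches can then be compared on solution quality and running time, as is done in the simulation part of the thesis.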
Abstract:
Testing is an area of system development. Tests can be performed manually or automated. Test activities can be supported by Word documents and Excel sheets for documenting and executing test cases and for follow-up, but there are also test tools designed to support and facilitate the testing process and its activities. This study describes manual testing and identifies the strengths and weaknesses of manual testing with the testing tool Microsoft Test Manager (MTM) and of manual testing using the test-case and test-log templates developed by the testers at Sogeti. The result that emerged from the problem and strength analysis, and from the analysis of literature studies and first-hand experiences (in terms of creating, documenting and executing test cases), is the following set of weaknesses and strengths. The strengths of the test tool are that it contains the needed functionality in one place and that it is available when needed, without having to open other programs, which saves many activity steps. The strengths of testing without the support of test tools are mainly that it is easy to learn, gives a good overview, makes it easy to format text as desired, and is flexible to changes during the execution of a test case. The weaknesses of testing with the support of the test tool include that it is difficult to get a good overview of the entire test case, that it is not possible to format the text in the test steps, and that the test steps cannot be modified during execution. It is also difficult to use some of the test design techniques of TMap, for example a checklist, when using the test tool MTM. The weakness of testing without the support of the testing tool MTM is that the tester has many more activity steps to perform compared to doing the same activities with the support of MTM. There is also more to remember, because the documents the tester uses are not directly linked. Altogether, the strengths of the test tool stand out when it comes to supporting the testing process.
Abstract:
This report describes the work done creating a computer model of a kombi tank from Consolar. The model was created with Presim/Trnsys, and Fittrn and DF were used to identify the parameters. Measurements were carried out and used to identify the values of the parameters in the model. The identifications were first done for every circuit separately. After that, all parameters are normally identified together using all the measurements. Finally, the model should be compared with other measurements, preferably realistic ones. The two last steps have not yet been carried out because of problems finding a good model for the domestic hot water circuit. The model of the domestic hot water circuit gives relatively good results for low flows of 5 l/min, but is not good for higher flows. The report gives suggestions for improving the model. However, there was not enough time to test this within the project, as much time was spent trying to solve problems with the model crashing. Suggestions for improving the model for the domestic circuit are given in chapter 4.4. The improved equations that are to be used in the improved model are given by equations 4.18, 4.19 and 4.22. Also for the boiler circuit and the solar circuit there are improvements that can be done. The model presented here has a few shortcomings, but with some extra work an improved model can be created. The attachment (Bilaga 1) contains a description of the used model and all the identified parameters. A qualitative assessment of the store was also performed based on the measurements and the modelling carried out. The following summary can be given:

Hot water preparation. The principle for controlling the flow on the primary side seems to work well in order to achieve good stratification. Temperatures in the bottom of the store after a short use of hot water, at a cold-water temperature of 12 °C, were around 28-30 °C. This was almost independent of the temperature in the store and the DHW flow. The measured UA-values of the heat exchangers are not very reliable, but indicate that the heat transfer rates are much better than for the Conus 500 and in the same range as for other stores tested at SERC. The function of the mixing valve is not perfect (see diagram 4.3, where Tout1 is the outlet hot water temperature, and Tdhwo and Tdhw1 are the inlet temperatures to the hot and cold side of the valve, respectively). The outlet temperature varies a lot with different temperatures in the storage and goes down from 61 °C to 47 °C before the cold port is fully closed. This makes it difficult to find a suitable temperature setting and also gives a risk that the auxiliary heating is increased, instead of the set temperature of the valve, when the hot water temperature is too low.

Collector circuit. The UA-value of the collector heat exchanger is much higher than the value for the Conus 500, and in the same range as the heat exchangers in other stores tested at SERC.

Boiler circuit. The valve in the boiler circuit is used to supply water from the boiler at two different heights, depending on the temperature of the water. At temperatures from the boiler above 58.2 °C, all the water is injected at the upper inlet. At temperatures below 53.9 °C, all the water is injected at the lower inlet. At 56 °C the water flow is divided equally between the two inlets. Detailed studies of the behaviour at the upper inlet show that better accuracy of the model would have been achieved using three double ports in the model instead of two. The shape of the upper inlet causes turbulence, which could be modelled using two different inlets.

Heat losses. The heat losses per m3 are much smaller for the Solus 1050 than for the Conus 500 storage. However, they are higher than those for some good stores tested at SERC. The pipes that penetrate the insulation give air leakage and cold bridges, which could be a major part of the losses from the storage. The identified losses from the bottom of the storage are exceptionally high, but have less importance for the heat losses, due to the lower temperatures in the bottom. High losses from the bottom can be caused by air leakage through the insulation at the pipe connections of the storage.
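As an illustration of how the boiler-inlet valve behaviour described above could be represented in a model, the sketch below splits the boiler flow between the two inlets with a linear transition between the reported thresholds; the linear interpolation is an assumption made here, chosen because it roughly reproduces the reported equal split at 56 °C.

```python
def upper_inlet_fraction(t_boiler, t_low=53.9, t_high=58.2):
    """Fraction of the boiler flow routed to the upper inlet.

    Below t_low everything goes to the lower inlet, above t_high everything
    goes to the upper inlet; in between a linear transition is assumed,
    which gives roughly an equal split at 56 degrees C as reported.
    """
    if t_boiler <= t_low:
        return 0.0
    if t_boiler >= t_high:
        return 1.0
    return (t_boiler - t_low) / (t_high - t_low)


if __name__ == "__main__":
    for t in (53.0, 56.0, 59.0):
        f = upper_inlet_fraction(t)
        print(f"{t:.1f} degC -> upper inlet {f:.2f}, lower inlet {1 - f:.2f}")
```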
Abstract:
The need for heating and cooling in buildings constitutes a considerable part of the total energy use in a country, and reducing this need is of utmost importance in order to reach national and international goals for reducing energy use and emissions. One important way of reaching these goals is to increase the proportion of renewable energy used for heating and cooling of buildings. Perhaps the largest obstacle to this is the frequently occurring mismatch between the availability of renewable energy and the need for heating or cooling, which hinders this energy from being used directly. This is one of the problems that can be solved by using thermal energy storage (TES) in order to save the heat or cold from when it is available to when it is needed. This thesis focuses on the combination of TES techniques and buildings to achieve increased energy efficiency for heating and cooling. Various techniques used for TES, as well as the combination of TES and buildings, have been investigated and summarized through an extensive literature review. A survey of the Swedish building stock was also performed in order to define building types common in Sweden. Within the scope of this thesis, the survey resulted in the selection of three building types, two single-family houses and one office building, out of which the two residential buildings were used in a simulation case study of passive TES with increased thermal mass (both sensible and latent). The second case study presented in the thesis is an evaluation of an existing seasonal borehole storage of solar heat for a residential community. In this case, real measurement data were used in the evaluation and in comparisons with earlier evaluations. The literature reviews showed that using TES opens up potential for reduced energy demand and reduced peak heating and cooling loads, as well as possibilities for an increased share of renewable energy to cover the energy demand. By using passive storage through increased thermal mass of a building it is also possible to reduce variations in the indoor temperature and especially to reduce excess temperatures during warm periods, which could make it possible to avoid active cooling in a building that would otherwise need it. The analysis of the combination of TES and building types confirmed that TES has a significant potential for increased energy efficiency in buildings, but also highlighted the fact that much research is still required before some of the technologies can become commercially available. In the simulation case study it was concluded that only a small reduction in heating demand is possible with increased thermal mass, but that the time with indoor temperatures above 24 °C can be reduced by up to 20%. The case study of the borehole storage system showed that although the storage itself worked as planned, heat losses in the rest of the system, as well as some problems with the system operation, resulted in a lower solar fraction than projected. The work presented in this thesis has shown that TES is already used successfully in many building applications (e.g. domestic hot water stores and water tanks for storing solar heat), but that there is still much potential in further use of TES. There are, however, barriers, such as the need for more research on some storage technologies and storage materials, especially phase change material storage and thermochemical storage.
Abstract:
This paper is concerned with cost efficiency in achieving the Swedish national air quality objectives under uncertainty. To realize an ecologically sustainable society, the Swedish parliament has approved a set of interim and long-term pollution reduction targets. However, there are considerable quantification uncertainties about the effectiveness of the proposed pollution reduction measures. In this paper, we develop a multivariate stochastic control framework to deal with the cost efficiency problem with multiple pollutants. Based on cost and technological data collected by several national authorities, we explore the implications of alternative probabilistic constraints. It is found that a composite probabilistic constraint induces considerably lower abatement cost than separable probabilistic restrictions. This trend is reinforced by the presence of positive correlations between reductions in the multiple pollutants.
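Schematically, the two kinds of probabilistic constraints compared above can be written as follows, with abatement decisions $x$, abatement cost $C(x)$, uncertain achieved reductions $q_i(x)$ for pollutant $i$, reduction targets $Q_i$ and reliability level $\alpha$; this is only a generic chance-constrained formulation, not necessarily the exact one used in the paper.

```latex
\text{separable: } \min_{x}\ C(x)\ \ \text{s.t.}\ \ \Pr\bigl(q_i(x) \ge Q_i\bigr) \ge \alpha \quad \forall i,
\qquad
\text{composite: } \min_{x}\ C(x)\ \ \text{s.t.}\ \ \Pr\bigl(q_i(x) \ge Q_i \ \ \forall i\bigr) \ge \alpha.
```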
Abstract:
A customer is presumed to gravitate to a facility according to the distance to it and its attractiveness. Regarding the location of the facility, however, the presumption is that the customer opts for the shortest route to the nearest facility. This paradox was recently resolved by the introduction of the gravity p-median model. The model is yet to be implemented and tested empirically. We implemented the model in an empirical problem of locating locksmiths, vehicle inspections, and retail stores of vehicle spare-parts, and we compared the solutions with those of the p-median model. We found the gravity p-median model to be of limited use for the problem of locating facilities, as it either gives solutions similar to the p-median model, or it gives unstable solutions due to a non-concave objective function.
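Schematically, the difference between the two models can be written as below, where $w_i$ is the demand at point $i$, $d_{ij}$ the distance from $i$ to candidate facility $j$, $A_j$ the attractiveness of facility $j$ and $\beta$ a distance-decay parameter; the exponential decay is shown only for concreteness, and the exact gravity (Huff-type) allocation rule used in the study may differ.

```latex
% Classical p-median: each customer uses the nearest open facility.
\min_{|S| = p}\ \sum_i w_i \min_{j \in S} d_{ij}
% Gravity p-median: demand is split among facilities by a gravity rule.
\min_{|S| = p}\ \sum_i w_i \sum_{j \in S}
  \frac{A_j\, e^{-\beta d_{ij}}}{\sum_{k \in S} A_k\, e^{-\beta d_{ik}}}\; d_{ij}
```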
Abstract:
This work studies the dissolved air flotation (DAF) process as a method for removing emulsified oils. A literature review was carried out covering both the flotation process and the physico-chemical fundamentals involved in the stability and flotation of emulsions. Emulsion stability tests and batch and continuous DAF tests were performed using emulsions of 1000 mg/l of diesel oil and of heptane. The efficiency of aluminium sulfate and of polymeric compounds as destabilizing agents was verified. The influence of the type and concentration of the destabilizing agents, the emulsion pH, the type of water and the agitation was studied in the stability tests as well as in the dissolved air flotation tests. The results obtained indicated the technical feasibility of applying the process to the clarification of oily effluents, reaching satisfactory residual turbidity values (below 5 NTU). Aluminium sulfate proved to be the more efficient destabilizing agent for both emulsions in the stability studies and in the batch DAF tests, but the cationic polymer C-505 gave better results in the continuous flotation tests. The present work also presents a sizing and cost study of the DAF unit for the removal of emulsified oils, taking into account the investment and operating costs of the unit. The total cost of the unit is found to depend on the feed flow rate, the recycle rate, the air-to-solids mass ratio and the initial oil concentration. It is concluded that, for the air-to-solids ratios used in industry (generally below 0.1), the operating cost of the reagent is the determining parameter of the total present cost of the unit. As a result, the total present cost is essentially independent of the relationship between the loading rate and the air-to-solids ratio.
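For reference, a standard textbook design relation for a pressurized-recycle DAF unit links the air-to-solids (here air-to-oil) ratio to the recycle flow; the thesis's own sizing and cost procedure is not reproduced here, so the formula below is only the conventional starting point, with $s_a$ the air solubility at the operating temperature (mL/L), $f$ the fraction of saturation achieved, $P$ the absolute pressure (atm), $R$ the recycle flow, $Q$ the feed flow and $S_a$ the influent oil concentration (mg/L).

```latex
\frac{A}{S} \;=\; \frac{1.3\, s_a\,\bigl(f\,P - 1\bigr)\, R}{S_a\, Q}
```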
Abstract:
Electronic applications are currently developed under the reuse-based paradigm. This design methodology presents several advantages for the reduction of design complexity, but brings new challenges for the test of the final circuit. The access to embedded cores, the integration of several test methods, and the optimization of the several cost factors are just a few of the problems that need to be tackled during test planning. Within this context, this thesis proposes two test planning approaches that aim at reducing the test costs of a core-based system by means of hardware reuse and integration of the test planning into the design flow. The first approach considers systems whose cores are connected directly or through a functional bus. The test planning method consists of a comprehensive model that includes the definition of a multi-mode access mechanism inside the chip and a search algorithm for the exploration of the design space. The access mechanism model considers the reuse of functional connections as well as partial test buses, core transparency, and other bypass modes. The test schedule is defined in conjunction with the access mechanism so that good trade-offs among the costs of pins, area, and test time can be sought. Furthermore, system power constraints are also considered. This expansion of concerns makes possible an efficient, yet fine-grained, search in the huge design space of a reuse-based environment. Experimental results clearly show the variety of trade-offs that can be explored using the proposed model, and its effectiveness in optimizing the system test plan. Networks-on-chip are likely to become the main communication platform of systems-on-chip. Thus, the second approach presented in this work proposes the reuse of the on-chip network for the test of the cores embedded in the systems that use this communication platform. A power-aware test scheduling algorithm aiming at exploiting the network characteristics to minimize the system test time is presented. The reuse strategy is evaluated considering a number of system configurations, such as different positions of the cores in the network, power consumption constraints, and the number of interfaces with the tester. Experimental results show that the parallelization capability of the network can be exploited to reduce the system test time, whereas area and pin overhead are strongly minimized. In this manuscript, the main problems of the test of core-based systems are first identified and the current solutions are discussed. The problems tackled by this thesis are then listed and the test planning approaches are detailed. Both test planning techniques are validated on the recently released ITC'02 SoC Test Benchmarks, and further compared to other test planning methods from the literature. This comparison confirms the efficiency of the proposed methods.
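As an illustration of the kind of power-constrained test scheduling discussed above, the sketch below packs core tests into parallel sessions so that the summed test power of each session stays below a budget, and then reports the resulting total test time. It is a deliberately simple greedy heuristic with made-up numbers, not the scheduling algorithm proposed in the thesis.

```python
"""Toy power-constrained, session-based test scheduler (illustration only)."""


def schedule_sessions(cores, power_budget):
    """cores: list of (name, test_time, test_power) tuples.

    Greedily fills sessions: longest tests are placed first, and a core is
    added to the current session only if the session's summed power stays
    within the budget.  Returns the list of sessions.
    """
    pending = sorted(cores, key=lambda c: c[1], reverse=True)
    sessions = []
    while pending:
        session, used_power, remaining = [], 0.0, []
        for name, time, power in pending:
            if used_power + power <= power_budget:
                session.append((name, time, power))
                used_power += power
            else:
                remaining.append((name, time, power))
        sessions.append(session)
        pending = remaining
    return sessions


if __name__ == "__main__":
    cores = [("cpu", 120, 40.0), ("dsp", 90, 35.0), ("mem", 80, 20.0),
             ("dma", 60, 15.0), ("uart", 30, 5.0)]
    sessions = schedule_sessions(cores, power_budget=60.0)
    # A session lasts as long as its longest test; total time is the sum.
    total_time = sum(max(t for _, t, _ in s) for s in sessions)
    print(f"{len(sessions)} sessions, total test time = {total_time}")
```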