935 results for Upper bound method


Relevance:

80.00%

Publisher:

Abstract:

This work aimed to apply genetic algorithms (GA) and particle swarm optimization (PSO) to cash balance management using the Miller-Orr model, a stochastic model that does not define a single ideal cash level but rather an oscillation range between a lower bound, an ideal balance, and an upper bound. The paper proposes applying GA and PSO to minimize the total cost of cash maintenance by determining the lower-bound parameter of the Miller-Orr model, under the assumptions presented in the literature. Computational experiments were used to develop and validate the models. The results indicate that both GA and PSO are applicable to determining the cash level from the lower bound, with the PSO model performing best; PSO had not previously been applied to this type of problem.
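Below is a minimal, hypothetical sketch of the approach described: a standard PSO searching for the Miller-Orr lower bound L that minimizes a daily total-cost function. The cost function (transfer cost plus opportunity cost plus an assumed shortage penalty) and all parameter values are illustrative stand-ins for the paper's "assumptions presented in literature", not the authors' actual formulation.

    # Hypothetical sketch: PSO search for the Miller-Orr lower bound L.
    import random

    F, k, sigma2 = 50.0, 0.0004, 1_000_000.0  # assumed transfer cost, daily rate, flow variance

    def total_cost(L):
        """Illustrative daily cost of the Miller-Orr policy anchored at lower bound L."""
        z = (3 * F * sigma2 / (4 * k)) ** (1 / 3)   # optimal distance from L to the return point
        h = 3 * z                                   # spread between lower and upper bound
        transfer = F * sigma2 / (z * (h - z))       # expected transfer cost per day
        holding = k * (L + (h + z) / 3)             # opportunity cost of the average balance
        shortage = 0.01 * max(0.0, 5_000.0 - L)     # assumed penalty below a safety level
        return transfer + holding + shortage

    def pso(n=30, iters=200, lo=0.0, hi=50_000.0, w=0.7, c1=1.5, c2=1.5):
        x = [random.uniform(lo, hi) for _ in range(n)]   # particle positions (candidate L)
        v = [0.0] * n                                    # particle velocities
        pbest = x[:]                                     # personal bests
        gbest = min(x, key=total_cost)                   # global best
        for _ in range(iters):
            for i in range(n):
                v[i] = (w * v[i] + c1 * random.random() * (pbest[i] - x[i])
                        + c2 * random.random() * (gbest - x[i]))
                x[i] = min(hi, max(lo, x[i] + v[i]))
                if total_cost(x[i]) < total_cost(pbest[i]):
                    pbest[i] = x[i]
            gbest = min(pbest, key=total_cost)
        return gbest, total_cost(gbest)

    print(pso())  # converges to the L trading off holding cost against the penalty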

Relevance:

80.00%

Publisher:

Abstract:

The thesis consists of three independent parts. Part I: Polynomial amoebas. We study the amoeba of a polynomial, as defined by Gelfand, Kapranov and Zelevinsky. A central role in the treatment is played by a certain convex function which is linear in each complement component of the amoeba, which we call the Ronkin function. This function is used in two different ways. First, we use it to construct a polyhedral complex, which we call a spine, approximating the amoeba. Second, the Monge-Ampère measure of the Ronkin function has interesting properties which we explore. This measure can be used to derive an upper bound on the area of an amoeba in two dimensions. We also obtain results on the number of complement components of an amoeba, and consider possible extensions of the theory to varieties of codimension higher than 1. Part II: Differential equations in the complex plane. We consider polynomials in one complex variable arising as eigenfunctions of certain differential operators, and obtain results on the distribution of their zeros. We show that in the limit when the degree of the polynomial approaches infinity, its zeros are distributed according to a certain probability measure. This measure has its support on the union of finitely many curve segments, and can be characterized by a simple condition on its Cauchy transform. Part III: Radon transforms and tomography. This part is concerned with different weighted Radon transforms in two dimensions, in particular the problem of inverting such transforms. We obtain stability results for this inverse problem for rather general classes of weights, including weights of attenuation type with data acquisition limited to a 180-degree range of angles. We also derive an inversion formula for the exponential Radon transform, with the same restriction on the angle.
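For orientation, the Ronkin function of a Laurent polynomial f in n complex variables is standardly defined (this background formula follows Passare and Rullgård and is not quoted from the thesis) as

    N_f(x) = \frac{1}{(2\pi i)^n} \int_{\mathrm{Log}^{-1}(x)} \log\lvert f(z) \rvert \, \frac{dz_1}{z_1} \cdots \frac{dz_n}{z_n}, \qquad \mathrm{Log}(z) = (\log|z_1|, \dots, \log|z_n|).

It is convex on R^n and affine-linear on each component of the amoeba complement, and in two dimensions its Monge-Ampère mass underlies the known estimate Area(A_f) ≤ π² Area(Δ_f), with Δ_f the Newton polygon (again stated here as background).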

Relevance:

80.00%

Publisher:

Abstract:

Often some interesting or simply curious points are left out when developing a theory. One of them seems to be the existence of an upper bound for the fraction of the area of a closed convex plane region lying outside a circle with which it shares a diameter, a problem stemming from the theory of isoperimetric inequalities. In this paper such a bound is constructed and shown to be attained for a particular region. It is also shown that convexity is a necessary condition: without it, the whole area can lie outside the circle.

Relevance:

80.00%

Publisher:

Abstract:

A regional envelope curve (REC) of flood flows summarises the current bound on our experience of extreme floods in a region. RECs are available for most regions of the world. Recent scientific papers introduced a probabilistic interpretation of these curves and formulated an empirical estimator of the recurrence interval T associated with a REC, which, in principle, enables RECs to be used for design purposes in ungauged basins. The main aim of this work is twofold. First, it extends the REC concept to extreme rainstorm events by introducing depth-duration envelope curves (DDECs), defined as the regional upper bound on all record rainfall depths observed to date for various rainfall durations. Second, it adapts the probabilistic interpretation proposed for RECs to DDECs and assesses the suitability of these curves for estimating the T-year rainfall event associated with a given duration, for large values of T. Probabilistic DDECs are complementary to regional frequency analysis of rainstorms, and their use in combination with a suitable rainfall-runoff model can provide useful indications of the magnitude of extreme floods in gauged and ungauged basins. The study focuses on two national datasets: peak-over-threshold (POT) series of rainfall depths with durations of 30 min and 1, 3, 9 and 24 hrs from 700 Austrian raingauges, and annual maximum series (AMS) of rainfall depths with durations spanning from 5 min to 24 hrs collected at 220 raingauges located in northern-central Italy. Estimating the recurrence interval of a DDEC requires quantifying the equivalent number of independent data, which, in turn, is a function of the cross-correlation among sequences. While quantifying and modelling intersite dependence is straightforward for AMS series, it may be cumbersome for POT series. This paper proposes a possible approach to this problem.
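The sketch below illustrates two of the ingredients in heavily simplified form: fitting a log-linear depth-duration envelope over station records, and approximating the equivalent number of independent station-years via an equicorrelation formula. Both the envelope shape and the effective-sample-size approximation are assumptions for illustration; the paper's actual estimator for POT and AMS data may differ.

    # Illustrative DDEC construction and a rough effective-sample-size estimate.
    import numpy as np

    def ddec(durations_h, depths_mm):
        """Fit a log-linear upper envelope: log10(depth) <= a + b*log10(duration).
        depths_mm[i, j] = depth recorded at station i for duration j."""
        x = np.log10(durations_h)
        y = np.log10(depths_mm.max(axis=0))   # record depth per duration
        b, a = np.polyfit(x, y, 1)            # regression through the records
        a += np.max(y - (a + b * x))          # shift up so no record exceeds the curve
        return a, b

    def effective_sample_size(n_stations, years, rho_bar):
        """Equivalent number of independent station-years under an assumed
        average cross-correlation rho_bar (equicorrelation approximation)."""
        n = n_stations * years
        return n / (1 + (n_stations - 1) * rho_bar)

    rng = np.random.default_rng(0)            # synthetic example data
    durations = np.array([0.5, 1.0, 3.0, 9.0, 24.0])                 # hours
    depths = 20 * durations**0.35 * rng.lognormal(0, 0.3, (700, 5))  # mm
    a, b = ddec(durations, depths)
    n_eff = effective_sample_size(700, 30, rho_bar=0.2)
    print(f"envelope: depth = 10^{a:.2f} * d^{b:.2f} mm; n_eff ~ {n_eff:.0f} station-years")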

Relevance:

80.00%

Publisher:

Abstract:

Membrane-based separation processes have acquired increasing importance in recent years because of their intrinsic energetic and environmental sustainability: some types of polymeric materials, showing adequate perm-selectivity, appear well suited to these applications because of their relatively low cost and easy processability. In this work two different types of polymeric membranes were studied in view of possible applications to gas separation processes: mixed matrix membranes (MMMs) and high-free-volume glassy polymers. Since the early 1990s it has been understood that the performance of polymeric materials in gas separation shows an upper bound in terms of permeability and selectivity: an increase of permeability is typically accompanied by a decrease of selectivity and vice versa, whereas several inorganic materials, such as zeolites or silica derivatives, can overcome this limitation. This motivated the idea of dispersing inorganic particles in polymeric matrices in order to obtain membranes with improved perm-selectivity. In particular, dispersing fumed silica nanoparticles in high-free-volume glassy polymers improves gas and vapour permeability in all cases, while the selectivity may either increase or decrease, depending on the material and the gas mixture: this effect is due to the capacity of the nanoparticles to disrupt local chain packing, increasing the size of the excess free-volume elements trapped in the polymer matrix. In this work different kinds of MMMs were fabricated using amorphous Teflon® AF or PTMSP and fumed silica: in all cases, a considerable increase of solubility, diffusivity and permeability of gases and vapours (n-alkanes, CO2, methanol) was observed, while the selectivity shows a non-monotonic trend with filler fraction. Moreover, the classical models for composites are not able to capture the increase of transport properties due to silica addition, so it was necessary to develop and validate an appropriate thermodynamic model that correctly predicts the mass-transport features of MMMs. Another material examined in this work is poly(trimethylsilyl norbornene) (PTMSN), a new-generation high-free-volume glassy polymer that, like PTMSP, shows unusually high permeability and selectivity towards the more condensable vapours. The two polymers differ in that PTMSN shows more pronounced chemical stability, owing to its double-bond-free structure. For this polymer a set of lattice fluid parameters was estimated, making possible a comparison between experimental and theoretical solubility isotherms for hydrocarbon and alcohol vapours: the successful modelling, based on the NELF model, offers a reliable alternative to direct sorption measurements, which are extremely time-consuming owing to the significant relaxation phenomena shown at each sorption step. Dilation experiments were also performed on this material, in order to quantify its dimensional stability in the presence of large, swelling vapour molecules.
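As a concrete baseline, the sketch below evaluates the classical Maxwell model for the effective permeability of a filled polymer, the kind of "classical model for composites" that the abstract reports cannot capture the observed enhancement. Parameter values are illustrative.

    # Classical Maxwell model for a dilute dispersion of spherical filler.
    def maxwell_permeability(P_c, P_d, phi_d):
        """P_c: permeability of the continuous (polymer) phase;
        P_d: permeability of the dispersed (filler) phase;
        phi_d: filler volume fraction."""
        num = P_d + 2 * P_c - 2 * phi_d * (P_c - P_d)
        den = P_d + 2 * P_c + phi_d * (P_c - P_d)
        return P_c * num / den

    # An impermeable filler (P_d = 0) can only *reduce* permeability here,
    # so any experimental increase upon silica addition is outside its reach.
    print(maxwell_permeability(P_c=100.0, P_d=0.0, phi_d=0.2))  # < 100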

Relevance:

80.00%

Publisher:

Abstract:

We present new algorithms to approximate the discrete volume of a polyhedral geometry using boxes defined by the US standard SAE J1100. This problem is NP-hard and has its main application in the car design process. The algorithms produce maximum weighted independent sets on a so-called conflict graph for a discretisation of the geometry. We present a framework to eliminate a large portion of the vertices of a graph without affecting the quality of the optimal solution. Using this framework we are also able to define the conflict graph without the use of a discretisation. For the solution of the maximum weighted independent set problem we designed an enumeration scheme which uses the restrictions of the SAE J1100 standard for an efficient upper-bound computation. We evaluate the packing algorithms in terms of solution quality compared to manually derived results. Finally, we compare our enumeration scheme to several other exact algorithms in terms of their runtime. Grid-based packings tend either to be loose or to have intersections between boxes. We therefore present an algorithm which can compute box packings with arbitrary placements and fixed orientations. In this algorithm we make use of approximate Minkowski sums, computed by uniting many axis-oriented equal boxes. We developed an algorithm which computes the union of equal axis-oriented boxes efficiently and maintains the Minkowski sums throughout the packing process. We also extend these algorithms to packing arbitrary objects in fixed orientations.
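A minimal branch-and-bound for maximum weighted independent set with a plain weight-sum upper bound is sketched below; the thesis's enumeration scheme uses the SAE J1100 restrictions for a much sharper bound, which is not reproduced here.

    # MWIS branch-and-bound sketch on a conflict graph.
    def mwis(vertices, weight, adj):
        """vertices: iterable of ids; weight: dict id -> float;
        adj: dict id -> set of conflicting ids (the conflict graph)."""
        best = [0.0, set()]

        def branch(chosen, rest, value):
            if value + sum(weight[v] for v in rest) <= best[0]:  # weight-sum bound
                return
            if not rest:
                best[0], best[1] = value, set(chosen)
                return
            v = max(rest, key=weight.get)       # branch on the heaviest free vertex
            branch(chosen | {v}, rest - adj[v] - {v}, value + weight[v])  # take v
            branch(chosen, rest - {v}, value)                             # skip v

        branch(set(), set(vertices), 0.0)
        return best

    # Tiny conflict graph: boxes 'a' and 'b' intersect, 'c' is free-standing.
    w = {'a': 2.0, 'b': 3.0, 'c': 1.5}
    adj = {'a': {'b'}, 'b': {'a'}, 'c': set()}
    print(mwis(list(w), w, adj))  # -> [4.5, {'b', 'c'}]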

Relevance:

80.00%

Publisher:

Abstract:

The subject of the presented thesis is the accurate measurement of time dilation, aiming at a quantitative test of special relativity. By means of laser spectroscopy, the relativistic Doppler shifts of a clock transition in the metastable triplet spectrum of ^7Li^+ are measured simultaneously with and against the direction of motion of the ions. By employing saturation or optical double-resonance spectroscopy, the Doppler broadening caused by the ions' velocity distribution is eliminated. From these shifts both the time dilation factor and the ion velocity can be extracted with high accuracy, allowing a test of the predictions of special relativity. A diode laser and a frequency-doubled titanium-sapphire laser were set up for antiparallel and parallel excitation of the ions, respectively. To achieve the robust control of the laser frequencies required for the beam times, a redundant system of frequency standards was developed, consisting of a rubidium spectrometer, an iodine spectrometer, and a frequency comb. At the experimental section of the ESR, an automated laser-beam guiding system for exact control of polarisation, beam profile, and overlap with the ion beam was built, together with a fluorescence detection system. During the first experiments, the production, acceleration and lifetime of the metastable ions at the GSI heavy-ion facility were investigated for the first time. The characterisation of the ion beam made it possible for the first time to measure its velocity directly via the Doppler effect, which resulted in a new, improved calibration of the electron cooler. In the following step the first sub-Doppler spectroscopy signals from an ion beam at 33.8% of the speed of light were recorded. The unprecedented accuracy of these experiments allowed a new upper bound to be derived for possible higher-order deviations from special relativity. Moreover, future measurements with the experimental setup developed in this thesis have the potential to improve the sensitivity to low-order deviations by at least one order of magnitude compared to previous experiments, and will thus further contribute to testing the standard model.
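For background (standard special relativity, not quoted from the thesis): in such two-beam, Ives-Stilwell-type experiments, the lab-frame laser frequencies resonant parallel and antiparallel to the ion motion satisfy

    \nu_p \, \nu_a = \nu_0^2, \qquad \nu_p = \frac{\nu_0}{\gamma (1 - \beta)}, \qquad \nu_a = \frac{\nu_0}{\gamma (1 + \beta)},

where \nu_0 is the rest-frame transition frequency, \beta = v/c and \gamma = (1 - \beta^2)^{-1/2}. Any measured deviation of \nu_p \nu_a / \nu_0^2 from unity therefore bounds departures from relativistic time dilation, independently of the exact ion velocity.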

Relevance:

80.00%

Publisher:

Abstract:

This dissertation studies the geometric static problem of under-constrained cable-driven parallel robots (CDPRs) supported by n cables, with n ≤ 6. The task consists of determining the overall robot configuration when a set of n variables is assigned. When variables relating to the platform posture are assigned, an inverse geometric static problem (IGP) must be solved; whereas, when cable lengths are given, a direct geometric static problem (DGP) must be considered. Both problems are challenging, as the robot continues to preserve some degrees of freedom even after n variables are assigned, with the final configuration determined by the applied forces. Hence, kinematics and statics are coupled and must be resolved simultaneously. In this dissertation, a general methodology is presented for modelling the aforementioned scenario with a set of algebraic equations. An elimination procedure is provided, aimed at solving the governing equations analytically and obtaining a least-degree univariate polynomial in the corresponding ideal for any value of n. Although an analytical procedure based on elimination is important from a mathematical point of view, providing an upper bound on the number of solutions in the complex field, it is not practical to compute these solutions as it would be very time-consuming. Thus, for the efficient computation of the solution set, a numerical procedure based on homotopy continuation is implemented. A continuation algorithm is also applied to find a set of robot parameters with the maximum number of real assembly modes for a given DGP. Finally, the end-effector pose depends on the applied load and may change due to external disturbances. An investigation into equilibrium stability is therefore performed.
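The numerical technique can be illustrated with a deliberately tiny predictor-corrector tracker: a homotopy H(x,t) = (1-t)·γ·G(x) + t·F(x) from a total-degree start system G to the target F, with Newton correction at each step. The system below is a toy example, not one of the dissertation's DGP/IGP systems.

    # Minimal homotopy continuation sketch (numpy only).
    import numpy as np

    def track(F, JF, G, JG, x, steps=200, newton_iters=5):
        """Track one root x of the start system G to a root of F."""
        g = np.exp(1j * 0.7)                    # random "gamma trick" for generic paths
        for k in range(1, steps + 1):
            t = k / steps
            for _ in range(newton_iters):       # Newton corrector at fixed t
                H = (1 - t) * g * G(x) + t * F(x)
                J = (1 - t) * g * JG(x) + t * JF(x)
                x = x - np.linalg.solve(J, H)
        return x

    # Target: x^2 + y^2 - 1 = 0, x - y = 0  (roots +-(1/sqrt(2), 1/sqrt(2))).
    F  = lambda v: np.array([v[0]**2 + v[1]**2 - 1, v[0] - v[1]], dtype=complex)
    JF = lambda v: np.array([[2*v[0], 2*v[1]], [1, -1]], dtype=complex)
    # Total-degree start system: x^2 - 1 = 0, y - 1 = 0.
    G  = lambda v: np.array([v[0]**2 - 1, v[1] - 1], dtype=complex)
    JG = lambda v: np.array([[2*v[0], 0], [0, 1]], dtype=complex)

    for start in ([1, 1], [-1, 1]):
        print(np.round(track(F, JF, G, JG, np.array(start, dtype=complex)), 6))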

Relevance:

80.00%

Publisher:

Abstract:

Gas separation membranes with high CO2 permeability and selectivity have great potential in both natural gas sweetening and carbon dioxide capture. Many modified PIM membranes show permselectivity above the Robeson upper bound. The major problem to be solved before these polymers can be commercialized is their aging over time. In highly glassy polymeric membranes such as PIM-1 and its modifications, solubility selectivity contributes more to permselectivity than diffusivity selectivity. In this thesis work, therefore, the pure- and mixed-gas sorption behaviour of carbon dioxide and methane in three PIM-based membranes (PIM-1, TZPIM-1 and AO-PIM-1) and in a polynonene membrane is rigorously studied. Sorption experiments were performed at different temperatures and molar fractions. The measured sorption isotherms show that solubility decreases as temperature increases, for both gases in all polymers. In the mixed-gas experiments there is also a decrease of solubility due to the presence of the other gas in the system, caused by the competitive sorption effect. The variation of solubility is more pronounced for methane than for carbon dioxide, which makes the mixed-gas solubility selectivity higher than the pure-gas solubility selectivity. Modelling of the system using the NELF and dual-mode sorption models reproduces the experimental results correctly. Sorption of gases in heat-treated and untreated membranes shows that the sorption isotherms do not vary with heat treatment for either carbon dioxide or methane. However, the diffusivity coefficient and permeability of pure gases decrease after heat treatment, and both decrease further as the heat-treatment temperature increases. Diffusivity coefficients calculated from transient sorption experiments and from steady-state permeability experiments are also compared in this thesis work: the results reveal that the transient diffusivity coefficient is higher than the steady-state one.
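A sketch of the dual-mode sorption model mentioned above, including the competitive form that produces the mixed-gas solubility depression the abstract describes; all parameter values are illustrative, not fitted values from the thesis.

    # Dual-mode sorption isotherms for a glassy polymer.
    def dual_mode(p, kD, CH, b):
        """Uptake at pressure p: Henry's-law term plus Langmuir term.
        kD: Henry coefficient; CH: Langmuir capacity; b: affinity."""
        return kD * p + CH * b * p / (1 + b * p)

    def dual_mode_mixed(pA, pB, kD_A, CH_A, bA, bB):
        """Species-A uptake when species B competes for the Langmuir sites."""
        return kD_A * pA + CH_A * bA * pA / (1 + bA * pA + bB * pB)

    print(dual_mode(10.0, kD=0.7, CH=30.0, b=0.3))           # pure-gas uptake
    print(dual_mode_mixed(10.0, 10.0, 0.7, 30.0, 0.3, 0.1))  # lower: competitive sorption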

Relevance:

80.00%

Publisher:

Abstract:

Sovereign ratings have only recently regained attention in the academic debate. This is somewhat surprising given that their influence is well known and that rating decisions have often been criticized in the past (for example during the Asian crisis of the 1990s). Sovereign ratings do not only assess the creditworthiness of governments: they are also included in the calculation of ratings for sub-sovereign issuers, whose rating is usually restricted to the upper bound of the sovereign rating (the sovereign ceiling). Earlier studies have also shown that the downgrade of a sovereign often leads to contagion effects in neighbouring countries. This study first focuses on misleading incentives in the rating industry, before chapter three summarizes the literature on the influence and determinants of sovereign ratings. The fourth chapter explores empirically how ratings respond to changes in sovereign debt across specific country groups. The fifth part focuses on individual rating decisions by four selected rating agencies and investigates whether the timing of decisions gives reason to suspect herding behavior. The final chapter presents a reform proposal for the future regulation of the rating industry in light of the aforementioned flaws.

Relevance:

80.00%

Publisher:

Abstract:

Random access (RA) protocols are normally used in satellite networks for initial terminal access and are particularly effective since no coordination is required. Contention resolution diversity slotted Aloha (CRDSA), irregular repetition slotted Aloha (IRSA) and coded slotted Aloha (CSA) have been shown to be more efficient than classic RA schemes such as slotted Aloha, and can also be exploited when short-packet transmissions take place over a shared medium. In particular, they rely on burst repetition and on successive interference cancellation (SIC) applied at the receiver. The SIC process can be well described using a bipartite-graph representation, exploiting tools used to analyse iterative decoding. The scope of this Master's thesis has been to describe the performance of such RA protocols when Rayleigh fading is taken into account. In this context, each user has the ability to decode a packet correctly even in the presence of a collision, and when SIC is considered this may result in multi-packet reception. The SIC procedure under Rayleigh fading has been analysed analytically in the asymptotic case (infinite frame length), supporting the analysis of both throughput and packet loss rate. An upper bound on the achievable performance has been obtained analytically. It can be shown that under particular channel conditions the throughput of the system can exceed one packet per slot, which is the theoretical limit in the collision channel case.
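The graph-based asymptotic analysis can be illustrated on the simpler collision channel (no fading): density evolution for regular IRSA in which every user transmits 3 copies per frame. This baseline is standard; the thesis's Rayleigh-fading extension, where collided packets can still be decoded, is not modelled here.

    # Asymptotic density evolution for degree-3 IRSA on the collision channel.
    import math

    def plr_irsa3(G, iters=1000):
        """Asymptotic packet loss rate at channel load G (packets/slot)."""
        q = 1.0                                  # P(edge unresolved, user side)
        p = 1.0
        for _ in range(iters):
            p = 1.0 - math.exp(-3.0 * G * q)     # slot side (Poisson slot degrees)
            q = p ** 2                           # user side: both other copies unresolved
        return p ** 3                            # all 3 copies of a burst lost

    for G in (0.5, 0.8, 0.9, 1.0):
        plr = plr_irsa3(G)
        print(f"G={G:.2f}  PLR~{plr:.3e}  throughput~{G * (1 - plr):.3f}")
    # A waterfall appears near the known degree-3 threshold G* ~ 0.82.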

Relevance:

80.00%

Publisher:

Abstract:

In 1983, M. van den Berg made his Fundamental Gap Conjecture about the difference between the first two Dirichlet eigenvalues (the fundamental gap) of any convex domain in the Euclidean plane. Recently, progress has been made in the case where the domains are polygons and, in particular, triangles. We examine the conjecture for triangles in hyperbolic geometry, though we seek an upper bound for the fundamental gap rather than a lower bound.
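For reference, the Euclidean statement (proved for convex domains in R^n by Andrews and Clutterbuck in 2011, and given here as background) is

    \lambda_2(\Omega) - \lambda_1(\Omega) \ge \frac{3\pi^2}{D^2}

for a convex domain \Omega of diameter D; the bound is sharp, being approached by thin domains collapsing onto a segment of length D, for which the gap is exactly 3\pi^2/D^2.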

Relevance:

80.00%

Publisher:

Abstract:

Intermediaries permeate modern economic exchange. Most classical models of intermediated exchange are driven by information asymmetry and inventory management. These two factors are of reduced significance in modern economies, which makes it necessary to develop models that correspond more closely to modern financial marketplaces. The goal of this dissertation is to propose and examine such models in a game-theoretical context. The proposed models are driven by asymmetries in the goals of different market participants. Hedging pressure, one of the most critical aspects of the behavior of commercial entities, plays a crucial role. The first market model shows that no equilibrium solution can exist in a market consisting of a commercial buyer, a commercial seller and a non-commercial intermediary. This indicates a clear economic need for non-commercial trading intermediaries: a direct trade from seller to buyer does not result in an equilibrium solution. The second market model has two distinct intermediaries between buyer and seller: a spread trader/market maker and a risk-neutral intermediary. In this model a unique, natural equilibrium solution is identified in which the supply-demand surplus is traded by the risk-neutral intermediary, whilst the market maker trades the remainder from seller to buyer. Since the market maker's payoff for trading at the identified equilibrium price is zero, this second model does not provide any motivation for the market maker to enter the market. The third market model introduces an explicit transaction fee that enables the market maker to secure a positive payoff. Under certain assumptions on this transaction fee the equilibrium solution of the previous model applies and now also provides a financial motivation for the market maker to enter the market. If the transaction fee violates an upper bound that depends on the supply, demand and risk aversion of buyer and seller, the market will be in disequilibrium.

Relevance:

80.00%

Publisher:

Abstract:

A range of societal issues has been caused by fossil fuel consumption in the transportation sector in the United States (U.S.), including health-related air pollution, climate change, dependence on imported oil, and other oil-related national security concerns. Biofuels produced from various lignocellulosic biomass types such as wood, forest residues, and agricultural residues have the potential to replace a substantial portion of total fossil fuel consumption. This research focuses on locating biofuel facilities and designing the biofuel supply chain to minimize the overall cost. For this purpose an integrated methodology was proposed, combining GIS technology with simulation and optimization modeling. As a precursor to simulation and optimization modeling, the GIS-based methodology was used to preselect potential locations for biofuel production from forest biomass. Candidate sites were selected using a set of evaluation criteria, including: county boundaries, the railroad transportation network, the state/federal road transportation network, water bodies (rivers, lakes, etc.), city and village dispersion, population census data, biomass production, and no co-location with co-fired power plants. The resulting candidate sites served as inputs for simulation and optimization modeling. The simulation and optimization models were built around key supply activities, including biomass harvesting/forwarding, transportation and storage. On-site storage was built for the spring breakup period, when road restrictions were in place and truck transportation on certain roads was limited. Both models were evaluated using multiple performance indicators, including cost (the delivered feedstock cost plus the inventory holding cost), energy consumption, and GHG emissions. The impacts of energy consumption and GHG emissions were expressed in monetary terms for consistency with cost. Compared with the optimization model, the simulation model provides a more dynamic look at a 20-year operation by considering the impacts of building inventory at the biorefinery to address the limited availability of biomass feedstock during the spring breakup period. The number of trucks required per day was estimated and the year-round inventory level was tracked. Through the exchange of information across the different procedures (harvesting, transportation, and biomass feedstock processing), a smooth flow of biomass from harvesting areas to a biofuel facility was implemented. The optimization model was developed to address issues related to locating multiple biofuel facilities simultaneously, with the size of each potential facility bounded above by 50 MGY and below by 30 MGY. The optimization model is a static, Mathematical Programming Language (MPL)-based application which allows for sensitivity analysis by changing inputs to evaluate different scenarios. It was found that annual biofuel demand and biomass availability affect the optimal biofuel facility locations and sizes.
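A toy version of such a facility-location model, written in PuLP rather than MPL, is sketched below; the site names, costs, supplies and demand are hypothetical, while the 30-50 MGY size window mirrors the bounds stated above.

    # Illustrative biofuel facility-location MILP (hypothetical data).
    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

    sites = ["S1", "S2", "S3"]        # GIS-preselected candidate sites (assumed)
    zones = ["Z1", "Z2"]              # biomass supply zones (assumed)
    supply = {"Z1": 40, "Z2": 35}     # available biomass, MGY-equivalent
    demand = 60                       # annual biofuel demand, MGY
    cost = {("Z1", "S1"): 12, ("Z1", "S2"): 15, ("Z1", "S3"): 11,
            ("Z2", "S1"): 14, ("Z2", "S2"): 10, ("Z2", "S3"): 16}

    m = LpProblem("biofuel_facility_location", LpMinimize)
    open_ = {s: LpVariable(f"open_{s}", cat=LpBinary) for s in sites}
    ship = {(z, s): LpVariable(f"ship_{z}_{s}", lowBound=0)
            for z in zones for s in sites}

    m += (lpSum(cost[z, s] * ship[z, s] for z in zones for s in sites)
          + lpSum(90 * open_[s] for s in sites))        # assumed fixed opening cost
    for z in zones:                                     # zone supply limits
        m += lpSum(ship[z, s] for s in sites) <= supply[z]
    for s in sites:                                     # 30-50 MGY window if open
        m += lpSum(ship[z, s] for z in zones) <= 50 * open_[s]
        m += lpSum(ship[z, s] for z in zones) >= 30 * open_[s]
    m += lpSum(ship[z, s] for z in zones for s in sites) >= demand

    m.solve()
    print({s: open_[s].value() for s in sites})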

Relevance:

80.00%

Publisher:

Abstract:

We consider the 2d XY Model with topological lattice actions, which are invariant against small deformations of the field configuration. These actions constrain the angle between neighbouring spins by an upper bound, or they explicitly suppress vortices (and anti-vortices). Although topological actions do not have a classical limit, they still lead to the universal behaviour of the Berezinskii-Kosterlitz-Thouless (BKT) phase transition — at least up to moderate vortex suppression. In the massive phase, the analytically known Step Scaling Function (SSF) is reproduced in numerical simulations. However, deviations from the expected universal behaviour of the lattice artifacts are observed. In the massless phase, the BKT value of the critical exponent ηc is confirmed. Hence, even though for some topological actions vortices cost zero energy, they still drive the standard BKT transition. In addition we identify a vortex-free transition point, which deviates from the BKT behaviour.
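A toy Metropolis sampler for one such topological action (the hard angle constraint) is sketched below: every configuration whose nearest-neighbour angle differences stay below delta has zero action, so a symmetric proposal is accepted if and only if the constraint is preserved. Lattice size, delta and the observable are illustrative choices, not the paper's simulation parameters.

    # 2d XY model with a topological angle-constraint action.
    import math, random

    L, delta, sweeps = 16, 2 * math.pi / 3, 100
    theta = [[0.0] * L for _ in range(L)]       # cold start satisfies the constraint

    def angle_diff(a, b):
        """Angle difference folded into (-pi, pi]."""
        return (a - b + math.pi) % (2 * math.pi) - math.pi

    def admissible(x, y, new):
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = theta[(x + dx) % L][(y + dy) % L]
            if abs(angle_diff(new, nb)) >= delta:
                return False
        return True

    for _ in range(sweeps):
        for _ in range(L * L):
            x, y = random.randrange(L), random.randrange(L)
            new = theta[x][y] + random.uniform(-0.5, 0.5)
            if admissible(x, y, new):           # zero-action move: accept
                theta[x][y] = new

    m = sum(math.cos(angle_diff(theta[x][y], theta[(x + 1) % L][y]))
            for x in range(L) for y in range(L)) / (L * L)
    print(f"<cos(dtheta)> ~ {m:.3f}")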