920 results for Heuristic Method of Decomposition
Abstract:
We offer an axiomatization of the serial cost-sharing method of Friedman and Moulin (1999). The key property in our axiom system is Group Demand Monotonicity, asking that when a group of agents raise their demands, not all of them should pay less.
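For orientation, serial cost sharing is easiest to see in the classic homogeneous-good setting of Moulin and Shenker, which the Friedman-Moulin (1999) method generalizes. The sketch below implements that classic rule only; the function name and the quadratic cost are illustrative, not the paper's construction.

```python
def serial_cost_shares(demands, cost):
    """Serial cost shares for a homogeneous good (Moulin-Shenker rule).

    With demands sorted q_1 <= ... <= q_n and s_k = q_1 + ... + q_{k-1}
    + (n - k + 1) q_k, the k-th smallest demander pays the sum over
    j <= k of [C(s_j) - C(s_{j-1})] divided by the number of agents
    still unserved at step j. Returns shares in the original order.
    """
    n = len(demands)
    order = sorted(range(n), key=lambda i: demands[i])
    q = [demands[i] for i in order]
    shares = [0.0] * n
    running = 0.0       # cumulative share of the k-th smallest demander
    s_prev = 0.0
    prefix = 0.0        # q_1 + ... + q_{k-1}
    for k in range(n):  # 0-indexed, so n - k agents remain at step k
        s_k = prefix + (n - k) * q[k]
        running += (cost(s_k) - cost(s_prev)) / (n - k)
        shares[order[k]] = running
        s_prev, prefix = s_k, prefix + q[k]
    return shares

# with C(t) = t^2 and demands (1, 2, 3): shares (3, 11, 22), summing to C(6) = 36
shares = serial_cost_shares([1, 2, 3], lambda t: t * t)
```

With a linear cost the rule charges each agent exactly the cost of their own demand, which is one way to see its fairness logic.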
Abstract:
With new optical network technologies, an ever larger amount of data can be carried on a single wavelength, up to 40 gigabits per second (Gbps). Individual data flows, by contrast, require far less bandwidth. Traffic grooming is a technique that makes efficient use of the bandwidth offered by a wavelength: it assembles several low-rate data flows into a single data entity that can be transported on one wavelength. Wavelength Division Multiplexing (WDM) allows several wavelengths to be carried on the same fibre. Combining the two techniques, WDM and traffic grooming, makes it possible to transport on the order of terabits per second (Tbps) on a single optical fibre. Protecting traffic in optical networks then becomes a vital operation, since a single failure can disrupt thousands of users and cause significant losses, up to several million dollars, for the network operator and its users. Protection consists in reserving extra capacity to carry the traffic when a failure occurs in the network. This thesis studies traffic grooming and protection techniques using p-cycles in optical networks under dynamic traffic. Most existing work considers static traffic, where the network state and the traffic are given at the outset and do not change. Moreover, most of this work uses heuristics or methods that struggle to solve large instances. Under dynamic traffic, two major difficulties are added to the problems studied, because the traffic in the network changes continually.
The first is that the solution proposed in the previous period, even if optimized, is no longer necessarily optimized or optimal for the current period, so the solution must be re-optimized. The second is that solving the problem for a given period differs from solving it for the initial period, because connections already in progress in the network must not be disturbed too much at each time period. Our study of traffic grooming under dynamic traffic proposes several scenarios for coping with this type of traffic, with the objective of maximizing the bandwidth of the connections accepted at each time period. Mathematical formulations of the scenarios considered for the grooming problem are proposed. Our work on the protection problem considers two types of p-cycles: link-protecting p-cycles (basic p-cycles) and FIPP p-cycles (path-protecting p-cycles). We first propose several scenarios for managing protection p-cycles under dynamic traffic, and then study the stability of p-cycles in that setting. Formulations of the various scenarios are proposed, and the solution methods used can handle larger problems than those reported in the literature. We rely on column generation to enumerate the most promising cycles implicitly. For FIPP p-cycles, we propose formulations for the master problem and the pricing problem, and use a hierarchical decomposition of the problem that yields better results within a reasonable time.
As for basic p-cycles, we studied the stability of FIPP p-cycles under dynamic traffic. The results show that, depending on the optimization criterion, both basic (link-protecting) p-cycles and FIPP (path-protecting) p-cycles can be very stable.
Abstract:
Vapour phase methylation of phenol is carried out over La2O3 supported vanadia systems of various compositions. The structural features and physico-chemical characteristics of the catalysts are investigated. Orthovanadates are formed in addition to surface vanadyl species on the metal oxide support. No V2O5 crystallites are detected. The acid-base properties of the oxides are studied by the Hammett indicator method and the decomposition of cyclohexanol. The data are correlated with the catalytic activity and selectivity of the products. Ring alkylation is found to be predominant over these catalysts.
Abstract:
The origin of magnetic coupling in KNiF3 and K2NiF4 is studied by means of an ab initio cluster model approach. By a detailed study of the mapping between eigenstates of the exact nonrelativistic and spin model Hamiltonians it is possible to obtain the magnetic coupling constant J and to compare ab initio cluster-model values with those resulting from ab initio periodic Hartree-Fock calculations. This comparison shows that J is strongly determined by two-body interactions; this is a surprising result. The importance of the ligands surrounding the basic metal-ligand-metal interacting unit is reexamined by using two different partitions and the constrained space orbital variation method of analysis. This decomposition enables us to show that this effect is basically environmental. Finally, dynamical electronic correlation effects have been found to be critical in determining the final value of the magnetic coupling constant.
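The extraction of J from a spin-model mapping can be illustrated with a toy exact diagonalization. The sketch below assumes the conventional Heisenberg form H = J S1·S2 for two spin-1 (Ni2+) centres; sign and normalization conventions for J vary between papers, and none of the ab initio machinery is reproduced here.

```python
import numpy as np

# spin-1 operators (hbar = 1): each Ni(2+) centre carries S = 1
sz = np.diag([1.0, 0.0, -1.0])
sp = np.sqrt(2.0) * np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
sx = (sp + sp.T) / 2.0
sy = (sp - sp.T) / 2.0j

def heisenberg_two_site(J):
    """H = J S1.S2 for two spin-1 sites (9x9); J > 0 is antiferromagnetic."""
    k = np.kron
    return (J * (k(sx, sx) + k(sy, sy) + k(sz, sz))).real

E = np.sort(np.linalg.eigvalsh(heisenberg_two_site(1.0)))
# Lande interval pattern E(S_tot) = J/2 [S_tot(S_tot+1) - 4]:
# singlet at -2J, triplet at -J, quintet at +J, so J = E(S=1) - E(S=0)
J_extracted = E[1] - E[0]
```

Matching the lowest exact (ab initio) eigenstates to this spectrum is what fixes J in the cluster-model approach.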
Abstract:
While channel coding is a standard method of improving a system's energy efficiency in digital communications, its practice does not extend to high-speed links. Increasing demands on network speeds place a large burden on the energy efficiency of high-speed links and make the benefit of channel coding for these systems a timely subject. The low error rates of interest and the presence of residual intersymbol interference (ISI) caused by hardware constraints impede the analysis and simulation of coded high-speed links. Focusing on the residual ISI and combined noise as the dominant error mechanisms, this paper analyses error correlation through the concepts of error region, channel signature, and correlation distance. This framework provides deeper insight into joint error behaviours in high-speed links, extends the range of statistical simulation for coded high-speed links, and provides a case against the use of biased Monte Carlo methods in this setting.
Abstract:
Optimum conditions for the preparation of tape-recording-quality γ-Fe2O3 by the thermal decomposition of ferrous oxalate dihydrate have been established. Formation of the intermediate Fe3O4, which is most important in forming γ-Fe2O3, takes place only in the presence of water vapour. The various stages of decomposition have been characterised by DTA, TG, DTG, and X-ray powder diffraction. A method for the preparation of acicular γ-Fe2O3 that matches very well with the commercial tape recording material has been developed.
Abstract:
Optimum conditions and experimental details for the formation of γ-Fe2O3 from goethite have been worked out. In another method, a cheap complexing medium of starch was employed for precipitating acicular ferrous oxalate, which on decomposition in nitrogen and subsequent oxidation yielded acicular γ-Fe2O3. On the basis of thermal decomposition in dry and moist nitrogen, DTA, XRD, GC and thermodynamic arguments, the mechanism of decomposition was elucidated. New materials obtained by doping γ-Fe2O3 with 1-16 atomic percent magnesium, cobalt, nickel and copper were synthesised and characterised.
Abstract:
Modeling nonlinear systems using Volterra series is a century-old method, but practical realizations were hampered by hardware inadequate to handle the increased computational complexity stemming from its use. Interest has recently been renewed in designing and implementing filters that can model much of the polynomial nonlinearity inherent in practical systems. The key advantage of resorting to the Volterra power series for this purpose is that nonlinear filters so designed can be made to work in parallel with existing LTI systems, yielding improved performance. This paper describes the inclusion of a quadratic predictor (with nonlinearity of order 2) alongside a linear predictor in an analog source coding system. Analog coding schemes generally ignore the source generation mechanism and focus on high-fidelity reconstruction at the receiver. The widely used method of differential pulse code modulation (DPCM) for speech transmission uses a linear predictor to estimate the next value of the input speech signal. But a linear system does not account for the inherent nonlinearities in speech signals arising from multiple reflections in the vocal tract. So a quadratic predictor is designed and implemented in parallel with the linear predictor to yield improved mean square error performance. The augmented speech coder is tested on speech signals transmitted over an additive white Gaussian noise (AWGN) channel.
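A minimal sketch of fitting an order-2 Volterra (quadratic) predictor alongside a linear one by least squares; the toy signal model and all coefficients below are invented for illustration and are not the paper's DPCM design.

```python
import numpy as np

def volterra_features(x, M):
    """Build regressors for an order-2 Volterra predictor with memory M.

    Row n holds the linear terms x[n-1..n-M] followed by all quadratic
    cross-products x[n-i]*x[n-j] (i <= j); the target is x[n].
    """
    rows, targets = [], []
    for n in range(M, len(x)):
        past = x[n - M:n][::-1]                      # x[n-1], ..., x[n-M]
        quad = [past[i] * past[j] for i in range(M) for j in range(i, M)]
        rows.append(np.concatenate([past, quad]))
        targets.append(x[n])
    return np.array(rows), np.array(targets)

# toy signal with a mild quadratic nonlinearity (invented, not from the paper)
rng = np.random.default_rng(0)
e = rng.normal(size=2000)
x = np.zeros(2000)
for n in range(2, 2000):
    x[n] = 0.6 * x[n-1] - 0.2 * x[n-2] + 0.15 * x[n-1] * x[n-2] + 0.1 * e[n]

M = 2
X, y = volterra_features(x, M)
w_lin, *_ = np.linalg.lstsq(X[:, :M], y, rcond=None)   # linear predictor only
w_vol, *_ = np.linalg.lstsq(X, y, rcond=None)          # linear + quadratic
mse_lin = float(np.mean((y - X[:, :M] @ w_lin) ** 2))
mse_vol = float(np.mean((y - X @ w_vol) ** 2))
```

Because the linear regressors are a subset of the full design, the joint fit can never have higher training MSE; the gap it opens reflects the quadratic structure the linear predictor cannot capture.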
Abstract:
The decomposition methods available when the data are fully observed are not valid when the variable of interest is censored. This may explain the scarcity of such exercises for duration variables, which are usually observed under censoring. This paper proposes an Oaxaca-Blinder-type method for decomposing differences in means in the context of censored data. The validity of the method rests on identifying and estimating the joint distribution of the duration variable and a set of covariates. In addition, a more general method is proposed that allows other functionals of interest, such as the median or the Gini coefficient, to be decomposed; it is based on specifying the conditional distribution function of the duration variable given a set of covariates. Monte Carlo experiments are carried out to assess the performance of these methods. Finally, the proposed methods are applied to analyse gender gaps in several features of unemployment duration in Spain, such as mean duration, the probability of long-term unemployment, and the Gini coefficient. The results indicate that factors other than observable characteristics, such as human capital or household structure, play a key role in explaining these gaps.
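For reference, the standard uncensored two-fold Oaxaca-Blinder decomposition of a mean gap, which the paper extends to censored durations, can be sketched as follows; variable names are illustrative and the censoring-aware estimator itself is not reproduced.

```python
import numpy as np

def oaxaca_blinder(XA, yA, XB, yB):
    """Two-fold Oaxaca-Blinder decomposition of the mean gap E[yA] - E[yB].

    Uses group B coefficients as the reference structure. Design matrices
    must include an intercept column; outcomes are assumed fully observed.
    Returns (explained, unexplained), which sum to mean(yA) - mean(yB).
    """
    bA, *_ = np.linalg.lstsq(XA, yA, rcond=None)
    bB, *_ = np.linalg.lstsq(XB, yB, rcond=None)
    xbarA, xbarB = XA.mean(axis=0), XB.mean(axis=0)
    explained = float((xbarA - xbarB) @ bB)    # differing characteristics
    unexplained = float(xbarA @ (bA - bB))     # differing coefficients
    return explained, unexplained
```

With an intercept in each regression, the two components add up to the raw mean gap exactly, which is the accounting identity the censored extension has to preserve.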
Abstract:
The time-of-detection method for aural avian point counts is a new method of estimating abundance, allowing for uncertain probability of detection. The method has been specifically designed to allow for variation in singing rates of birds. It involves dividing the time interval of the point count into several subintervals and recording the detection history of the subintervals when each bird sings. The method can be viewed as generating data equivalent to closed capture–recapture information. The method is different from the distance and multiple-observer methods in that it is not required that all the birds sing during the point count. As this method is new and there is some concern as to how well individual birds can be followed, we carried out a field test of the method using simulated known populations of singing birds, using a laptop computer to send signals to audio stations distributed around a point. The system mimics actual aural avian point counts, but also allows us to know the size and spatial distribution of the populations we are sampling. Fifty 8-min point counts (broken into four 2-min intervals) using eight species of birds were simulated. Singing rate of an individual bird of a species was simulated following a Markovian process (singing bouts followed by periods of silence), which we felt was more realistic than a truly random process. The main emphasis of our paper is to compare results from species singing at (high and low) homogeneous rates per interval with those singing at (high and low) heterogeneous rates. Population size was estimated accurately for the species simulated with a high homogeneous probability of singing. Populations of simulated species with lower but homogeneous singing probabilities were somewhat underestimated. Populations of species simulated with heterogeneous singing probabilities were substantially underestimated.
Underestimation was caused by both the very low detection probabilities of all distant individuals and by individuals with low singing rates also having very low detection probabilities.
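The underestimation mechanism can be illustrated with a toy closed-population estimator. The sketch below uses the simple model M0 (one detection probability shared by all birds and intervals), which is not the estimator of the paper; all population sizes and singing probabilities are invented.

```python
import math
import numpy as np

def m0_estimate(histories, k):
    """MLE of N under closed-population model M0: every bird has the same
    per-interval detection probability p, profiled out as p = d / (N k)."""
    n = len(histories)                            # distinct birds ever detected
    d = int(sum(int(sum(h)) for h in histories))  # total detections
    best_N, best_ll = n, -math.inf
    for N in range(n, 20 * n):
        p = d / (N * k)
        if not 0.0 < p < 1.0:
            continue
        ll = (math.lgamma(N + 1) - math.lgamma(N - n + 1)
              + d * math.log(p) + (N * k - d) * math.log(1.0 - p))
        if ll > best_ll:
            best_N, best_ll = N, ll
    return best_N

def simulate(probs, k, rng):
    """Bernoulli detection histories; never-detected birds are unobserved."""
    hists = rng.random((len(probs), k)) < np.asarray(probs)[:, None]
    return [h.astype(int) for h in hists if h.any()]

rng = np.random.default_rng(42)
k, N_true = 4, 100
N_hom = m0_estimate(simulate([0.5] * N_true, k, rng), k)
N_het = m0_estimate(simulate([0.8] * 50 + [0.1] * 50, k, rng), k)
```

With a shared p = 0.5 the estimate should land near the true 100 birds; mixing p = 0.8 and p = 0.1 birds pulls it well below, mirroring the heterogeneity bias reported above.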
Abstract:
We have investigated the adsorption and thermal decomposition of copper hexafluoroacetylacetonate (Cu(II)(hfac)2) on single-crystal rutile TiO2(110). Low energy electron diffraction shows that the room-temperature saturation coverage of the Cu(II)(hfac)2 adsorbate forms an ordered (2 x 1) overlayer. X-ray and ultraviolet photoemission spectra of the saturated surface were recorded as the sample was annealed in a sequential manner to reveal decomposition pathways. The results show that the molecule dissociatively adsorbs by detachment of one of the two ligands to form hfac and Cu(I)(hfac), which chemisorb to the substrate at 298 K. These ligands only begin to decompose once the surface temperature exceeds 473 K, where Cu core level shifts indicate metallisation. This reduction from Cu(I) to Cu(0) takes place in the absence of an external reducing agent and without disproportionation, and is accompanied by the onset of decomposition of the hfac ligands. Finally, C K-edge near edge X-ray absorption fine structure experiments indicate that both ligands adsorb aligned in the <001> direction, and we propose a model in which the hfac ligands adsorb on the 5-fold coordinated Ti atoms and the Cu(I)(hfac) moiety attaches to the bridging O atoms in a square planar geometry. The calculated tilt angle for these combined geometries is approximately 10 degrees to the surface normal.
Abstract:
A traditional method of validating the performance of a flood model, when remotely sensed data of the flood extent are available, is to compare the predicted flood extent to that observed. The performance measure employed often uses areal pattern-matching to assess the degree to which the two extents overlap. Recently, remote sensing of flood extents using synthetic aperture radar (SAR) and airborne scanning laser altimetry (LIDAR) has made the synoptic measurement of water surface elevations along flood waterlines more straightforward, and this has emphasised the possibility of using alternative performance measures based on height. This paper considers the advantages that can accrue from using a performance measure based on waterline elevations rather than one based on areal patterns of wet and dry pixels. The two measures were compared for their ability to estimate flood inundation uncertainty maps from a set of model runs carried out to span the acceptable model parameter range in a GLUE-based analysis. A 1-in-5-year flood on the Thames in 1992 was used as a test event. As is typical for UK floods, only a single SAR image of observed flood extent was available for model calibration and validation. A simple implementation of a two-dimensional flood model (LISFLOOD-FP) was used to generate model flood extents for comparison with that observed. The performance measure based on height differences of corresponding points along the observed and modelled waterlines was found to be significantly more sensitive to the channel friction parameter than the measure based on areal patterns of flood extent. The former was able to restrict the parameter range of acceptable model runs and hence reduce the number of runs necessary to generate an inundation uncertainty map. As a result, there was less uncertainty in the final flood risk map. The uncertainty analysis included the effects of uncertainties in the observed flood extent as well as in model parameters.
The height-based measure was found to be more sensitive when increased heighting accuracy was achieved by requiring that observed waterline heights varied slowly along the reach. The technique allows for the decomposition of the reach into sections, with different effective channel friction parameters used in different sections, which in this case resulted in lower r.m.s. height differences between observed and modelled waterlines than those achieved by runs using a single friction parameter for the whole reach. However, a validation of the modelled inundation uncertainty using the calibration event showed a significant difference between the uncertainty map and the observed flood extent. While this was true for both measures, the difference was especially significant for the height-based one. This is likely to be due to the conceptually simple flood inundation model and the coarse application resolution employed in this case. The increased sensitivity of the height-based measure may lead to an increased onus being placed on the model developer in the production of a valid model.
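The two performance measures can be sketched generically. The F-style areal overlap below is one common form of wet/dry pattern-matching and the height measure is a plain r.m.s. difference; both are sketches of the idea, not the paper's exact implementation, and the arrays are invented.

```python
import numpy as np

def areal_fit(obs_wet, mod_wet):
    """Areal overlap F = |obs AND mod| / |obs OR mod| on boolean wet/dry
    grids; 1 means the observed and modelled extents coincide exactly."""
    inter = np.logical_and(obs_wet, mod_wet).sum()
    union = np.logical_or(obs_wet, mod_wet).sum()
    return float(inter) / float(union)

def waterline_rmse(obs_z, mod_z):
    """RMS difference of water surface elevations at corresponding points
    along the observed and modelled waterlines (same units as obs_z)."""
    obs_z = np.asarray(obs_z, dtype=float)
    mod_z = np.asarray(mod_z, dtype=float)
    return float(np.sqrt(np.mean((obs_z - mod_z) ** 2)))
```

The areal measure only counts matching pixels, while the height measure penalises elevation error even where the extents agree, which is one way to see why it can discriminate more sharply between friction parameters.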
Abstract:
The absorption cross-sections of Cl2O6 and Cl2O4 have been obtained using a fast flow reactor with a diode array spectrometer (DAS) detection system. The absorption cross-sections at the wavelengths of maximum absorption (λmax) determined in this study are: Cl2O6, (1.47 ± 0.15) × 10^-17 cm^2 molecule^-1 at λmax = 276 nm and T = 298 K; and Cl2O4, (9.0 ± 2.0) × 10^-19 cm^2 molecule^-1 at λmax = 234 nm and T = 298 K. Errors quoted are two standard deviations together with estimates of the systematic error. The shapes of the absorption spectra were obtained over the wavelength range 200-450 nm for Cl2O6 and 200-350 nm for Cl2O4; they were normalized to the absolute cross-sections obtained at λmax for each oxide and are presented at 1 nm intervals. These data are discussed in relation to previous measurements. The reaction of O with OClO has been investigated with the objective of observing transient spectroscopic absorptions. A transient absorption was seen, and the possibility is explored of identifying the species with the elusive sym-ClO3 or ClO4, both of which have been characterized in matrices, but not in the gas phase. The photolysis of OClO was also re-examined, with emphasis being placed on the products of reaction. UV absorptions attributable to one of the isomers of the ClO dimer, chloryl chloride (ClClO2), were observed; some Cl2O4 was also found at long photolysis times, when much of the ClClO2 had itself been photolysed. We suggest that reports of Cl2O6 formation in previous studies could be a consequence of a mistaken identification. At low temperatures, the photolysis of OClO leads to the formation of Cl2O3 as a result of the addition of the ClO primary product to OClO. ClClO2 also appears to be one product of the reaction between O3 and OClO, especially when the reaction occurs under explosive conditions.
We studied the kinetics of the non-explosive process using a stopped-flow technique, and suggest a value for the room-temperature rate coefficient of (4.6 ± 0.9) × 10^-19 cm^3 molecule^-1 s^-1 (limit quoted is 2σ random errors). The photochemical and thermal decomposition of Cl2O6 is described in this paper. For photolysis at λ = 254 nm, the removal of Cl2O6 is not accompanied by the build-up of any other strong absorber. The implications of the results are either that the photolysis of Cl2O6 produces Cl2 directly, or that the initial photofragments are converted rapidly to Cl2. In the thermal decomposition of Cl2O6, Cl2O4 was shown to be a product of reaction, although not necessarily the major one. The kinetics of decomposition were investigated using the stopped-flow technique. At relatively high [OClO] present in the system, the decay kinetics obeyed a first-order law, with a limiting first-order rate coefficient of 0.002 s^-1.
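A limiting first-order rate coefficient like the one quoted above is the kind of quantity obtained by fitting ln[Cl2O6] against time in stopped-flow decay traces. A generic sketch on synthetic data (the concentrations are invented, not the paper's measurements):

```python
import numpy as np

def first_order_rate(t, conc):
    """First-order rate coefficient k from a concentration-time series,
    via a least-squares fit of ln(c) = ln(c0) - k t."""
    t = np.asarray(t, dtype=float)
    logc = np.log(np.asarray(conc, dtype=float))
    slope, _ = np.polyfit(t, logc, 1)
    return -slope

# synthetic decay with k = 0.002 s^-1, matching the limiting value above;
# the initial concentration scale is invented for illustration
t = np.linspace(0.0, 1000.0, 50)
c = 1.0e14 * np.exp(-0.002 * t)
```

On noiseless data the fit recovers k exactly; with real traces the linearity of ln(c) versus t is itself the check that the decay is first order.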
Abstract:
A finite-difference scheme based on flux difference splitting is presented for the solution of the two-dimensional shallow-water equations of ideal fluid flow. A linearised problem, analogous to that of Riemann for gas dynamics, is defined and a scheme, based on numerical characteristic decomposition, is presented for obtaining approximate solutions to the linearised problem. The method of upwind differencing is used for the resulting scalar problems, together with a flux limiter for obtaining a second-order scheme which avoids non-physical, spurious oscillations. An extension to the two-dimensional equations with source terms is included. The scheme is applied to a dam-break problem with cylindrical symmetry.
Abstract:
Solutions of a two-dimensional dam-break problem are presented for two tailwater/reservoir height ratios. The numerical scheme used is an extension of one previously given by the author [J. Hyd. Res. 26(3), 293–306 (1988)], and is based on numerical characteristic decomposition. Thus approximate solutions are obtained via linearised problems, and the method of upwind differencing is used for the resulting scalar problems, together with a flux limiter for obtaining a second-order scheme which avoids non-physical, spurious oscillations.
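The combination of upwind differencing with a flux limiter is easiest to see on the simplest hyperbolic problem, scalar linear advection, to which characteristic decomposition reduces each field. The sketch below is a generic minmod-limited second-order upwind (MUSCL-type) scheme, not the author's shallow-water code.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: picks the smaller of two slopes, zero at extrema."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect(u, c, steps):
    """Second-order upwind advection for u_t + a u_x = 0 (a > 0, periodic).

    c = a*dt/dx is the Courant number, 0 < c <= 1. Limiting the face
    values suppresses the spurious oscillations an unlimited
    second-order scheme would produce at discontinuities.
    """
    for _ in range(steps):
        du_m = u - np.roll(u, 1)              # backward differences
        du_p = np.roll(u, -1) - u             # forward differences
        slope = minmod(du_m, du_p)
        uface = u + 0.5 * (1.0 - c) * slope   # limited left state at each face
        flux = c * uface
        u = u - (flux - np.roll(flux, 1))     # conservative update
    return u
```

For 0 < c <= 1 this limited scheme satisfies a maximum principle, so a square wave is transported without the over- and undershoots that motivate the flux limiter in the shallow-water schemes above.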