966 results for Maximum entropy methods
Abstract:
New network technologies allow us to transport ever larger volumes of information and network traffic with different priority levels. In this scenario, where a better quality of service is offered, the consequences of a link or node failure become more significant. Multiprotocol Label Switching (MPLS), together with its extension to Generalized MPLS (GMPLS), provides fast failure recovery mechanisms by establishing redundant Label Switched Paths (LSPs) to be used as alternative paths; in case of failure, traffic can be redirected onto them. The main objective of this thesis has been to improve some of the current MPLS/GMPLS failure recovery mechanisms so as to support the protection requirements of the services provided by the new Internet. For this evaluation, several protection-quality parameters have been considered, such as failure recovery time, packet loss and resource consumption. This thesis presents a complete review and comparison of the main MPLS-based failure recovery methods. The analysis covers path protection methods (global backups, reverse backups and 1+1 protection), local protection methods and segment protection methods. The extension of these mechanisms to optical networks through the control plane provided by GMPLS has also been considered. In a first phase of this work, each failure recovery method is analysed without considering resource or topology constraints. This analysis gives a first ranking of the best protection mechanisms in terms of packet loss and recovery time, but it is not applicable to real networks. To take that scenario into account, in a second phase routing algorithms are analysed under these network limitations and constraints. Some of the main quality-of-service routing algorithms are presented, together with some of the main routing proposals for MPLS networks. Most current routing algorithms either do not consider the establishment of alternative routes or use the same objectives to select the working and the protection paths. To improve the protection level we introduce and formalize two new concepts: the network Failure Probability and the Failure Impact. An analysis of the network at the physical level provides a first element for evaluating the protection level in terms of network reliability and availability. We formalize the impact of a failure as the degradation of the quality of service (in terms of delay and packet loss), and we explain our proposal for reducing both the failure probability and the failure impact. Finally, we give a new definition and classification of network services according to the required failure probability and impact values. One result we highlight is that global path protection mechanisms maximize network reliability, whereas local or segment protection techniques minimize the failure impact. Minimum impact and maximum reliability can therefore be achieved by applying local protection to the whole network, but this is not a scalable proposal in terms of resource consumption.
We propose an intermediate mechanism, applying segment protection combined with our failure probability evaluation model. In summary, this thesis presents several mechanisms for analysing the protection level of a network. The results of the proposed models and mechanisms improve network reliability and minimize the impact of a failure.
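To make the two proposed measures concrete, here is a minimal illustrative sketch, not the thesis' actual formulation: it treats the failure probability of an LSP as the complement of the product of its per-link survival probabilities (assuming independent link failures) and approximates the failure impact as the traffic lost during the recovery interval. All link probabilities and rates below are hypothetical.

```python
from math import prod

def lsp_failure_probability(link_failure_probs):
    """Probability that at least one link of the LSP fails,
    assuming independent link failures (simplifying assumption)."""
    return 1.0 - prod(1.0 - p for p in link_failure_probs)

def failure_impact(recovery_time_s, traffic_rate_pps, extra_delay_s=0.0):
    """Rough impact estimate: packets lost while traffic is redirected
    onto the backup LSP, plus any added delay on the alternative path."""
    return {"lost_packets": recovery_time_s * traffic_rate_pps,
            "extra_delay_s": extra_delay_s}

# Hypothetical working LSP over three links, protected by a segment backup
print(lsp_failure_probability([1e-3, 5e-4, 2e-3]))                      # ~0.0035
print(failure_impact(recovery_time_s=0.050, traffic_rate_pps=10_000))   # 500 packets
```

Under this simplified view, global path protection lowers the failure probability seen by a service, while local or segment protection shortens the recovery interval and therefore the impact, which is the trade-off summarised above.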
Abstract:
A study was conducted to estimate variation among laboratories and between manual and automated techniques of measuring pressure on the resulting gas production profiles (GPP). Eight feeds (molassed sugarbeet feed, grass silage, maize silage, soyabean hulls, maize gluten feed, whole crop wheat silage, wheat, glucose) were milled to pass a 1 mm screen and sent to three laboratories (ADAS Nutritional Sciences Research Unit, UK; Institute of Grassland and Environmental Research (IGER), UK; Wageningen University, The Netherlands). Each laboratory measured GPP over 144 h using standardised procedures with manual pressure transducers (MPT) and automated pressure systems (APS). The APS at ADAS used a pressure transducer and bottles in a shaking water bath, while the APS at Wageningen and IGER used a pressure sensor and bottles held in a stationary rack. Apparent dry matter degradability (ADDM) was estimated at the end of the incubation. GPP were fitted to a modified Michaelis-Menten model assuming a single phase of gas production, and GPP were described in terms of the asymptotic volume of gas produced (A), the time to half A (B), the time of maximum gas production rate (t_RM gas) and the maximum gas production rate (R_M gas). There were effects (P<0.001) of substrate on all parameters. However, MPT produced more (P<0.001) gas, but with longer (P<0.001) B and t_RM gas (P<0.05) and lower (P<0.001) R_M gas compared to APS. There was no difference between apparatus in ADDM estimates. Interactions occurred between substrate and apparatus, substrate and laboratory, and laboratory and apparatus. However, when mean values for MPT were regressed from the individual laboratories, relationships were good (i.e., adjusted R² = 0.827 or higher). Good relationships were also observed with APS, although they were weaker than for MPT (i.e., adjusted R² = 0.723 or higher). The relationships between mean MPT and mean APS data were also good (i.e., adjusted R² = 0.844 or higher). Data suggest that, although laboratory and method of measuring pressure are sources of variation in GPP estimation, it should be possible using appropriate mathematical models to standardise data among laboratories so that data from one laboratory could be extrapolated to others. This would allow development of a database of GPP data from many diverse feeds. (c) 2005 Published by Elsevier B.V.
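The curve-fitting step can be sketched with a generalized Michaelis-Menten form in which B is, by construction, the time at which half of the asymptote A has been produced; the exact parameterization used in the study may differ, and the data points below are invented for illustration. t_RM gas and R_M gas are derived numerically from the fitted curve.

```python
import numpy as np
from scipy.optimize import curve_fit

def gpp_model(t, A, B, C):
    """Generalized Michaelis-Menten-type gas production curve:
    A = asymptotic gas volume, B = time to half of A, C = shape parameter."""
    return A / (1.0 + (B / t) ** C)

# Hypothetical cumulative gas readings (ml/g DM) over a 144-h incubation
t_obs = np.array([2, 4, 8, 12, 24, 36, 48, 72, 96, 120, 144], dtype=float)
g_obs = np.array([8, 18, 40, 62, 120, 155, 175, 198, 208, 213, 215], dtype=float)

(A, B, C), _ = curve_fit(gpp_model, t_obs, g_obs, p0=[220.0, 20.0, 1.5])

# Time and value of the maximum production rate, obtained numerically
t_grid = np.linspace(0.5, 144.0, 2000)
rate = np.gradient(gpp_model(t_grid, A, B, C), t_grid)
t_rm, r_m = t_grid[rate.argmax()], rate.max()
print(f"A={A:.1f} ml, B={B:.1f} h, t_RM={t_rm:.1f} h, R_M={r_m:.2f} ml/h")
```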
Abstract:
Physical, cultural and biological methods for weed control have developed largely independently and are often concerned with weed control in different systems: physical and cultural control in annual crops and biocontrol in extensive grasslands. We discuss the strengths and limitations of four physical and cultural methods for weed control: mechanical, thermal, cutting, and intercropping, and the advantages and disadvantages of combining biological control with them. These physical and cultural control methods may increase soil nitrogen levels and alter microclimate at soil level; this may be of benefit to biocontrol agents, although physical disturbance to the soil and plant damage may be detrimental. Some weeds escape control by these methods; we suggest that these weeds may be controlled by biocontrol agents. It will be easiest to combine biological control with fire and cutting in grasslands; within arable systems it would be most promising to combine biological control (especially using seed predators and foliar pathogens) with cover-cropping, and mechanical weeding combined with foliar bacterial and possibly foliar fungal pathogens. We stress the need to consider the timing of application of combined control methods in order to cause the least damage to the biocontrol agent and the maximum damage to the weed, and to consider the wider implications of these different weed control methods.
Abstract:
Dual Carrier Modulation (DCM) was chosen as the higher data rate modulation scheme for MB-OFDM (Multiband Orthogonal Frequency Division Multiplexing) in the UWB (Ultra-Wide Band) radio platform ECMA-368. ECMA-368 has been chosen as the physical implementation for high data rate Wireless USB (W-USB) and Bluetooth 3.0. In this paper, different demapping methods for the DCM demapper are presented: Soft Bit, Maximum Likelihood (ML) Soft Bit and Log Likelihood Ratio (LLR). Frequency diversity and Channel State Information (CSI) are further techniques used to enhance these demapping methods. The system performance of these DCM demapping methods, simulated in realistic multi-path environments, is provided and compared.
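The soft demapping step itself can be sketched generically. ECMA-368's DCM maps groups of bits jointly onto two carriers, which the sketch below does not reproduce; it only shows exact per-bit LLR computation for an arbitrary labelled constellation (here Gray-labelled QPSK) under Gaussian noise, the building block that the Soft Bit, ML Soft Bit and LLR demappers refine in different ways.

```python
import numpy as np
from scipy.special import logsumexp

def llr_demap(y, symbols, labels, noise_var):
    """Exact per-bit LLRs, log P(bit=0 | y) - log P(bit=1 | y), for received
    complex samples y over a labelled constellation, assuming AWGN."""
    metric = -np.abs(y[:, None] - symbols[None, :]) ** 2 / noise_var
    llrs = []
    for b in range(labels.shape[1]):
        llr0 = logsumexp(metric[:, labels[:, b] == 0], axis=1)
        llr1 = logsumexp(metric[:, labels[:, b] == 1], axis=1)
        llrs.append(llr0 - llr1)
    return np.stack(llrs, axis=1)

# Gray-labelled QPSK with two hypothetical received samples
symbols = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
labels = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])
rx = np.array([0.9 + 0.6j, -0.8 - 0.7j])
print(llr_demap(rx, symbols, labels, noise_var=0.2))
```

CSI weighting and frequency diversity would enter this picture by scaling the per-carrier metrics with channel estimates and by combining the LLRs of the two carriers that carry the same data.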
Abstract:
This letter argues that the current controversy about whether W_buoyancy, the power input due to the surface buoyancy fluxes, is large or small in the oceans stems from two distinct and incompatible views on how W_buoyancy relates to the volume-integrated work of expansion/contraction B. The current prevailing view is that W_buoyancy should be identified with the net value of B, which current theories estimate to be small. The alternative view, defended here, is that only the positive part of B, i.e., the one converting internal energy into mechanical energy, should enter the definition of W_buoyancy, since the negative part of B is associated with the non-viscous dissipation of mechanical energy. Two indirect methods suggest that, by contrast, the positive part of B is potentially large.
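Schematically, and with notation assumed here rather than taken from the letter, the disagreement amounts to which part of B is counted:

```latex
B = B^{+} + B^{-}, \qquad
B^{+} = \int_{V} \max(\dot{w},\,0)\,\mathrm{d}V \;\ge\; 0, \qquad
B^{-} = \int_{V} \min(\dot{w},\,0)\,\mathrm{d}V \;\le\; 0,
```

where \(\dot{w}\) is the local rate of expansion/contraction work per unit volume. The prevailing view identifies W_buoyancy with the net value B; the view defended in the letter identifies it with B^{+} only, treating B^{-} as the non-viscous dissipation of mechanical energy.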
Abstract:
In financial decision-making processes, the adopted weights of the objective functions have significant impacts on the final decision outcome. However, conventional rating and weighting methods exhibit difficulty in deriving appropriate weights for complex decision-making problems with imprecise information. Entropy is a quantitative measure of uncertainty and has been useful in exploring the weights of attributes in decision making. A fuzzy and entropy-based mathematical approach is employed to solve the weighting problem of the objective functions in an overall cash-flow model. A multi-project portfolio being undertaken by a medium-size construction firm in Hong Kong was used as a real case study to demonstrate the application of entropy to multi-project cash-flow situations. The results indicate that the overall before-tax profit was HK$0.11 million lower after the introduction of appropriate weights. In addition, the best time to invest in new projects arising from positive cash flow was identified to be two working months earlier than under the non-weighted system.
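The entropy-weighting idea itself is standard and can be sketched as follows; this is the classical Shannon entropy weight method applied to an invented score matrix, not the paper's fuzzy formulation or its cash-flow data.

```python
import numpy as np

# Hypothetical scores of four objective functions (columns) over five
# project scenarios (rows); values are illustrative only.
X = np.array([
    [0.60, 0.35, 0.80, 0.20],
    [0.55, 0.40, 0.75, 0.30],
    [0.70, 0.30, 0.85, 0.25],
    [0.65, 0.45, 0.70, 0.35],
    [0.50, 0.38, 0.90, 0.28],
])

P = X / X.sum(axis=0)                  # each column becomes a probability distribution
k = 1.0 / np.log(X.shape[0])           # scaling so that entropy lies in [0, 1]
E = -k * (P * np.log(P)).sum(axis=0)   # Shannon entropy of each objective
w = (1.0 - E) / (1.0 - E).sum()        # higher information content -> larger weight
print(np.round(w, 3))
```

Objectives whose scores vary little across scenarios have entropy close to 1 and receive small weights, which is the mechanism exploited to derive weights from imprecise data.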
Abstract:
Airborne high-resolution in situ measurements of a large set of trace gases including ozone (O3) and total water (H2O) in the upper troposphere and the lowermost stratosphere (UT/LMS) have been performed above Europe within the SPURT project. SPURT provides extensive data coverage of the UT/LMS in each season within the time period between November 2001 and July 2003. In the LMS a distinct spring maximum and autumn minimum is observed in O3, whereas its annual cycle in the UT is shifted 2–3 months later, towards the end of the year. The more variable H2O measurements reveal a maximum during summer and a minimum during autumn/winter with no phase shift between the two atmospheric compartments. For a comprehensive insight into trace gas composition and variability in the UT/LMS, several statistical methods are applied using chemical, thermal and dynamical vertical coordinates. In particular, 2-dimensional probability distribution functions serve as a tool to transform localised aircraft data to a more comprehensive view of the probed atmospheric region. It appears that both trace gases, O3 and H2O, reveal the most compact arrangement and are best correlated in the view of potential vorticity (PV) and distance to the local tropopause, indicating an advanced mixing state on these surfaces. Thus, strong gradients of PV seem to act as a transport barrier both in the vertical and the horizontal direction. The alignment of trace gas isopleths reflects the existence of a year-round extra-tropical tropopause transition layer. The SPURT measurements reveal that this layer is mainly affected by stratospheric air during winter/spring and by tropospheric air during autumn/summer. Normalised mixing entropy values for O3 and H2O in the LMS appear to be maximal during spring and summer, respectively, indicating highest variability of these trace gases during the respective seasons.
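As an illustration of the kind of statistics mentioned, and not of the SPURT analysis itself, the sketch below builds a 2-dimensional probability distribution of a tracer in tropopause-relative coordinates from synthetic data and computes a normalised Shannon entropy of that distribution; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "flight data": distance to the local tropopause (km) and O3 (ppbv)
dz = rng.normal(0.0, 1.5, 5000)
o3 = 80.0 + 60.0 * np.tanh(dz) + rng.normal(0.0, 10.0, 5000)

# 2-D probability distribution function in tracer/tropopause-relative coordinates
H, xedges, yedges = np.histogram2d(dz, o3, bins=30, density=True)
cell_area = np.diff(xedges)[:, None] * np.diff(yedges)[None, :]
p = (H * cell_area).ravel()            # probability mass per bin, sums to 1
p = p[p > 0]

# Normalised Shannon entropy: 0 for a fully ordered (single-bin) distribution,
# 1 for a distribution spread uniformly over all bins
S = -(p * np.log(p)).sum() / np.log(H.size)
print(round(S, 3))
```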
Abstract:
BIOME 6000 is an international project to map vegetation globally at mid-Holocene (6000 14C yr bp) and last glacial maximum (LGM, 18,000 14C yr bp), with a view to evaluating coupled climate-biosphere model results. Primary palaeoecological data are assigned to biomes using an explicit algorithm based on plant functional types. This paper introduces the second Special Feature on BIOME 6000. Site-based global biome maps are shown with data from North America, Eurasia (except South and Southeast Asia) and Africa at both time periods. A map based on surface samples shows the method’s skill in reconstructing present-day biomes. Cold and dry conditions at LGM favoured extensive tundra and steppe. These biomes intergraded in northern Eurasia. Northern hemisphere forest biomes were displaced southward. Boreal evergreen forests (taiga) and temperate deciduous forests were fragmented, while European and East Asian steppes were greatly extended. Tropical moist forests (i.e. tropical rain forest and tropical seasonal forest) in Africa were reduced. In south-western North America, desert and steppe were replaced by open conifer woodland, opposite to the general arid trend but consistent with modelled southward displacement of the jet stream. The Arctic forest limit was shifted slightly north at 6000 14C yr bp in some sectors, but not in all. Northern temperate forest zones were generally shifted greater distances north. Warmer winters as well as summers in several regions are required to explain these shifts. Temperate deciduous forests in Europe were greatly extended, into the Mediterranean region as well as to the north. Steppe encroached on forest biomes in interior North America, but not in central Asia. Enhanced monsoons extended forest biomes in China inland and Sahelian vegetation into the Sahara while the African tropical rain forest was also reduced, consistent with a modelled northward shift of the ITCZ and a more seasonal climate in the equatorial zone. Palaeobiome maps show the outcome of separate, independent migrations of plant taxa in response to climate change. The average composition of biomes at LGM was often markedly different from today. Refugia for the temperate deciduous and tropical rain forest biomes may have existed offshore at LGM, but their characteristic taxa also persisted as components of other biomes. Examples include temperate deciduous trees that survived in cool mixed forest in eastern Europe, and tropical evergreen trees that survived in tropical seasonal forest in Africa. The sequence of biome shifts during a glacial-interglacial cycle may help account for some disjunct distributions of plant taxa. For example, the now-arid Saharan mountains may have linked Mediterranean and African tropical montane floras during enhanced monsoon regimes. Major changes in physical land-surface conditions, shown by the palaeobiome data, have implications for the global climate. The data can be used directly to evaluate the output of coupled atmosphere-biosphere models. The data could also be objectively generalized to yield realistic gridded land-surface maps, for use in sensitivity experiments with atmospheric models. Recent analyses of vegetation-climate feedbacks have focused on the hypothesized positive feedback effects of climate-induced vegetation changes in the Sahara/Sahel region and the Arctic during the mid-Holocene. However, a far wider spectrum of interactions potentially exists and could be investigated, using these data, both for 6000 14C yr bp and for the LGM.
Abstract:
Palaeodata in synthesis form are needed as benchmarks for the Palaeoclimate Modelling Intercomparison Project (PMIP). Advances since the last synthesis of terrestrial palaeodata from the last glacial maximum (LGM) call for a new evaluation, especially of data from the tropics. Here pollen, plant-macrofossil, lake-level, noble gas (from groundwater) and δ18O (from speleothems) data are compiled for 18±2 ka (14C), 32 °N–33 °S. The reliability of the data was evaluated using explicit criteria and some types of data were re-analysed using consistent methods in order to derive a set of mutually consistent palaeoclimate estimates of mean temperature of the coldest month (MTCO), mean annual temperature (MAT), plant available moisture (PAM) and runoff (P-E). Cold-month temperature (MTCO) anomalies from plant data range from −1 to −2 K near sea level in Indonesia and the S Pacific, through −6 to −8 K at many high-elevation sites to −8 to −15 K in S China and the SE USA. MAT anomalies from groundwater or speleothems seem more uniform (−4 to −6 K), but the data are as yet sparse; a clear divergence between MAT and cold-month estimates from the same region is seen only in the SE USA, where cold-air advection is expected to have enhanced cooling in winter. Regression of all cold-month anomalies against site elevation yielded an estimated average cooling of −2.5 to −3 K at modern sea level, increasing to ≈−6 K by 3000 m. However, Neotropical sites showed larger-than-average sea-level cooling (−5 to −6 K) and a non-significant elevation effect, whereas W and S Pacific sites showed much less sea-level cooling (−1 K) and a stronger elevation effect. These findings support the inference that tropical sea-surface temperatures (SSTs) were lower than the CLIMAP estimates, but they limit the plausible average tropical sea-surface cooling, and they support the existence of CLIMAP-like geographic patterns in SST anomalies. Trends of PAM and lake levels indicate wet LGM conditions in the W USA, and at the highest elevations, with generally dry conditions elsewhere. These results suggest a colder-than-present ocean surface producing a weaker hydrological cycle, more arid continents, and arguably steeper-than-present terrestrial lapse rates. Such linkages are supported by recent observations on freezing-level height and tropical SSTs; moreover, simulations of “greenhouse” and LGM climates point to several possible feedback processes by which low-level temperature anomalies might be amplified aloft.
Abstract:
We give an a posteriori analysis of a semidiscrete discontinuous Galerkin scheme approximating solutions to a model of multiphase elastodynamics, which involves an energy density depending not only on the strain but also on the strain gradient. A key component in the analysis is the reduced relative entropy stability framework developed in Giesselmann (2014, SIAM J. Math. Anal., 46, 3518–3539). This framework allows energy-type arguments to be applied to continuous functions. Since we advocate the use of discontinuous Galerkin methods, we make use of two families of reconstructions, a set of discrete reconstructions and a set of elliptic reconstructions, to apply the reduced relative entropy framework in this setting.
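For orientation, the classical relative-entropy (relative energy) functional on which such stability frameworks build is, for an entropy/energy density \(\eta\),

```latex
\eta(u \mid v) := \eta(u) - \eta(v) - \mathrm{D}\eta(v)\,(u - v),
```

which is nonnegative and locally controls \(|u - v|^2\) when \(\eta\) is strictly convex. The reduced variant of Giesselmann (2014) adapts this construction to energies, such as the multiphase one here, that are not convex in all variables, so the formula above should be read only as the classical starting point.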
Abstract:
The weak-constraint inverse for nonlinear dynamical models is discussed and derived in terms of a probabilistic formulation. The well-known result that for Gaussian error statistics the minimum of the weak-constraint inverse is equal to the maximum-likelihood estimate is rederived. Then several methods based on ensemble statistics that can be used to find the smoother (as opposed to the filter) solution are introduced and compared to traditional methods. A strong point of the new methods is that they avoid the integration of adjoint equations, which is a complex task for real oceanographic or atmospheric applications. They also avoid iterative searches in a Hilbert space, and error estimates can be obtained without much additional computational effort. The feasibility of the new methods is illustrated in a two-layer quasigeostrophic model.
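For Gaussian error statistics, the connection stated above can be written out schematically (notation assumed here): the weak-constraint penalty function whose minimiser is the maximum-likelihood smoother estimate has the form

```latex
J(x_{0:K}) = \tfrac{1}{2}(x_0 - x_b)^{\mathsf T} B^{-1}(x_0 - x_b)
 + \tfrac{1}{2}\sum_{k=1}^{K} (y_k - H_k x_k)^{\mathsf T} R_k^{-1}(y_k - H_k x_k)
 + \tfrac{1}{2}\sum_{k=1}^{K} \bigl(x_k - M_k(x_{k-1})\bigr)^{\mathsf T} Q_k^{-1}\bigl(x_k - M_k(x_{k-1})\bigr),
```

where B, R_k and Q_k are the background, observation and model-error covariances and M_k is the (nonlinear) model. The ensemble methods discussed in the paper estimate the minimiser and its error covariance from sample statistics instead of iterating on J with adjoint equations.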
Abstract:
Three different types of maltodextrin encapsulated dehydrated blackberry fruit powders were obtained using vibrofluidized bed drying (VF), spray drying (SD), vacuum drying (VD), and freeze drying (FD). Moisture equilibrium data of blackberry pulp powders with 18% maltodextrin were determined at 20, 30, 40, and 50 degrees C using the static gravimetric method for the water activity range of 0.06-0.90. Experimental equilibrium moisture content data versus water activity were fit to the Guggenheim-Anderson-de Boer (GAB) model. Agreement was found between experimental and calculated values. The isosteric heat of sorption of water was determined using the Clausius-Clapeyron equation from the equilibrium data; isosteric heats of sorption were found to increase with increasing temperature and could be adjusted by an exponential relationship. For freeze dried, vibrofluidized, and vacuum dried pulp powder samples, the isosteric heats of sorption were lower (more negative) than those calculated for spray dried samples. The enthalpy-entropy compensation theory was applied to sorption isotherms and plots of Delta H versus Delta S provided the isokinetic temperatures, indicating an enthalpy-controlled sorption process.
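The two standard relations involved are, in their usual textbook forms (symbols as commonly used, not necessarily the paper's notation), the GAB isotherm and the Clausius-Clapeyron expression for the net isosteric heat of sorption:

```latex
X_{\mathrm{eq}} = \frac{X_m\, C\, K\, a_w}{(1 - K a_w)\,(1 - K a_w + C K a_w)},
\qquad
q_{\mathrm{st}} = -R \left[\frac{\partial \ln a_w}{\partial (1/T)}\right]_{X_{\mathrm{eq}}},
```

where X_eq is the equilibrium moisture content at water activity a_w, X_m the monolayer moisture content, C and K the GAB constants, and R the gas constant; q_st follows from the slope of ln a_w versus 1/T at constant moisture content, which is how isosteric heats are obtained from fitted isotherms.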
Abstract:
Evidence of jet precession in many galactic and extragalactic sources has been reported in the literature. Much of this evidence is based on studies of the kinematics of the jet knots, which depends on the correct identification of the components to determine their respective proper motions and position angles on the plane of the sky. Identification problems related to fitting procedures, as well as observations poorly sampled in time, may hinder the tracking of the components over time, which consequently might contribute to a misinterpretation of the data. In order to deal with these limitations, we introduce a very powerful statistical tool to analyse jet precession: the cross-entropy method for continuous multi-extremal optimization. Based only on the raw data of the jet components (right ascension and declination offsets from the core), the cross-entropy method searches for the precession model parameters that best represent the data. In this work we present a large number of tests to validate this technique, using synthetic precessing jets built from a given set of precession parameters. With the aim of recovering these parameters, we applied the cross-entropy method to our precession model, exhaustively varying the quantities associated with the method. Our results have shown that, even in the most challenging tests, the cross-entropy method was able to find the correct parameters within a 1 per cent level. Even for a non-precessing jet, our optimization method could successfully point out the lack of precession.
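The cross-entropy method for continuous optimisation is compact enough to sketch; the version below (Gaussian sampling with elite-sample updates, applied to a toy parameter-recovery problem) illustrates only the mechanics and is not the authors' precession model or code.

```python
import numpy as np

def cross_entropy_minimize(objective, mu, sigma, n_samples=200, n_elite=20,
                           n_iter=50, smoothing=0.7, seed=0):
    """Minimise `objective` over continuous parameters with the cross-entropy
    method: sample candidates from a Gaussian, keep the best (elite) fraction,
    refit the Gaussian to the elite set, and repeat until the sampler collapses."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    for _ in range(n_iter):
        samples = rng.normal(mu, sigma, size=(n_samples, mu.size))
        scores = np.apply_along_axis(objective, 1, samples)
        elite = samples[np.argsort(scores)[:n_elite]]
        mu = smoothing * elite.mean(axis=0) + (1 - smoothing) * mu
        sigma = smoothing * elite.std(axis=0) + (1 - smoothing) * sigma
    return mu

# Toy problem: recover the parameters of a noisy sinusoid (hypothetical data)
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 200)
truth = np.array([2.0, 1.3, 0.4])            # amplitude, frequency, phase
data = truth[0] * np.sin(truth[1] * t + truth[2]) + rng.normal(0, 0.05, t.size)
mse = lambda p: np.mean((p[0] * np.sin(p[1] * t + p[2]) - data) ** 2)
print(cross_entropy_minimize(mse, mu=[1.0, 1.0, 0.0], sigma=[1.0, 0.5, 1.0]))
```

In the precession application the sampled parameters would instead be the precession model quantities, and the objective would measure the mismatch between modelled and observed component offsets.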
Abstract:
We present a new technique for obtaining model fittings to very long baseline interferometric images of astrophysical jets. The method minimizes a performance function proportional to the sum of the squared difference between the model and observed images. The model image is constructed by summing N(s) elliptical Gaussian sources characterized by six parameters: two-dimensional peak position, peak intensity, eccentricity, amplitude, and orientation angle of the major axis. We present results for the fitting of two main benchmark jets: the first constructed from three individual Gaussian sources, the second formed by five Gaussian sources. Both jets were analyzed by our cross-entropy technique in finite and infinite signal-to-noise regimes, the background noise chosen to mimic that found in interferometric radio maps. Those images were constructed to simulate most of the conditions encountered in interferometric images of active galactic nuclei. We show that the cross-entropy technique is capable of recovering the parameters of the sources with a similar accuracy to that obtained from the very traditional Astronomical Image Processing System Package task IMFIT when the image is relatively simple (e.g., few components). For more complex interferometric maps, our method displays superior performance in recovering the parameters of the jet components. Our methodology is also able to show quantitatively the number of individual components present in an image. An additional application of the cross-entropy technique to a real image of a BL Lac object is shown and discussed. Our results indicate that our cross-entropy model-fitting technique must be used in situations involving the analysis of complex emission regions having more than three sources, even though it is substantially slower than current model-fitting tasks (at least 10,000 times slower for a single processor, depending on the number of sources to be optimized). As in the case of any model fitting performed in the image plane, caution is required in analyzing images constructed from a poorly sampled (u, v) plane.
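A minimal sketch of the model-image construction and of the squared-difference performance function follows; the parameterisation of the elliptical Gaussians is illustrative and may not match the exact six-parameter convention of the paper.

```python
import numpy as np

def elliptical_gaussian(x, y, x0, y0, amp, fwhm_major, ecc, theta):
    """One component: peak position (x0, y0), peak intensity amp, major-axis
    FWHM, eccentricity ecc (0 = circular) and major-axis orientation theta."""
    sig_maj = fwhm_major / 2.3548
    sig_min = sig_maj * np.sqrt(1.0 - ecc ** 2)
    dx, dy = x - x0, y - y0
    u = dx * np.cos(theta) + dy * np.sin(theta)      # along the major axis
    v = -dx * np.sin(theta) + dy * np.cos(theta)     # along the minor axis
    return amp * np.exp(-0.5 * ((u / sig_maj) ** 2 + (v / sig_min) ** 2))

def model_image(components, shape):
    """Sum of N_s elliptical Gaussians; `components` is a list of 6-tuples."""
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    return sum(elliptical_gaussian(x, y, *c) for c in components)

def performance(components, observed):
    """Objective minimised by the cross-entropy optimiser."""
    return np.sum((model_image(components, observed.shape) - observed) ** 2)

# Hypothetical three-component benchmark jet plus background noise
truth = [(20, 32, 1.0, 6.0, 0.3, 0.2),
         (40, 30, 0.6, 8.0, 0.5, 0.4),
         (55, 28, 0.3, 10.0, 0.6, 0.6)]
obs = model_image(truth, (64, 80)) + np.random.default_rng(2).normal(0, 0.01, (64, 80))
print(performance(truth, obs))
```

The optimiser then searches the 6·N_s-dimensional parameter space for the component list that minimises this performance function.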
Abstract:
Non-linear methods for estimating variability in time-series are currently in widespread use. Among such methods are approximate entropy (ApEn) and sample entropy (SampEn). The applicability of ApEn and SampEn in analyzing data is evident and their use is increasing. However, consistency is a point of concern in these tools, i.e., the classification of the temporal organization of a data set might indicate a relatively less ordered series in relation to another when the opposite is true. As highlighted by their proponents themselves, ApEn and SampEn might present incorrect results due to this lack of consistency. In this study, we present a method which gains consistency by using ApEn repeatedly over a wide range of combinations of window lengths and matching error tolerances. The tool is called volumetric approximate entropy, vApEn. We analyze nine artificially generated prototypical time-series with different degrees of temporal order (combinations of sine waves, logistic maps with different control parameter values, random noises). While ApEn/SampEn clearly fail to consistently identify the temporal order of the sequences, vApEn correctly does. In order to validate the tool we performed shuffled and surrogate data analyses. Statistical analysis confirmed the consistency of the method. (C) 2008 Elsevier Ltd. All rights reserved.
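For reference, a compact implementation of standard ApEn, the quantity that vApEn evaluates repeatedly over a grid of window lengths m and tolerances r, can be sketched as follows; the vApEn aggregation itself is not reproduced here.

```python
import numpy as np

def apen(u, m=2, r_factor=0.2):
    """Approximate entropy ApEn(m, r) of a 1-D series; the tolerance r is given
    as a fraction of the series' standard deviation (a common convention)."""
    u = np.asarray(u, dtype=float)
    r = r_factor * u.std()

    def phi(m):
        # All overlapping templates of length m
        x = np.array([u[i:i + m] for i in range(len(u) - m + 1)])
        # Chebyshev distance between every pair of templates
        d = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=2)
        c = (d <= r).mean(axis=1)       # fraction of templates within r (self-matches included)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(3)
t = np.arange(1000)
print(apen(np.sin(0.1 * t)))            # regular signal -> low ApEn
print(apen(rng.standard_normal(1000)))  # white noise    -> higher ApEn
```

vApEn then repeats such a computation over many (m, r) combinations and aggregates the resulting values, which is what restores consistency when a single (m, r) choice would rank two series incorrectly.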