953 results for "Minimum multiplicity"
Abstract:
Ocean observations carried out in the framework of the Collaborative Research Center 754 (SFB 754) "Climate-Biogeochemistry Interactions in the Tropical Ocean" are used to study (1) the structure of tropical oxygen minimum zones (OMZs), (2) the processes that contribute to the oxygen budget, and (3) long-term changes in the oxygen distribution. The OMZ of the eastern tropical North Atlantic (ETNA), located between the well-ventilated subtropical gyre and the equatorial oxygen maximum, is composed of a deep OMZ at about 400 m depth with its core region centred at about 20° W, 10° N and a shallow OMZ at about 100 m depth with lowest oxygen concentrations in proximity to the coastal upwelling region off Mauritania and Senegal. The oxygen budget of the deep OMZ is given by oxygen consumption mainly balanced by the oxygen supply due to meridional eddy fluxes (about 60%) and vertical mixing (about 20%, locally up to 30%). Advection by zonal jets is crucial for the establishment of the equatorial oxygen maximum. In the latitude range of the deep OMZ, it dominates the oxygen supply in the upper 300 to 400 m and generates the intermediate oxygen maximum between deep and shallow OMZs. Water mass ages from transient tracers indicate substantially older water masses in the core of the deep OMZ (about 120-180 years) compared to regions north and south of it. The deoxygenation of the ETNA OMZ during recent decades suggests a substantial imbalance in the oxygen budget: about 10% of the oxygen consumption during that period was not balanced by ventilation. Long-term oxygen observations show variability on interannual, decadal and multidecadal time scales that can partly be attributed to circulation changes. In comparison to the ETNA OMZ the eastern tropical South Pacific OMZ shows a similar structure including an equatorial oxygen maximum driven by zonal advection, but overall much lower oxygen concentrations approaching zero in extended regions. 
As the shape of the OMZs is set by ocean circulation, the widespread misrepresentation of the intermediate circulation in ocean circulation models substantially contributes to their oxygen bias, which might have significant impacts on predictions of future oxygen levels.
Abstract:
Oxygen minimum zones are expanding globally, and at present account for around 20-40% of oceanic nitrogen loss. Heterotrophic denitrification and anammox (anaerobic ammonium oxidation with nitrite) are responsible for most nitrogen loss in these low-oxygen waters. Anammox is particularly significant in the eastern tropical South Pacific, one of the largest oxygen minimum zones globally. However, the factors that regulate anammox-driven nitrogen loss have remained unclear. Here, we present a comprehensive nitrogen budget for the eastern tropical South Pacific oxygen minimum zone, using measurements of nutrient concentrations, experimentally determined rates of nitrogen transformation and a numerical model of export production. Anammox was the dominant mode of nitrogen loss at the time of sampling. Rates of anammox, and related nitrogen transformations, were greatest in the productive shelf waters, and tailed off with distance from the coast. Within the shelf region, anammox activity peaked in both upper and bottom waters. Overall, rates of nitrogen transformation, including anammox, were strongly correlated with the export of organic matter. We suggest that the sinking of organic matter, and thus the release of ammonium into the water column, together with benthic ammonium release, fuel nitrogen loss from oxygen minimum zones.
Abstract:
Using a Dynamic General Equilibrium (DGE) model, this study examines the effects of monetary policy in economies where the minimum wage is binding. The findings show that the monetary-policy effect in a binding-minimum-wage economy is relatively small and quite persistent. This result suggests that these two characteristics of monetary policy in the minimum-wage model are rather different from those in the union-negotiation model, which is often assumed to account for industrial economies.
Abstract:
This paper proposes an interleaved multiphase buck converter with a minimum time control strategy for envelope amplifiers in high efficiency RF power amplifiers. The envelope amplifier combines the proposed converter with a linear regulator in series. High system efficiency can be obtained by modulating the supply voltage of the envelope amplifier with the fast output voltage variation of the converter, which works at several particular duty cycles that achieve total ripple cancellation. The transient model for minimum time control is explained, and the calculation of the transient times, which are pre-calculated and stored in a look-up table, is presented. The filter design trade-off that limits the envelope modulation capability is also discussed. The experimental results verify the fast voltage transients obtained with a 4-phase buck prototype.
Abstract:
In this paper, an interleaved multiphase buck converter with a minimum time control strategy for envelope amplifiers in high efficiency RF power amplifiers is proposed. The envelope amplifier combines the proposed converter with a linear regulator in series. High efficiency of the envelope amplifier can be obtained by modulating the supply voltage of the linear regulator. Instead of tracking the envelope, the buck converter produces discrete output voltages corresponding to particular duty cycles that achieve total ripple cancellation. The transient model for minimum time control is explained, and the calculation of the transient times, which are pre-calculated and stored in a look-up table, is presented. The filter design trade-off that limits the envelope modulation capability is also discussed. The experimental results verify the fast voltage transients obtained with a 4-phase buck prototype.
Abstract:
A power amplifier supplied with a constant voltage has very low efficiency in the transmitter. A DC-DC converter in series with a linear regulator can be used to obtain voltage modulation. Since this converter should be able to change its output voltage very fast, a multiphase buck converter with a minimum time control strategy is proposed. To modulate the supply voltage of the envelope amplifier, the multiphase converter works at particular duty cycles (i/n, i = 1, 2, ..., n, where n is the number of phases) to generate discrete output voltages; at these duty cycles the output current ripple is completely cancelled. The transition times for the minimum time control are pre-calculated and stored in a look-up table. The theoretical background, the system model needed to calculate the transition times, and the experimental results obtained with a 4-phase buck prototype are given.
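The ripple-cancellation property at the duty cycles i/n described above can be checked numerically. The sketch below is illustrative only (normalized units Vin/L = 1 and period T = 1 are assumptions, not values from the paper): it sums the periodic inductor-current ripple of n phase-shifted phases and shows that the total is flat exactly when d = i/n.

```python
# Minimal numerical sketch of interleaved-buck ripple cancellation.
# Assumptions (illustrative, not from the paper): normalized Vin/L = 1, T = 1.

def phase_current(t, d, T=1.0):
    """Periodic inductor-current ripple of one phase, relative to its value
    at t = 0: slope (1 - d) during the on-time, -d during the off-time."""
    t = t % T
    if t < d * T:
        return (1.0 - d) * t
    return (1.0 - d) * d * T - d * (t - d * T)

def total_ripple_pp(n, d, samples=1000):
    """Peak-to-peak ripple of the summed current of n interleaved phases,
    each shifted by T/n."""
    totals = []
    for s in range(samples):
        t = s / samples
        totals.append(sum(phase_current(t + j / n, d) for j in range(n)))
    return max(totals) - min(totals)

n = 4
# At d = i/n the phase ripples cancel: the summed current is constant.
print(total_ripple_pp(n, 1 / n))   # ~0
print(total_ripple_pp(n, 2 / n))   # ~0
# At an intermediate duty cycle the cancellation is only partial.
print(total_ripple_pp(n, 0.30))    # > 0
```

This is why the converter restricts itself to the discrete levels reached at d = i/n: between those duty cycles the interleaving only attenuates, rather than cancels, the output current ripple.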
Abstract:
This study analyses the influence of inoculation with Azospirillum brasilense, with Pseudomonas fluorescens, and with both rhizobacteria together on the aromatic plant species Ocimum basilicum var. genovesse, Ocimum basilicum var. minimum, Petroselinum sativum var. lisa and Salvia officinalis. Their morphological development was evaluated using three parameters: stem length, fresh weight and leaf area, together with any increase in the essential oil content of the treated plants. Cultivation was carried out in ForestPot® 300 cells, on a 3:1 peat-vermiculite mixture, with daily irrigation and no added fertilizers. The results indicate that, for all species and all parameters studied, inoculation with the rhizobacteria increased development and essential oil content compared with the Control treatment, except for the stem length of the two O. basilicum varieties when inoculated with P. fluorescens, which showed a slight decrease relative to the Control. In the case of P. sativum var. lisa, only the inoculated plants survived. These results suggest that inoculation with these growth-promoting rhizobacteria could be of great importance as a substitute for mineral fertilizers, yielding a more ecological and environmentally friendly production.
Abstract:
Safety assessment of historic masonry structures is an open problem. The material is heterogeneous and anisotropic, the pre-existing state of stress is hard to know, and the boundary conditions are uncertain. In the early 1950s it was proven that limit analysis is applicable to this kind of structure, and it has been considered a suitable tool since then. In cases where no sliding occurs, the application of the standard limit analysis theorems constitutes an excellent tool due to its simplicity and robustness. It is not necessary to know the actual stress state: it is enough to find any equilibrium solution that satisfies the limit constraints of the material, in the certainty that its load will be equal to or less than the actual load at the onset of collapse. Furthermore, this onset-of-collapse load is unique (uniqueness theorem), and it can be obtained as the optimum of either of a pair of dual convex mathematical programs. However, if mechanisms at the onset of collapse may involve sliding, any solution must satisfy both the static and the kinematic constraints, as well as a special kind of disjunctive constraints linking the two, which can be formulated as complementarity constraints. In this latter case the existence of a single solution is not guaranteed, so other methods are needed to treat the uncertainty associated with the multiplicity of solutions. In recent years, research has focused on finding an absolute minimum below which collapse is impossible. This method is easy to formulate from a mathematical point of view, but computationally intractable, owing to the complementarity constraints 0 ≤ y ⊥ z ≥ 0, which are neither convex nor smooth. The resulting decision problem is NP-complete (Non-deterministic Polynomial complete), and the corresponding global optimization problem is NP-hard.
Nevertheless, obtaining a solution (with no guarantee of success) is an affordable problem. This thesis proposes to solve that problem through Sequential Linear Programming, taking advantage of the special characteristics of the complementarity constraints, which written in bilinear form read y z = 0, y ≥ 0, z ≥ 0, and of the fact that the complementarity error (in bilinear form) is an exact penalty function. But when it comes to finding the worst solution, the equivalent global optimization problem is intractable (NP-hard). Furthermore, until the existence of a maximum or minimum principle is demonstrated, it is questionable whether the effort spent approximating this minimum is justified. In Chapter 5, it is proposed to find the frequency distribution of the load factor over all possible onset-of-collapse solutions for a simple example. For this purpose, solutions are sampled by the Monte Carlo method, using an exact polytope-computation method as a benchmark. The ultimate goal is to determine to what extent the search for the absolute minimum is justified, and to propose an alternative safety-assessment approach based on probabilities. The frequency distributions of the load factors obtained for the case studied show that both the maximum and the minimum load factors are very infrequent, and the more so the more perfect and continuous the contact is. The results confirm the interest of developing new probabilistic methods. In Chapter 6, such a method is proposed, based on obtaining multiple solutions from random starting points and qualifying the results through Order Statistics.
The purpose is to determine, for each solution, the probability of the onset of collapse. The method is applied (following the expectation reduction proposed by Ordinal Optimization) to obtain a solution that lies within a given percentage of the worst ones. Finally, in Chapter 7, hybrid methods incorporating metaheuristics are proposed for cases in which the search for the global minimum is justified.
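The expectation reduction invoked by Ordinal Optimization rests on a simple order-statistics fact: if N starting points are sampled independently, the probability that at least one resulting solution falls in the worst α-fraction of all solutions is 1 - (1 - α)^N. A minimal sketch (the numbers are illustrative, not taken from the thesis):

```python
import math

def prob_hit_worst_fraction(alpha, n_samples):
    """P(at least one of n_samples i.i.d. random solutions lies in the
    worst alpha-fraction of all onset-of-collapse solutions)."""
    return 1.0 - (1.0 - alpha) ** n_samples

def samples_needed(alpha, confidence):
    """Smallest N with 1 - (1 - alpha)^N >= confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - alpha))

# Example: to reach the worst 5% of solutions with 95% confidence,
# 59 random starting points suffice, independently of problem size.
n = samples_needed(0.05, 0.95)
print(n)                                  # 59
print(prob_hit_worst_fraction(0.05, n))   # ~0.95
```

The appeal of this bound is that N depends only on α and the desired confidence, not on the (possibly enormous) number of onset-of-collapse solutions.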
Abstract:
The study of k-sets is a very relevant topic in computational geometry. In particular, the maximum and minimum number of k-sets in sets of points of the plane in general position have been studied at great length in the literature. With respect to the maximum number of k-sets, lower bounds for this maximum have been provided by Erdős et al., by Edelsbrunner and Welzl, and later by Tóth, while Dey stated an upper bound. The minimum number of k-sets was determined by Erdős et al. and, independently, by Lovász et al. In this paper the authors give an example of a set of n points in the plane in general position (no three collinear) in which the minimum number of points that can take part in at least one k-set is attained for every k with 1 ≤ k < n/2. The authors also extend the result of Erdős et al. (1973), on the minimum number of points in general position that can take part in a k-set, to sets of n points not necessarily in general position. This work thus complements the classic works mentioned above.
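For small point sets, the k-sets discussed above can be enumerated by brute force: a size-k subset is a k-set exactly when it consists of the k points extreme in some direction, and the only candidate directions are slightly rotated normals of lines through pairs of points. A sketch under those assumptions (the pentagon test data is illustrative, not from the paper):

```python
# Brute-force enumeration of k-sets: subsets of k points that can be
# separated from the remaining points by a straight line.
import math
from itertools import combinations

def k_sets(points, k):
    """Return all k-sets of a planar point set in general position.
    The k extreme points change only at directions normal to a line
    through two points, so sampling each such normal, perturbed to
    either side (and reversed), covers every k-set."""
    eps = 1e-7
    found = set()
    for (ax, ay), (bx, by) in combinations(points, 2):
        normal = math.atan2(by - ay, bx - ax) + math.pi / 2.0
        for theta in (normal - eps, normal + eps,
                      normal + math.pi - eps, normal + math.pi + eps):
            dx, dy = math.cos(theta), math.sin(theta)
            order = sorted(points, key=lambda p: p[0] * dx + p[1] * dy)
            found.add(frozenset(order[:k]))
    return found

# Illustrative check: for n points in convex position every k-set is a
# contiguous arc of the hull, so there are exactly n of them per k < n/2.
pentagon = [(math.cos(2 * math.pi * i / 5), math.sin(2 * math.pi * i / 5))
            for i in range(5)]
print(len(k_sets(pentagon, 1)))  # 5
print(len(k_sets(pentagon, 2)))  # 5
```

The O(n^3 log n) cost of this sketch is irrelevant for its purpose: it is a checking tool for small instances, not a competitor to the combinatorial bounds discussed in the paper.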
Abstract:
In this paper, calculus of variations and combined blade element and momentum theory (BEMT) are used to demonstrate that, in hover, when neither root nor tip losses are considered, the rotor that minimizes the total power (MPR) generates an induced velocity that varies linearly along the blade span. The angle of attack of every blade element is constant and equal to its optimum value. The traditional ideal twist (ITR) and optimum (OR) rotors are revisited in the context of this variational framework. Two more optimum rotors are obtained considering root and tip losses: the ORL and the MPRL. A comparison between these five rotors is presented and discussed. The MPR and MPRL present a remarkable saving of power for low values of both the thrust coefficient and the maximum aerodynamic efficiency. The results obtained can be exploited to improve the aerodynamic behaviour of rotary-wing micro air vehicles (MAVs). A comparison with experimental results from the literature is presented.
Abstract:
In this paper we propose four metaheuristic-based approximation algorithms for the Minimum Vertex Floodlight Set problem. Urrutia et al. [9] solved the combinatorial problem, although it is strongly believed that the algorithmic problem is NP-hard. We conclude that, on average, the minimum number of vertex floodlights needed to illuminate an orthogonal polygon with n vertices is n/4.29.
Abstract:
The problem of finding a minimum area polygonization for a given set of points in the plane, the Minimum Area Polygonization (MAP) problem, is NP-hard. Due to the complexity of the problem, we aim at developing algorithms that obtain approximate solutions. In this work, we suggest different strategies for minimizing the polygonization area. We propose algorithms that search for approximate solutions to the MAP problem, and we present an experimental study on a set of instances of the MAP problem.