917 results for Inverse Problem in Optics
Abstract:
The coexistence of a large number of phytoplankton species on a seemingly limited variety of resources is a classical problem in ecology, known as ‘the paradox of the plankton’. Strong fluctuations in species abundance due to external factors or competitive interactions, leading to oscillations, chaos and short-term equilibria, have so far been cited to explain multi-species coexistence and the biodiversity of phytoplankton. However, none of these explanations has been universally accepted. Qualitative inspection and statistical analysis of our field data establish two distinct roles of toxin-producing phytoplankton (TPP): toxin allelopathy weakens the interspecific competition among phytoplankton groups, and the inhibition due to ingestion of toxic substances reduces the abundance of the grazer zooplankton. Structuring the overall plankton population as a combination of non-toxic phytoplankton (NTP), toxic phytoplankton and zooplankton, we offer here a novel solution to the plankton paradox governed by the activity of TPP. We demonstrate our findings through qualitative analysis of our sample data followed by analysis of a mathematical model.
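The abstract does not give the model equations; purely as an illustration of the NTP–TPP–zooplankton structure it describes, a hypothetical three-component ODE sketch (all functional forms and parameter values below are assumptions, not the authors' model) might look as follows.

```python
# Hypothetical sketch (not the authors' model): a minimal NTP-TPP-zooplankton
# system in which toxin release by P2 weakens the competition felt by P1 and
# reduces zooplankton growth from grazing on P2. Parameter values are
# illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

r1, r2 = 1.0, 0.8          # intrinsic growth rates of NTP (P1) and TPP (P2)
K = 1.0                    # shared carrying capacity
a12, a21 = 0.6, 0.7        # interspecific competition coefficients
gamma = 0.5                # allelopathic weakening of competition felt by P1
g1, g2 = 0.4, 0.4          # grazing rates on P1 and P2
e = 0.3                    # zooplankton conversion efficiency
theta = 0.6                # toxin-induced inhibition of grazer growth on P2
m = 0.1                    # zooplankton mortality

def rhs(t, y):
    P1, P2, Z = y
    dP1 = r1 * P1 * (1 - (P1 + a12 * P2 / (1 + gamma * P2)) / K) - g1 * P1 * Z
    dP2 = r2 * P2 * (1 - (P2 + a21 * P1) / K) - g2 * P2 * Z
    dZ = e * (g1 * P1 + (1 - theta) * g2 * P2) * Z - m * Z
    return [dP1, dP2, dZ]

sol = solve_ivp(rhs, (0, 500), [0.3, 0.2, 0.1])
print(sol.y[:, -1])  # long-run abundances of P1, P2, Z
```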
Abstract:
Variational data assimilation is commonly used in environmental forecasting to estimate the current state of the system from a model forecast and observational data. The assimilation problem can be written simply in the form of a nonlinear least-squares optimization problem. However, the practical solution of the problem in large systems requires many careful choices to be made in the implementation. In this article we present the theory of variational data assimilation and then discuss in detail how it is implemented in practice. Current solutions and open questions are discussed.
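For reference, the nonlinear least-squares form alluded to here is conventionally written as the cost function below (standard notation, which may differ from the article's), where x_b is the background forecast, B its error covariance, y the observations, H the observation operator and R the observation-error covariance.

```latex
J(\mathbf{x}) = \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_b)^{\mathsf T}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
              + \tfrac{1}{2}\,\bigl(\mathbf{y}-\mathcal{H}(\mathbf{x})\bigr)^{\mathsf T}\mathbf{R}^{-1}\bigl(\mathbf{y}-\mathcal{H}(\mathbf{x})\bigr),
\qquad
\mathbf{x}_a = \arg\min_{\mathbf{x}} J(\mathbf{x}).
```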
Abstract:
We study variants of the dominating set problem in social networks. While randomised algorithms for the minimum weighted dominating set problem and the minimum alpha and alpha-rate domination problems on simple graphs already exist in the literature, we propose here a randomised algorithm for the minimum weighted alpha-rate dominating set problem which is, to the best of our knowledge, the first such algorithm. A theoretical approximation bound based on a simple randomised rounding technique is given. The algorithm is implemented in Python and applied to a UK Twitter mentions network, using a measure of individuals’ influence (Klout) as weights. We argue that the weights of vertices can be interpreted as the costs of getting those individuals on board for a campaign or a behaviour-change intervention. The minimum weighted alpha-rate dominating set problem can therefore be seen as finding a set that minimises the total cost while ensuring that each individual in the network has at least an alpha fraction of its neighbours in the chosen set. We also test our algorithm on generated graphs with several thousand vertices and edges. Our results on this real-life Twitter network and on generated graphs show that the implementation is reasonably efficient and can therefore be used in real-life applications when creating social-network-based interventions, designing social media campaigns and potentially improving users’ social media experience.
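The abstract does not spell out the algorithm; as a hedged illustration of the general "randomised rounding plus repair" idea for weighted alpha-rate domination (not the authors' algorithm, and with an illustrative rounding probability), a minimal sketch could be:

```python
# Hypothetical sketch of a randomised heuristic for the minimum weighted
# alpha-rate dominating set problem: every vertex must have at least an
# alpha fraction of its neighbours inside the chosen set D, while the total
# weight of D is kept small. Not the authors' algorithm.
import random

def alpha_rate_dominating_set(adj, weights, alpha, p=0.5, seed=0):
    rng = random.Random(seed)
    # Randomised rounding step: pick each vertex independently; cheaper
    # vertices (relative to the maximum weight) are picked more often.
    wmax = max(weights.values())
    D = {v for v in adj if rng.random() < p * (1 - weights[v] / (2 * wmax))}
    # Repair step: while some vertex lacks alpha coverage, add the cheapest
    # missing neighbour.
    changed = True
    while changed:
        changed = False
        for v, nbrs in adj.items():
            if not nbrs:
                continue
            if sum(1 for u in nbrs if u in D) < alpha * len(nbrs):
                cheapest = min((u for u in nbrs if u not in D),
                               key=lambda u: weights[u])
                D.add(cheapest)
                changed = True
    return D

# Toy usage on a 4-cycle with unit weights and alpha = 0.5
adj = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [1, 3]}
w = {v: 1.0 for v in adj}
print(alpha_rate_dominating_set(adj, w, alpha=0.5))
```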
Abstract:
The research network “Basic Concepts for Convection Parameterization in Weather Forecast and Climate Models” was organized with European funding (COST Action ES0905) for the period 2010–2014. Its extensive brainstorming suggests how the subgrid-scale parameterization problem in atmospheric modeling, especially for convection, can be examined and developed from a robust theoretical basis. Our main caution concerns the current emphasis on massive observational data analyses and process studies. The closure and the entrainment–detrainment problems are identified as the two highest priorities for convection parameterization under the mass-flux formulation. We also emphasize the need for a drastic change in the current European research culture, in terms of policies and funding, so as not to further deplete the vision of the European researchers focusing on these basic issues.
Abstract:
We study the scaling properties and Kraichnan–Leith–Batchelor (KLB) theory of forced inverse cascades in generalized two-dimensional (2D) fluids (α-turbulence models) simulated at resolution 8192 × 8192. We consider α = 1 (surface quasigeostrophic flow), α = 2 (2D Euler flow) and α = 3. The forcing scale is well resolved, a direct cascade is present and there is no large-scale dissipation. Coherent vortices spanning a range of sizes, most larger than the forcing scale, are present for both α = 1 and α = 2. The active scalar field for α = 3 contains comparatively few and small vortices. The energy spectral slopes in the inverse cascade are steeper than the KLB prediction −(7−α)/3 in all three systems. Since we stop the simulations well before the cascades have reached the domain scale, vortex formation and spectral steepening are not due to condensation effects; nor are they caused by large-scale dissipation, which is absent. One- and two-point p.d.f.s, hyperflatness factors and structure functions indicate that the inverse cascades are intermittent and non-Gaussian over much of the inertial range for α = 1 and α = 2, while the α = 3 inverse cascade is much closer to Gaussian and non-intermittent. For α = 3 the steep spectrum is close to that associated with enstrophy equipartition. Continuous wavelet analysis shows approximate KLB scaling ℰ(k) ∝ k^{−2} (α = 1) and ℰ(k) ∝ k^{−5/3} (α = 2) in the interstitial regions between the coherent vortices. Our results demonstrate that coherent vortex formation (α = 1 and α = 2) and non-realizability (α = 3) cause 2D inverse cascades to deviate from the KLB predictions, but that the flow between the vortices exhibits KLB scaling and non-intermittent statistics for α = 1 and α = 2.
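For reference, the KLB prediction quoted above evaluates to the following inverse-cascade slopes for the three systems studied:

```latex
\mathcal{E}(k) \propto k^{-(7-\alpha)/3}:\qquad
\alpha = 1 \;\Rightarrow\; k^{-2},\qquad
\alpha = 2 \;\Rightarrow\; k^{-5/3},\qquad
\alpha = 3 \;\Rightarrow\; k^{-4/3}.
```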
Abstract:
In this paper, we develop an energy-efficient resource-allocation scheme with proportional fairness for downlink multiuser orthogonal frequency-division multiplexing (OFDM) systems with distributed antennas. Our aim is to maximize energy efficiency (EE) under constraints on the overall transmit power of each remote access unit (RAU), proportional fairness of the data rates, and bit error rates (BERs). Because of the nonconvex nature of the optimization problem, obtaining the optimal solution is extremely computationally complex. Therefore, we develop a low-complexity suboptimal algorithm that separates subcarrier allocation and power allocation. For the low-complexity algorithm, we first allocate subcarriers by assuming equal power distribution. Then, by exploiting the properties of fractional programming, we transform the nonconvex optimization problem in fractional form into an equivalent optimization problem in subtractive form, which admits a tractable solution. Next, an optimal energy-efficient power-allocation algorithm is developed to maximize EE while maintaining proportional fairness. Through computer simulation, we demonstrate the effectiveness of the proposed low-complexity algorithm and illustrate the fundamental trade-off between energy-efficient and spectral-efficient transmission designs.
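The fractional-to-subtractive transformation mentioned here is the standard device of fractional programming; a minimal single-link Dinkelbach-style sketch (illustrative rate function and constants, not the paper's multiuser OFDM algorithm) is shown below.

```python
# Minimal Dinkelbach-style sketch of the fractional -> subtractive transform
# for energy efficiency EE(p) = R(p) / (P_c + p). Single carrier, single user,
# illustrative numbers only; not the paper's multiuser algorithm.
import math

g = 5.0          # channel gain-to-noise ratio
P_c = 0.2        # fixed circuit power (W)
P_max = 1.0      # transmit power budget (W)

def rate(p):
    return math.log2(1.0 + g * p)

q = 0.0  # current EE estimate
for _ in range(50):
    # Maximise the subtractive objective R(p) - q * (P_c + p) over [0, P_max]:
    # d/dp [log2(1 + g p) - q p] = 0 gives p = 1/(q ln 2) - 1/g, clipped.
    p = min(P_max, max(0.0, 1.0 / (q * math.log(2)) - 1.0 / g)) if q > 0 else P_max
    q_new = rate(p) / (P_c + p)
    if abs(q_new - q) < 1e-9:
        break
    q = q_new

print(f"EE-optimal power {p:.4f} W, energy efficiency {q:.4f} bits/s/Hz per watt")
```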
Abstract:
The vast majority of putative solutions to the liar paradox face the infamous revenge problem. In recent work, however, Kevin Scharp has extensively developed an exciting and highly novel ‘inconsistency approach’ to the paradox that, he claims, does not face revenge. If Scharp is right, then this represents a significant step forward in our attempts to solve the liar paradox. However, in this paper, I raise a revenge problem that faces Scharp’s inconsistency approach.
Abstract:
A truly variance-minimizing filter is introduced and its performance is demonstrated with the Korteweg–de Vries (KdV) equation and with a multilayer quasigeostrophic model of the ocean area around South Africa. It is recalled that Kalman-like filters are not variance minimizing for nonlinear model dynamics and that four-dimensional variational data assimilation (4DVAR)-like methods relying on perfect model dynamics have difficulty with providing error estimates. The new method does not have these drawbacks. In fact, it combines advantages from both methods in that it does provide error estimates while automatically having balanced states after analysis, without extra computations. It is based on ensemble or Monte Carlo integrations to simulate the probability density of the model evolution. When observations are available, the so-called importance resampling algorithm is applied. From Bayes’s theorem it follows that each ensemble member receives a new weight dependent on its ‘distance’ to the observations. Because the weights are strongly varying, a resampling of the ensemble is necessary. This resampling is done such that members with high weights are duplicated according to their weights, while low-weight members are largely ignored. In passing, it is noted that data assimilation is not an inverse problem by nature, although it can be formulated that way. Also, it is shown that the posterior variance can be larger than the prior if the usual Gaussian framework is set aside. However, in the examples presented here, the entropy of the probability densities is decreasing. The application to the ocean area around South Africa, governed by strongly nonlinear dynamics, shows that the method is working satisfactorily. The strong and weak points of the method are discussed and possible improvements are proposed.
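A minimal sketch of the importance-resampling analysis step described here, assuming Gaussian observation errors and a linear observation operator (an illustration, not the article's implementation):

```python
# Importance-resampling (particle filter) analysis step: weight each ensemble
# member by its likelihood given the observations, then resample so that
# high-weight members are duplicated and low-weight members dropped.
# Gaussian observation errors assumed; illustrative only.
import numpy as np

def importance_resampling(ensemble, y, H, R, rng=np.random.default_rng(0)):
    # ensemble: (N, n) array of N members; y: (m,) observations;
    # H: (m, n) linear observation operator; R: (m, m) obs-error covariance.
    innov = y - ensemble @ H.T                       # (N, m) innovations
    Rinv = np.linalg.inv(R)
    logw = -0.5 * np.einsum('ij,jk,ik->i', innov, Rinv, innov)
    w = np.exp(logw - logw.max())
    w /= w.sum()                                     # normalised weights
    # Systematic resampling: duplicate members in proportion to their weights.
    positions = (rng.random() + np.arange(len(w))) / len(w)
    idx = np.minimum(np.searchsorted(np.cumsum(w), positions), len(w) - 1)
    return ensemble[idx]

# Toy usage: 100 members of a 2-variable state, one observed component.
ens = np.random.default_rng(1).normal(size=(100, 2))
y = np.array([0.5])
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
analysis = importance_resampling(ens, y, H, R)
print(analysis.mean(axis=0))
```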
Abstract:
Nonlinear data assimilation is high on the agenda in all fields of the geosciences: with ever-increasing model resolution, the inclusion of more physical (and biological, etc.) processes, and more complex observation operators, the data-assimilation problem becomes more and more nonlinear. The suitability of particle filters to solve the nonlinear data-assimilation problem in high-dimensional geophysical problems will be discussed. Several existing and new schemes will be presented, and it is shown that at least one of them, the Equivalent-Weights Particle Filter, does indeed beat the curse of dimensionality and provides a way forward to solving the problem of nonlinear data assimilation in high-dimensional systems.
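The "curse of dimensionality" referred to here is the well-documented collapse of importance weights; a small synthetic experiment (not from the paper) makes the effect visible: with a fixed ensemble size, the largest normalised weight approaches 1 as the number of independent observed dimensions grows.

```python
# Illustrative only: weight collapse of a plain particle filter as the number
# of independent observed dimensions grows, with the particle number N fixed.
import numpy as np

rng = np.random.default_rng(0)
N = 1000                                   # number of particles
for d in (1, 10, 100, 1000):
    particles = rng.normal(size=(N, d))    # prior draws
    y = rng.normal(size=d)                 # synthetic observations, unit R
    logw = -0.5 * np.sum((particles - y) ** 2, axis=1)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    print(f"d={d:4d}  max weight={w.max():.3f}  "
          f"effective ensemble size={1.0 / np.sum(w ** 2):.1f}")
```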
Abstract:
1. Comparative analyses are used to address the key question of what makes a species more prone to extinction by exploring the links between vulnerability and intrinsic species’ traits and/or extrinsic factors. This approach requires comprehensive species data, but information is rarely available for all species of interest. As a result, comparative analyses often rely on subsets of relatively few species that are assumed to be representative samples of the overall studied group. 2. Our study challenges this assumption and quantifies the taxonomic, spatial and data-type biases associated with the quantity of data available for 5415 mammalian species using the freely available life-history database PanTHERIA. 3. Moreover, we explore how existing biases influence the results of comparative analyses of extinction risk by using subsets of data that attempt to correct for the detected biases. In particular, we focus on the links between four species’ traits commonly linked to vulnerability (distribution range area, adult body mass, population density and gestation length) and conduct univariate and multivariate analyses to understand how biases affect model predictions. 4. Our results show important biases in data availability, with c. 22% of mammals completely lacking data. Missing data, which appear not to be missing at random, occur frequently for all traits (14–99% of cases missing). Data availability is explained by intrinsic traits, with larger mammals occupying bigger range areas being the best studied. Importantly, we find that existing biases affect the results of comparative analyses by overestimating the risk of extinction and changing which traits are identified as important predictors. 5. Our results raise concerns over our ability to draw general conclusions regarding what makes a species more prone to extinction. Missing data represent a prevalent problem in comparative analyses and, unfortunately, because data are not missing at random, conventional approaches to filling data gaps are either not valid or present important challenges. These results show the importance of making appropriate inferences from comparative analyses by focusing on the subset of species for which data are available. Ultimately, addressing the data-bias problem requires greater investment in data collection and dissemination, as well as the development of methodological approaches to effectively correct existing biases.
Abstract:
In this work, we prove a weak Noether-type theorem for a class of variational problems that admit broken extremals. We use this result to prove discrete Noether-type conservation laws for a conforming finite element discretisation of a model elliptic problem. In addition, we study how well the finite element scheme satisfies the continuous conservation laws arising from the application of Noether’s first theorem (1918). We summarise extensive numerical tests illustrating the conservation of the discrete Noether law, using the p-Laplacian as an example, and derive a geometric-based adaptive algorithm in which an appropriate Noether quantity is the goal functional.
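As one concrete instance of the continuous conservation laws referred to here: for the source-free p-Dirichlet energy, invariance under spatial translations yields, via Noether's first theorem, a divergence-free energy–momentum tensor for smooth extremals (the textbook form, not necessarily the exact Noether quantity used in the paper).

```latex
E(u) = \int_{\Omega} \tfrac{1}{p}\,|\nabla u|^{p}\,\mathrm{d}x
\quad\Longrightarrow\quad
\partial_i\Bigl(\tfrac{1}{p}\,|\nabla u|^{p}\,\delta_{ij}
                - |\nabla u|^{p-2}\,\partial_i u\,\partial_j u\Bigr) = 0,
\qquad j = 1,\dots,n .
```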
Abstract:
Trust and reputation are important factors that influence the success both of traditional transactions in physical social networks and of modern e-commerce in virtual Internet environments. It is difficult to define the concept of trust and to quantify it because trust has both subjective and objective characteristics at the same time. A well-reported issue with reputation management systems in business-to-consumer (BtoC) e-commerce is the “all good reputation” problem. To deal with this problem, a new computational model of reputation is proposed in this paper. The ratings of each customer are treated as basic trust-score events, and the time series of massive ratings is aggregated into the sellers’ local temporal trust scores using the Beta distribution. A logical model of trust and reputation is established based on an analysis of the dynamical relationship between trust and reputation. For a single good with repeat transactions, an iterative mathematical model of trust and reputation is established with a closed-loop feedback mechanism. Numerical experiments on repeated transactions recorded over a period of 24 months are performed. The experimental results show that the proposed method provides guidance both for theoretical research into trust and reputation and for the practical design of reputation systems in BtoC e-commerce.
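As a hedged illustration of the Beta-distribution aggregation described in general terms above (the exponential forgetting factor and the uniform Beta(1, 1) prior are assumptions, not the paper's exact formulation):

```python
# Hypothetical sketch: aggregate a time series of binary ratings into a
# seller's local temporal trust score via a Beta distribution. Older ratings
# are down-weighted by an exponential forgetting factor; the decay rate and
# the Beta(1, 1) prior are illustrative choices, not the paper's exact model.

def beta_trust_score(ratings, decay=0.97):
    # ratings: list of (age_in_days, is_positive) pairs
    pos = sum(decay ** age for age, good in ratings if good)
    neg = sum(decay ** age for age, good in ratings if not good)
    # Posterior mean of Beta(pos + 1, neg + 1): a local trust score in [0, 1]
    return (pos + 1.0) / (pos + neg + 2.0)

history = [(1, True), (3, True), (10, False), (40, True), (200, False)]
print(f"local temporal trust score: {beta_trust_score(history):.3f}")
```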
Abstract:
The Team Formation Problem (TFP) has become a well-known problem in the OR literature over the last few years. In this problem, a group of individuals matching a required set of skills must be chosen so as to maximise one or several positive social attributes. Specifically, the aim of the current research is two-fold. First, two new dimensions of the TFP are added by considering multiple projects and fractions of people's dedication. This new problem is named the Multiple Team Formation Problem (MTFP). Second, an optimization model consisting of a quadratic objective function, linear constraints and integer variables is proposed for the problem. The optimization model is solved by three algorithms: a Constraint Programming approach provided by a commercial solver, a Local Search heuristic and a Variable Neighbourhood Search metaheuristic. These three algorithms constitute the first attempt to solve the MTFP, with the Variable Neighbourhood Search metaheuristic being the most efficient in almost all cases. Applications of this problem commonly appear in real-life situations, particularly with the current and ongoing development of social network analysis. Therefore, this work opens multiple paths for future research.
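For orientation, a generic Variable Neighbourhood Search skeleton of the kind mentioned above (the objective, the shaking operator and the local search below are placeholders, not the MTFP formulation itself):

```python
# Generic VNS skeleton: shake within the k-th neighbourhood, apply a local
# search, and move (resetting k) only on improvement. The toy objective and
# operators below are placeholders, not the MTFP model itself.
import random

def vns(initial, objective, shake, local_search, k_max=3, iters=100, seed=0):
    rng = random.Random(seed)
    best, best_val = initial, objective(initial)
    for _ in range(iters):
        k = 1
        while k <= k_max:
            candidate = local_search(shake(best, k, rng), objective)
            val = objective(candidate)
            if val < best_val:          # move: accept and restart from k = 1
                best, best_val, k = candidate, val, 1
            else:                       # or-not: enlarge the neighbourhood
                k += 1
    return best, best_val

def coord_descent(x, f):
    # First-improvement local search over unit coordinate moves.
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            for d in (-1, 1):
                y = list(x)
                y[i] += d
                if f(y) < f(x):
                    x, improved = y, True
    return x

# Toy usage: minimise a separable quadratic over integer vectors.
obj = lambda x: sum((xi - 3) ** 2 for xi in x)
shake = lambda x, k, rng: [xi + rng.randint(-k, k) for xi in x]
print(vns([0, 0, 0], obj, shake, coord_descent))
```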
Abstract:
Various authors have suggested that the gamma-ray burst (GRB) central engine is a rapidly rotating, strongly magnetized (~10^15–10^16 G) compact object. The strong magnetic field can accelerate and collimate the relativistic flow, and the rotation of the compact object can be the energy source of the GRB. The major problem in this scenario is the difficulty of finding an astrophysical mechanism for obtaining such intense fields. Whereas, in principle, a neutron star could maintain such strong fields, it is difficult to justify a scenario for their creation. If the compact object is a black hole, the problem is more difficult since, according to general relativity, it has "no hair" (i.e., no magnetic field). Schuster, Blackett, Pauli, and others have suggested that a rotating neutral body can create a magnetic field by non-minimal gravitational-electromagnetic coupling (NMGEC). The Schuster-Blackett form of NMGEC was obtained from Mikhail and Wanas's tetrad theory of gravitation (MW). We call the general theory NMGEC-MW. We investigate here the possible origin of the intense ~10^15–10^16 G magnetic fields in GRBs by NMGEC-MW. Whereas these fields are difficult to explain astrophysically, we find that they are easily explained by NMGEC-MW. It explains the origin of the ~10^15–10^16 G fields not only when the compact object is a neutron star, but also when it is a black hole.