68 results for Options (Finance) -- Mathematical models


Relevance: 100.00%

Abstract:

Different European institutions have developed mathematical models to propose maximum safe levels either for fortified foods or for dietary supplements. The objective of the present study was to compare and check the safety of these different maximum safe levels (MSL) by using a probabilistic risk assessment approach. The potential maximum nutritional intakes were estimated by taking into account all sources of intake (base diet, fortified foods and dietary supplements) and compared with the tolerable upper intake levels for vitamins and minerals. This approach simulated the consequences of both food fortification and supplementation in terms of food safety. Several scenarios were tested, each combining MSL obtained from the different models. The study was based on the second French Individual and National Study on Food Consumption performed in 2006–7, matched with the French food nutritional composition database. The analyses were based on a sample of 1918 adults aged 18–79 years. Some MSL for fortified foods and dietary supplements, although derived independently, were sufficiently protective, whereas others could lead to nutritional intakes above the tolerable upper intake levels. The simulation showed that it is crucial to consider the inter-individual variability of fortified food intakes when setting MSL for foods and supplements. The risk assessment approach developed here, integrating the MSL for fortified foods and dietary supplements, is useful for ensuring consumer protection. It may subsequently be used to test any other MSL for vitamins and minerals proposed in the future.
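A minimal sketch of the probabilistic idea described above, assuming illustrative intake distributions and a hypothetical tolerable upper intake level; the actual study used observed intakes from the French consumption survey rather than these invented numbers.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # simulated individuals

# Hypothetical intake distributions for one nutrient (mg/day); the real study
# drew these from the INCA2 survey matched to a food composition database.
base_diet  = rng.lognormal(mean=1.0, sigma=0.4, size=n)
fortified  = rng.lognormal(mean=0.5, sigma=0.8, size=n)  # high inter-individual variability
supplement = rng.binomial(1, 0.3, size=n) * 5.0          # 30% take a supplement at a 5 mg MSL

total_intake = base_diet + fortified + supplement
UL = 15.0  # hypothetical tolerable upper intake level

print(f"Fraction of simulated adults above the UL: {np.mean(total_intake > UL):.3%}")
```

A scenario test in this setting amounts to rerunning the simulation with different candidate MSL values and checking how the exceedance fraction responds.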

Relevance: 100.00%

Abstract:

Any attempt to model an economy requires foundational assumptions about the relations between prices, values and the distribution of wealth. These assumptions exert a profound influence over the results of any model. Unfortunately, there are few areas in economics as vexed as the theory of value. I argue in this paper that the fundamental problem with past theories of value is that it is simply not possible to model the determination of value, the formation of prices and the distribution of income in a real economy with analytic mathematical models. All such attempts leave out crucial processes or make unrealistic assumptions which significantly affect the results.

There have been two primary approaches to the theory of value. The first, associated with classical economists such as Ricardo and Marx, comprised substance theories of value, which view value as a substance inherent in an object and conserved in exchange. For Marxists, the value of a commodity derives solely from the value of the labour power used to produce it, and therefore any profit is due to the exploitation of the workers. The labour theory of value has been discredited because of its assumption that labour was the only 'factor' that contributed to the creation of value, and because of its fundamentally circular argument. Neoclassical theorists argued that price was identical with value and was determined purely by the interaction of supply and demand; value, then, was completely subjective. Returns to labour (wages) and capital (profits) were determined solely by their marginal contribution to production, so that each factor received its just reward by definition. Problems with the neoclassical approach include assumptions concerning representative agents, perfect competition, perfect and costless information and contract enforcement, complete markets for credit and risk, aggregate production functions, infinite and smooth substitution between factors, distribution according to marginal products, firms always operating on the production possibility frontier, firms' pricing decisions, the neglect of money and credit, and perfectly rational agents with infinite computational capacity.

Two areas are critical. First, the underappreciated Sonnenschein-Mantel-Debreu results showed that the foundational assumptions of the Walrasian general-equilibrium model imply arbitrary excess demand functions and therefore arbitrary equilibrium price sets. Second, in real economies there is no equilibrium, only continuous change. Equilibrium is never reached because of constant changes in preferences and tastes; technological and organisational innovations; discoveries of new resources and new markets; inaccurate and evolving expectations of businesses, consumers, governments and speculators; changing demand for credit; the entry and exit of firms; the birth, learning and death of citizens; changes in laws and government policies; imperfect information; generalised increasing returns to scale; random acts of impulse; weather and climate events; changes in disease patterns; and so on.

The problem is not the use of mathematical modelling, but the kind of mathematical modelling used. Agent-based models (ABMs), object-oriented programming and greatly increased computer power, however, are opening up a new frontier. Here a dynamic bargaining ABM is outlined as a basis for an alternative theory of value. A large but finite number of heterogeneous commodities and agents with differing degrees of market power are set in a spatial network. Returns to buyers and sellers are decided at each step in the value chain, and in each factor market, through the process of bargaining. Market power, and its potential abuse against the poor and vulnerable, is fundamental to how the bargaining dynamics play out. Ethics therefore lie at the very heart of economic analysis, the determination of prices and the distribution of wealth. The neoclassicals are right, then, that price is the enumeration of value at a particular time and place, but wrong to downplay the critical roles of bargaining, power and ethics in determining those same prices.
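A minimal sketch of a bargaining step of the kind the abstract outlines, with hypothetical agents, a single commodity, and a simple rule in which relative market power tilts the split of the gains from trade; the names and parameters are illustrative, not the author's model.

```python
import random

random.seed(1)

class Agent:
    def __init__(self, name, reservation_price, power):
        self.name = name
        self.reservation_price = reservation_price  # max (buyer) or min (seller) acceptable price
        self.power = power                          # bargaining power in (0, 1]

def bargain(buyer, seller):
    """One bilateral bargaining step: trade happens only if there is a surplus,
    and the price splits that surplus in proportion to relative market power."""
    surplus = buyer.reservation_price - seller.reservation_price
    if surplus <= 0:
        return None  # no mutually beneficial trade
    share = buyer.power / (buyer.power + seller.power)  # buyer's share of the surplus
    return buyer.reservation_price - share * surplus

buyers  = [Agent(f"B{i}", random.uniform(8, 12), random.uniform(0.1, 1)) for i in range(5)]
sellers = [Agent(f"S{i}", random.uniform(5, 10), random.uniform(0.1, 1)) for i in range(5)]

for b, s in zip(buyers, sellers):
    p = bargain(b, s)
    if p is not None:
        print(f"{b.name} buys from {s.name} at price {p:.2f}")
```

Note that prices here emerge pairwise from power-weighted bargaining rather than from a market-clearing equilibrium, which is the contrast the abstract draws with the neoclassical account.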

Relevance: 100.00%

Abstract:

A key component of many decision making processes is the aggregation step, whereby a set of numbers is summarised with a single representative value. This research showed that aggregation functions can provide a mathematical formalism to deal with issues like vagueness and uncertainty, which arise naturally in various decision contexts.
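As a concrete illustration (not drawn from the thesis itself), the sketch below implements two standard aggregation functions, a weighted arithmetic mean and an ordered weighted averaging (OWA) operator, which are typical of the formalism described.

```python
def weighted_mean(values, weights):
    """Weighted arithmetic mean: weights are nonnegative and sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * v for w, v in zip(weights, values))

def owa(values, weights):
    """Ordered weighted averaging: weights attach to ranks, not to inputs,
    so the same operator can express attitudes like 'most' or 'at least a few'."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

scores = [0.9, 0.4, 0.7]
print(weighted_mean(scores, [0.5, 0.2, 0.3]))  # importance-weighted summary
print(owa(scores, [0.0, 0.0, 1.0]))            # pure 'min': fully pessimistic aggregation
```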

Relevance: 100.00%

Abstract:

We study theoretically the dynamics of film thinning under the action of an attractive surface force near the point of a jump instability. Our approach is illustrated by modeling van der Waals and hydrophobic attractive forces. The main result is that with the hydrophobic force law reported previously it is often impossible to establish the jump separation with any certainty. The surfaces instead approach slowly from a distance which is much larger than the point where an actual jump is expected. We conclude that an attractive force measured by the static jump technique is overestimated, and we formulate principles of a new dynamic jump method. The use of this new technique would permit direct measurements of attractive forces at separations below the static jump distance down to contact of the surfaces.
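The paper's model equations are not given in the abstract; the sketch below integrates a generic overdamped film-thinning equation for a sphere-plane geometry (Taylor drainage) under an assumed van der Waals plus exponential "hydrophobic" attraction, only to illustrate the slow approach from large separations. All constants are illustrative.

```python
import numpy as np

# Illustrative constants (SI units); not taken from the paper.
A   = 1e-20    # Hamaker constant, J
R   = 1e-2     # effective radius, m
mu  = 1e-3     # water viscosity, Pa s
C   = 1e-3     # hypothetical hydrophobic force amplitude, N/m
lam = 10e-9    # hypothetical hydrophobic decay length, m

def attractive_force(h):
    """Magnitude of the assumed attraction: van der Waals + exponential term."""
    return A * R / (6 * h**2) + C * R * np.exp(-h / lam)

h, t = 100e-9, 0.0   # start at 100 nm separation
while h > 1e-9:
    # Overdamped Taylor drainage: 6*pi*mu*R^2/h * dh/dt = -F_att(h)
    dhdt = -h * attractive_force(h) / (6 * np.pi * mu * R**2)
    dt = 0.001 * h / abs(dhdt)   # adaptive step: shrink h by ~0.1% per step
    h += dhdt * dt
    t += dt
print(f"time to thin from 100 nm to 1 nm: {t:.3g} s")
```

The long plateau at large separations in such a simulation mirrors the paper's observation that the surfaces approach slowly from a distance well outside the expected jump point.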

Relevance: 100.00%

Abstract:

Carbon fibre reinforced polymer (CFRP) is frequently used to retrofit concrete structures. Strengthening efficiency is related to the CFRP application process and the characteristics of the bonding agent. In this paper the mechanism of interface shear behaviour in CFRP-strengthened concrete beams is discussed in the light of previous test observations and mathematical models. The paper then discusses the consequences of introducing interface slip, which reduces the integrity of the composite section but improves ductility and delays debonding failure. It suggests that using a softer bonding agent, as well as setting limits on the interface slip, could ensure acceptable serviceability and ductile behaviour.
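The abstract does not give the interface model's equations; a common choice in the CFRP-concrete literature is a bilinear local bond-slip law, sketched below with hypothetical parameters to show how a softer adhesive (larger slip at peak stress) changes the response.

```python
def bilinear_bond_slip(s, tau_max=5.0, s1=0.05, s_f=0.25):
    """Bilinear local bond-slip law tau(s), in MPa and mm: linear ascent to
    (s1, tau_max), then linear softening to zero stress at the slip s_f."""
    if s <= s1:
        return tau_max * s / s1
    if s <= s_f:
        return tau_max * (s_f - s) / (s_f - s1)
    return 0.0  # fully debonded

# A 'softer' bonding agent can be represented by a larger s1 (lower stiffness):
for s in [0.02, 0.05, 0.15, 0.30]:
    stiff = bilinear_bond_slip(s, s1=0.05)
    soft  = bilinear_bond_slip(s, s1=0.10)
    print(f"slip={s:.2f} mm  stiff adhesive: {stiff:.2f} MPa  soft adhesive: {soft:.2f} MPa")
```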

Relevance: 100.00%

Abstract:

We classify all the different kinds of errors that can occur in edge detection and then develop measures for quantifying these errors. It is shown that these sets of measures are complete and independent and form necessary components of an edge-evaluation scheme. The principle that an edge-evaluation measure should have certain qualitative properties is used to develop a method for combining these error components into a single combined measure.
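The abstract does not enumerate the error types; as a hedged illustration, the sketch below computes three commonly separated components (missed edge pixels, spurious edge pixels, localisation offset) and combines them with a weighted sum, one simple way of building a single measure of the kind discussed.

```python
import numpy as np

def edge_errors(detected, truth, weights=(1.0, 1.0, 0.5)):
    """detected, truth: boolean 2-D arrays of edge pixels. Returns the
    (missed, spurious, offset) components and an illustrative combined
    score (lower is better); components and weights are hypothetical."""
    missed   = np.mean(truth & ~detected)   # ground-truth edges not found
    spurious = np.mean(detected & ~truth)   # detections with no true edge
    ty, tx = np.nonzero(truth)
    dy, dx = np.nonzero(detected)
    if len(dy) and len(ty):
        # mean distance from each detected pixel to the nearest true edge pixel
        d = np.sqrt((dy[:, None] - ty) ** 2 + (dx[:, None] - tx) ** 2)
        offset = d.min(axis=1).mean()
    else:
        offset = 0.0
    combined = weights[0] * missed + weights[1] * spurious + weights[2] * offset
    return missed, spurious, offset, combined

truth    = np.zeros((8, 8), bool); truth[4, :] = True
detected = np.zeros((8, 8), bool); detected[5, :6] = True   # shifted, partly missing
print(edge_errors(detected, truth))
```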

Relevance: 100.00%

Abstract:

In this paper, we present a method for recognising an agent's behaviour in dynamic, noisy, uncertain domains, and across multiple levels of abstraction. We term this problem on-line plan recognition under uncertainty and view it generally as probabilistic inference on the stochastic process representing the execution of the agent's plan. Our contributions in this paper are twofold. In terms of probabilistic inference, we introduce the Abstract Hidden Markov Model (AHMM), a novel type of stochastic process, provide its dynamic Bayesian network (DBN) structure and analyse the properties of this network. We then describe an application of the Rao-Blackwellised Particle Filter to the AHMM which allows us to construct an efficient, hybrid inference method for this model. In terms of plan recognition, we propose a novel plan recognition framework based on the AHMM as the plan execution model. The Rao-Blackwellised hybrid inference for the AHMM can take advantage of the independence properties inherent in a model of plan execution, leading to an algorithm for online probabilistic plan recognition that scales well with the number of levels in the plan hierarchy. This illustrates that while stochastic models for plan execution can be complex, they exhibit special structures which, if exploited, can lead to efficient plan recognition algorithms. We demonstrate the usefulness of the AHMM framework via a behaviour recognition system in a complex spatial environment using distributed video surveillance data.
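The full AHMM is beyond an abstract-sized sketch, but the Rao-Blackwellisation idea can be shown on a two-level toy model (assumed here, not the authors' construction): particles sample the top-level "policy" chain, while the lower-level state is filtered exactly, conditioned on each particle.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-level model (all matrices hypothetical):
Pz = np.array([[0.95, 0.05], [0.05, 0.95]])   # top-level policy transitions
Px = np.array([                                # state transitions given policy z
    [[0.9, 0.1], [0.2, 0.8]],
    [[0.5, 0.5], [0.5, 0.5]],
])
Py = np.array([[0.8, 0.2], [0.3, 0.7]])       # P(observation | state)

def rbpf(observations, n_particles=200):
    z = rng.integers(0, 2, n_particles)        # sampled top-level policies
    alpha = np.full((n_particles, 2), 0.5)     # exact filtered belief over x, per particle
    w = np.full(n_particles, 1.0 / n_particles)
    for y in observations:
        z = np.array([rng.choice(2, p=Pz[zi]) for zi in z])   # sample z_t | z_{t-1}
        alpha = np.einsum('ni,nij->nj', alpha, Px[z])         # exact x prediction given z_t
        like = alpha @ Py[:, y]                               # p(y_t | z_{1:t}) per particle
        alpha = alpha * Py[:, y] / like[:, None]              # exact x update
        w = w * like; w /= w.sum()                            # reweight particles
        idx = rng.choice(n_particles, n_particles, p=w)       # resample
        z, alpha, w = z[idx], alpha[idx], np.full(n_particles, 1.0 / n_particles)
    return np.bincount(z, minlength=2) / n_particles          # posterior over current policy

print(rbpf([0, 0, 1, 1, 1, 0]))
```

The gain is the one the abstract describes: only the top-level variables are sampled, while the rest of the belief state is propagated analytically, so accuracy does not collapse as the hierarchy deepens.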

Relevance: 100.00%

Abstract:

This paper presents a model for space in which an autonomous agent acquires information about its environment. The agent uses a predefined exploration strategy to build a map allowing it to navigate and deduce relationships between points in space. The shapes of objects in the environment are represented qualitatively. This shape information is deduced from the agent's motion. Normally, in a qualitative model, directional information degrades under transitive deduction. By reasoning about the shape of the environment, the agent can match visual events to points on the objects. This strengthens the model by allowing further relationships to be deduced. In particular, points that are separated by long distances, or complex surfaces, can be related by line-of-sight. These relationships are deduced without incorporating any metric information into the model. Examples are given to demonstrate the use of the model.
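As a hedged illustration of how directional information degrades under transitive deduction (the composition table below is a simplified, hypothetical one, not the paper's model), composing qualitative direction relations typically yields a set of possible directions rather than a single one:

```python
# Hypothetical composition table for qualitative direction relations:
# if B is d1 of A and C is d2 of B, COMPOSE[(d1, d2)] lists where C may lie relative to A.
COMPOSE = {
    ('N', 'N'): {'N'},                 # no degradation
    ('N', 'E'): {'N', 'NE', 'E'},      # ambiguity appears
    ('N', 'S'): {'N', 'SAME', 'S'},    # without metric distance, almost everything is lost
    ('E', 'E'): {'E'},
    ('E', 'N'): {'E', 'NE', 'N'},
}

def compose(d1, d2):
    """One transitive deduction step: returns the set of possible directions."""
    return COMPOSE[(d1, d2)]

print(compose('N', 'E'))   # {'N', 'NE', 'E'} -- weaker than either input
```

The paper's contribution is to counter exactly this degradation: matching visual events to points on object shapes adds line-of-sight relationships that tighten the model again, without metric information.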

Relevance: 100.00%

Abstract:

In multiagent systems, an agent does not usually have complete information about the preferences and decision making processes of other agents. This might prevent the agents from making coordinated choices, purely due to their ignorance of what others want. This paper describes the integration of a learning module into a communication-intensive negotiating agent architecture. The learning module gives the agents the ability to learn about other agents' preferences via past interactions. Over time, the agents can incrementally update their models of other agents' preferences and use them to make better coordinated decisions. Combining communication and learning, as two complementary knowledge acquisition methods, helps to reduce the amount of communication needed on average, and is justified in situations where communication is computationally costly or simply not desirable (e.g. to preserve individual privacy).
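A minimal sketch, with hypothetical data structures, of the incremental idea: each agent keeps a running estimate of another agent's preferences, updates it after every interaction, and can then coordinate a later choice without further communication.

```python
from collections import defaultdict

class PreferenceModel:
    """Running estimate of another agent's preference for each option,
    updated incrementally from ratings observed in past interactions."""
    def __init__(self):
        self.mean = defaultdict(float)
        self.count = defaultdict(int)

    def update(self, option, observed_rating):
        self.count[option] += 1
        n = self.count[option]
        self.mean[option] += (observed_rating - self.mean[option]) / n  # incremental mean

    def estimate(self, option, prior=0.5):
        return self.mean[option] if self.count[option] else prior

model_of_other = PreferenceModel()
for option, rating in [("A", 0.9), ("B", 0.2), ("A", 0.7)]:   # past interactions
    model_of_other.update(option, rating)

my_prefs = {"A": 0.6, "B": 0.8}
# Coordinated choice: maximise estimated joint utility instead of asking.
best = max(my_prefs, key=lambda o: my_prefs[o] + model_of_other.estimate(o))
print(best)   # "A": 0.6 + 0.8 = 1.4 beats "B": 0.8 + 0.2 = 1.0
```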

Relevance: 100.00%

Abstract:

This paper presents an efficient evaluation algorithm for cross-validating the two-stage approach of KFD classifiers. The proposed algorithm is of the same complexity level as the existing indirect efficient cross-validation methods, but it is more reliable since it is direct and constitutes exact cross-validation for the KFD classifier formulation. Simulations demonstrate that the proposed algorithm is almost as fast as the existing fast indirect evaluation algorithm, and the two-stage cross-validation selects better models on most of the thirteen benchmark data sets.
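The paper's exact algorithm is not reproduced in the abstract; as a related, hedged illustration, KFD training can be cast as kernel ridge regression on ±1 labels, for which the hat-matrix identity gives exact leave-one-out residuals from a single matrix inversion instead of n retrainings.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def loo_residuals_kernel_ridge(K, y, lam=1e-2):
    """Exact leave-one-out residuals for kernel ridge regression on +/-1
    labels (a formulation closely related to KFD):
    e_i = (y_i - yhat_i) / (1 - H_ii), with hat matrix H = K (K + lam I)^-1."""
    n = len(y)
    H = K @ np.linalg.inv(K + lam * np.eye(n))   # yhat = H y
    yhat = H @ y
    return (y - yhat) / (1.0 - np.diag(H))

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
e = loo_residuals_kernel_ridge(rbf_kernel(X), y)
loo_error_rate = np.mean(np.sign(y - e) != y)    # LOO prediction: yhat_i = y_i - e_i
print(loo_error_rate)
```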

Relevance: 100.00%

Abstract:

We present a novel technique for the recognition of complex human gestures for video annotation using accelerometers and the hidden Markov model. Our extension to the standard hidden Markov model allows us to consider gestures at different levels of abstraction through a hierarchy of hidden states. Accelerometers in the form of wrist bands are attached to humans, such as sports umpires, performing intentional gestures. Video annotation is then performed by populating the video with time stamps indicating significant events, where a particular gesture occurs. The novelty of the technique lies in the development of a probabilistic hierarchical framework for complex gesture recognition and the use of accelerometers to extract gestures and significant events for video annotation.
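A minimal sketch of the annotation step, under assumed data structures (not the paper's hierarchical HMM): a trained recogniser is slid over windows of accelerometer samples, and video time stamps are emitted wherever a gesture is detected.

```python
import numpy as np

def extract_features(window):
    """Simple per-window features from a (n_samples, 3) accelerometer window."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

def annotate(signal, timestamps, classify, win=50, hop=25):
    """Slide a window over the wrist accelerometer stream; whenever the
    classifier recognises a gesture, emit (video time stamp, gesture label)."""
    annotations = []
    for start in range(0, len(signal) - win + 1, hop):
        label = classify(extract_features(signal[start:start + win]))
        if label != "no_gesture":
            annotations.append((timestamps[start], label))
    return annotations

# Hypothetical stand-in for a trained (e.g. hierarchical-HMM-based) recogniser:
def classify(feat):
    return "umpire_signal" if feat[3:].max() > 1.5 else "no_gesture"

t = np.arange(1000) / 100.0                        # 100 Hz time stamps, seconds
acc = np.random.default_rng(0).normal(size=(1000, 3))
acc[400:450] *= 4                                   # a burst of motion: the 'gesture'
print(annotate(acc, t, classify))
```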

Relevance: 100.00%

Abstract:

This paper deals with the problem of structuralizing education and training videos for high-level semantics extraction and nonlinear media presentation in e-learning applications. Drawing guidance from production knowledge in instructional media, we propose six main narrative structures employed in education and training videos for both motivation and demonstration during learning and practical training. We devise a powerful audiovisual feature set, accompanied by a hierarchical decision tree-based classification system to determine and discriminate between these structures. Based on a two-tiered hierarchical model, we demonstrate that we can achieve an accuracy of 84.7% on a comprehensive set of education and training video data.
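A hedged sketch of the classification stage using scikit-learn; the feature names and labels below are invented placeholders, not the paper's actual feature set or tree hierarchy.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical audiovisual features per video segment:
# [speech_ratio, music_ratio, face_count, cut_rate, text_overlay_ratio]
X = rng.random((300, 5))
# Synthetic labels derived from two features, standing in for a few narrative
# structures, so the tree has real structure to recover in this demo.
y = (X[:, 0] > 0.5).astype(int) + 2 * (X[:, 3] > 0.5)

# One decision tree discriminates between structures; the paper arranges such
# decisions in a two-tiered hierarchy rather than a single flat tree.
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X[:200], y[:200])
print("held-out accuracy:", clf.score(X[200:], y[200:]))
```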

Relevance: 100.00%

Abstract:

Gait classification is a developing research area, particularly with regard to biometrics. It aims to use the distinctive spatial and temporal characteristics of human motion to classify differing activities. As a biometric, this extends to recognising different people by the heterogeneous aspects of their gait. This research aims to use a modified deformable model, the temporal PDM, to distinguish the movements of a walking person from those of a miming person. The movement of 2D points on the moving form provides the input to the model and is used to classify the type of gait present.
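A point distribution model (PDM) is built by PCA over aligned landmark coordinates; the sketch below shows one assumed reading of the temporal variant's core idea (stacking 2D point positions across a short window of frames before the PCA), with synthetic stand-in data.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_pdm(sequences, n_modes=3):
    """Temporal PDM: each training vector stacks the (x, y) coordinates of all
    tracked points over a window of frames; PCA then captures the dominant
    modes of spatio-temporal variation."""
    X = np.array([seq.reshape(-1) for seq in sequences])   # (n_samples, frames*points*2)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_modes]                              # mean shape + main modes

def project(seq, mean, modes):
    return modes @ (seq.reshape(-1) - mean)                # coordinates in shape space

# Hypothetical training windows: 10 frames x 8 body points x 2 coordinates.
walk = [rng.normal(0, 1, (10, 8, 2)) + np.linspace(0, 2, 10)[:, None, None] for _ in range(20)]
mime = [rng.normal(0, 1, (10, 8, 2)) for _ in range(20)]   # miming: no net translation

mean, modes = build_pdm(walk + mime)
# Classify a new window by nearest class mean in the low-dimensional shape space.
walk_mean = np.mean([project(s, mean, modes) for s in walk], axis=0)
mime_mean = np.mean([project(s, mean, modes) for s in mime], axis=0)
test = project(walk[0], mean, modes)
print("walk" if np.linalg.norm(test - walk_mean) < np.linalg.norm(test - mime_mean) else "mime")
```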

Relevance: 100.00%

Abstract:

Background elimination models are widely used in motion tracking systems. Our aim is to develop a system that performs reliably under adverse lighting conditions. In particular, this includes indoor scenes lit partly or entirely by diffuse natural light. We present a modified "median value" model in which the detection threshold adapts to global changes in illumination. The responses of several models are compared, demonstrating the effectiveness of the new model.
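A hedged sketch of the idea (the paper's exact update rules are not in the abstract): an approximate per-pixel running median, maintained by an increment/decrement estimator, with a foreground threshold that rescales with global changes in illumination.

```python
import numpy as np

class AdaptiveMedianBackground:
    """Approximate per-pixel running median (increment/decrement estimator)
    with a detection threshold that adapts to global illumination change."""
    def __init__(self, first_frame, base_threshold=20.0):
        self.bg = first_frame.astype(np.float32)
        self.base_threshold = base_threshold
        self.ref_brightness = first_frame.mean()

    def apply(self, frame):
        frame = frame.astype(np.float32)
        # Global illumination factor: rescale the threshold, not the image.
        k = frame.mean() / self.ref_brightness
        mask = np.abs(frame - self.bg) > self.base_threshold * k
        # Running-median update: step each background pixel toward the frame.
        self.bg += np.sign(frame - self.bg)
        return mask

rng = np.random.default_rng(0)
model = AdaptiveMedianBackground(rng.integers(0, 255, (120, 160)).astype(np.uint8))
frame = rng.integers(0, 255, (120, 160)).astype(np.uint8)
print(model.apply(frame).mean())   # fraction of pixels flagged as foreground
```

Scaling the threshold by the global brightness ratio is one simple way to keep diffuse natural-light changes, which affect the whole frame at once, from being flagged as motion.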

Relevance: 100.00%

Abstract:

Voltammetric behavior at gold electrodes in aqueous media is known to be strongly dependent on electrode polishing and history. In this study, an electrode array consisting of 100 nominally identical and individually addressable gold disk electrodes, each with a radius of 127 µm, has been fabricated. The ability to analyze both individual electrode and total array performance enables microscopic aspects of the overall voltammetric response arising from variable levels of inhomogeneity in each electrode to be identified. The array configuration was initially employed with the reversible and hence relatively surface-insensitive [Ru(NH3)6]3+/2+ reaction and then with the more highly surface-sensitive quasi-reversible [Fe(CN)6]3−/4− process. In both these cases, the reactants and products are solution soluble and, at a scan rate of 50 mV s−1, each electrode in the array is assumed to behave independently, since no evidence of overlapping of the diffusion layers was detected. As would be expected, the variability of the individual electrodes' responses was significantly larger than found for the summed electrode behavior. In the case of cytochrome c voltammetry at a 4,4′-dipyridyl disulfide modified electrode, a far greater dependence on electrode history and electrode heterogeneity was detected. In this case, voltammograms derived from individual electrodes in the gold electrode array exhibit shape variations ranging from peak-shaped to sigmoidal. However, again the total response was always found to be well-defined. This voltammetry is consistent with a microscopic model of heterogeneity in which some parts of each chemically modified electrode surface are electroactive while other parts are less active. The findings are consistent with the common existence of electrode heterogeneity in cyclic voltammetric responses at gold electrodes, which is normally difficult to detect but fundamentally important, as electrode nonuniformity can give rise to kinetic and other subtle forms of dispersion.
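As a hedged numerical illustration of the heterogeneity argument (not the paper's data or model): summing reversible steady-state waves whose half-wave potentials are dispersed across an array yields a well-defined but broadened total response, which is how individual variability can hide inside a smooth summed voltammogram.

```python
import numpy as np

F_RT = 38.92                        # F/RT at 25 C, 1/V
E = np.linspace(-0.3, 0.3, 601)     # potential axis, V

def steady_state_wave(E_half, i_lim=1.0):
    """Reversible steady-state (sigmoidal) oxidation wave at one microdisk."""
    return i_lim / (1.0 + np.exp(-F_RT * (E - E_half)))

rng = np.random.default_rng(0)
# 100 electrodes with half-wave potentials dispersed by surface heterogeneity
E_halves = rng.normal(0.0, 0.03, 100)   # 30 mV standard deviation (illustrative)
total = sum(steady_state_wave(Eh) for Eh in E_halves)

# The summed wave is smooth and well-defined, but broader than any single wave:
def width_20_80(i):
    return E[np.searchsorted(i, 0.8 * i[-1])] - E[np.searchsorted(i, 0.2 * i[-1])]

print("single-electrode 20-80% width:", round(width_20_80(steady_state_wave(0.0)), 4), "V")
print("summed-array 20-80% width:   ", round(width_20_80(total), 4), "V")
```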