57 results for Convex infinite programming


Relevance:

20.00%

Publisher:

Abstract:

Should computer programming be taught within schools of architecture?

Incorporating even low-level computer programming within architectural education curricula is a matter of debate, but we have found it useful for two reasons: it serves as an introduction to, or at least a consolidation of, the realm of descriptive geometry, and it provides an environment for experimenting with morphological, time-based change.

Mathematics and descriptive geometry formed a significant proportion of architectural education until the end of the 19th century. This proportion has declined in contemporary curricula, possibly at some cost: despite major advances in automated manufacture, Cartesian measurement is still the principal ‘language’ with which to describe a building for construction purposes. When computer programming is used as a platform for instruction in logic and spatial representation, the waning interest in mathematics as a basis for spatial description can be readdressed using a left-field approach. Students gain insights into topology, Cartesian space and morphology through programmatic form finding, as opposed to through direct manipulation.

In this context, it matters more to the architect-programmer how the program operates than what it does. This paper describes an assignment in which students are given a figurative conceptual space comprising the three Cartesian axes with a cube at its centre. Six Phileban solids mark the axial limits of the space. Any point in this space represents a hybrid of one, two or three transformations from the central cube towards the various Phileban solids. Students are asked to predict the topological and morphological outcomes of these operations. Through programming, they become aware of morphogenesis and hybridisation. Here we articulate the hypothesis above and report on the outcome from a student group, whose work reveals wider learning opportunities in computer programming for architecture students than is conventionally assumed.
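
As a hedged illustration of this kind of programmatic form finding (the assignment's actual code is not given above), the short sketch below blends points sampled on a cube's surface towards a sphere by a single parameter; the choice of a sphere as the target solid, the sampling scheme and the function names are assumptions made purely for the example.

# Minimal sketch of programmatic form finding: blending a cube towards a sphere.
# The target solid, parameterisation and function names are illustrative assumptions.
import numpy as np

def cube_vertices(n=10):
    """Sample points on the surface of a unit cube centred at the origin."""
    u = np.linspace(-0.5, 0.5, n)
    faces = []
    for axis in range(3):
        for sign in (-0.5, 0.5):
            g1, g2 = np.meshgrid(u, u)
            face = np.zeros((n * n, 3))
            face[:, axis] = sign
            other = [a for a in range(3) if a != axis]
            face[:, other[0]] = g1.ravel()
            face[:, other[1]] = g2.ravel()
            faces.append(face)
    return np.vstack(faces)

def morph_towards_sphere(points, t, radius=0.5):
    """Interpolate each cube surface point towards its radial projection on a sphere.
    t = 0 returns the cube, t = 1 returns the sphere; intermediate t gives hybrids."""
    norms = np.linalg.norm(points, axis=1, keepdims=True)
    on_sphere = points / norms * radius
    return (1.0 - t) * points + t * on_sphere

if __name__ == "__main__":
    cube = cube_vertices()
    for t in (0.0, 0.5, 1.0):
        hybrid = morph_towards_sphere(cube, t)
        print(f"t={t:.1f}: mean distance from origin = {np.linalg.norm(hybrid, axis=1).mean():.3f}")

Running the sketch for t = 0, 0.5 and 1 shows the sampled surface migrating from the cube towards the sphere, the kind of prediction-versus-outcome exercise the assignment asks of students.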

Relevance:

20.00%

Publisher:

Abstract:

We examine the numerical performance of various methods for calculating the Conditional Value-at-Risk (CVaR) and for portfolio optimization with respect to this risk measure. We concentrate on the method proposed by Rockafellar and Uryasev (Rockafellar, R.T. and Uryasev, S., 2000, Optimization of conditional value-at-risk. Journal of Risk, 2, 21-41), which converts this problem to one of convex optimization. We compare linear programming techniques against the discrete gradient method, a non-smooth optimization technique, and establish the superiority of the latter. We show that non-smooth optimization can be used efficiently for large portfolio optimization problems, and we also examine parallel execution of this method on computer clusters.
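
The Rockafellar and Uryasev formulation can be expressed as a linear program over a finite set of return scenarios. The sketch below, a minimal illustration rather than the implementation studied in the abstract, sets up that linear program with scipy.optimize.linprog on synthetic scenario data; the confidence level, the scenario generator and the variable layout are assumptions made for the example.

# Minimal sketch of the Rockafellar-Uryasev CVaR-minimising portfolio as a linear program.
# Scenario data, confidence level and variable layout are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, n = 500, 4                           # number of return scenarios, number of assets
R = rng.normal(0.001, 0.02, (N, n))     # synthetic scenario returns
beta = 0.95                             # CVaR confidence level

# Decision vector z = [x (n portfolio weights), alpha (VaR level), u (N scenario excesses)]
c = np.concatenate([np.zeros(n), [1.0], np.full(N, 1.0 / ((1.0 - beta) * N))])

# Scenario constraints: -R_i . x - alpha - u_i <= 0, i.e. u_i >= loss_i - alpha
A_ub = np.hstack([-R, -np.ones((N, 1)), -np.eye(N)])
b_ub = np.zeros(N)

# Budget constraint: portfolio weights sum to one.
A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(N)]).reshape(1, -1)
b_eq = np.array([1.0])

# Long-only weights, free alpha, non-negative auxiliary variables.
bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * N

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
x, alpha = res.x[:n], res.x[n]
print("optimal weights:", np.round(x, 3), " VaR estimate:", round(alpha, 5), " CVaR:", round(res.fun, 5))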

Relevance:

20.00%

Publisher:

Abstract:

Richard Fung, a Toronto-based video artist and cultural critic, was born in Trinidad in 1954, and attended school in Ireland before immigrating to Canada to study at the University of Toronto. Richard Fung has taught at the Ontario College of Art and Design, and has been a visiting professor in the Department of Media Study at the State University of New York in Buffalo. He is currently the coordinator of the Centre for Media and Culture in Education, Ontario Institute for Studies in Education, University of Toronto.

Relevance:

20.00%

Publisher:

Abstract:

We discuss the problem of learning fuzzy measures from empirical data. Values of the discrete Choquet integral are fitted to the data in the least-absolute-deviation sense, and the resulting problem is solved by linear programming techniques. We consider cases in which the data are given on numerical and interval scales. We also present an open-source programming library that facilitates calculations involving fuzzy measures and their learning from data.
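
As a hedged sketch of the least-absolute-deviation fit mentioned above, the example below learns a fuzzy measure for just two criteria through the standard residual-splitting linear program, using scipy.optimize.linprog; the two-criterion restriction and the synthetic data are assumptions for illustration, and the code does not reproduce the open-source library referred to in the abstract.

# Minimal sketch: fitting a two-criterion fuzzy measure to data by least absolute
# deviation, posed as a linear program.  The data and the two-criterion restriction
# are illustrative assumptions; the general n-criterion case adds further
# monotonicity constraints over the lattice of subsets.
import numpy as np
from scipy.optimize import linprog

# Synthetic data: inputs (a, b) in [0,1]^2 and observed aggregated outputs y.
X = np.array([[0.2, 0.8], [0.9, 0.4], [0.5, 0.5], [0.1, 0.3], [0.7, 0.9]])
y = np.array([0.55, 0.65, 0.50, 0.22, 0.82])
m = len(y)

# Unknowns: z = [v1, v2, p_1..p_m, q_1..q_m], where v1 = v({1}), v2 = v({2}),
# v(empty) = 0, v({1,2}) = 1, and p_i - q_i is the signed residual for point i.
A_eq = np.zeros((m, 2 + 2 * m))
b_eq = np.zeros(m)
for i, (a, b) in enumerate(X):
    if a <= b:                       # Choquet integral: a + (b - a) * v({2})
        A_eq[i, 1] = b - a
        b_eq[i] = y[i] - a
    else:                            # Choquet integral: b + (a - b) * v({1})
        A_eq[i, 0] = a - b
        b_eq[i] = y[i] - b
    A_eq[i, 2 + i] = -1.0            # -p_i
    A_eq[i, 2 + m + i] = 1.0         # +q_i

c = np.concatenate([np.zeros(2), np.ones(2 * m)])      # minimise sum of |residuals|
bounds = [(0, 1), (0, 1)] + [(0, None)] * (2 * m)      # monotone measure, residuals >= 0

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("v({1}) =", round(res.x[0], 3), " v({2}) =", round(res.x[1], 3),
      " total absolute deviation =", round(res.fun, 4))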

Relevance:

20.00%

Publisher:

Abstract:

Despite a growing body of evidence pointing to the central role of negative emotional states in the offence process, there has been relatively little work, either theoretical or applied, investigating this area. This paper offers a review of the literature that has sought to investigate the association between negative emotion and offending. It is concluded that there are grounds to consider negative emotional states as important dynamic risk factors that should be addressed as part of any psychological intervention to reduce the risk of re-offending amongst forensic clients.

Relevance:

20.00%

Publisher:

Abstract:

Classification learning is dominated by systems which induce large numbers of small axis-orthogonal decision surfaces. This strongly biases such systems towards particular hypothesis types, but there is reason to believe that many domains have underlying concepts which do not involve axis-orthogonal surfaces. Further, the multiplicity of small decision regions militates against any holistic appreciation of the theories produced by these systems, notwithstanding the fact that many of the small regions are individually comprehensible.

This thesis investigates modelling concepts as large geometric structures in n-dimensional space. Convex hulls are a superset of the axis-orthogonal hyperrectangles into which axis-orthogonal systems partition the instance space. In consequence, there is reason to believe that convex hulls might provide a more flexible and general learning bias than axis-orthogonal regions. The formation of convex hulls around a group of points of the same class is shown to be a usable generalisation, and one more general than the generalisations produced by axis-orthogonal classifiers without constructive induction, such as decision trees, decision lists and rules. The use of a small number of large hulls as a concept representation is shown to provide classification performance which can be better than that of classifiers which use a large number of small fragmentary regions for each concept.

A convex hull based classifier, CH1, has been implemented and tested. CH1 can handle categorical and continuous data. Algorithms for two basic generalisation operations on hulls, inflation and facet deletion, are presented. The two operations are shown to improve the accuracy of the classifier and to provide moderate classification accuracy over a representative selection of typical, largely or wholly continuous-valued machine learning tasks. The classifier exhibits superior performance to well-known axis-orthogonal classifiers when presented with domains where the underlying decision surfaces are not axis-parallel. The strengths and weaknesses of the system are identified. One particular advantage is the ability of the system to model domains with approximately the same number of structures as there are underlying concepts. This opens the possibility of extracting higher-level mathematical descriptions of the induced concepts, using the techniques of computational geometry, which is not possible from a multiplicity of small regions.
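
CH1 itself is not reproduced in the abstract; the sketch below only illustrates the underlying idea of representing each class by a single convex hull, with membership tested through a Delaunay triangulation and a nearest-centroid fallback for ambiguous or exterior points. The class structure, the fallback rule and the continuous-only restriction are assumptions made for the illustration.

# Minimal sketch of classification with one convex hull per class, for continuous
# attributes only.  The membership test and the fallback rule are illustrative
# assumptions; CH1 also handles categorical data, hull inflation and facet
# deletion, which are not reproduced here.
import numpy as np
from scipy.spatial import Delaunay

class ConvexHullClassifier:
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.hulls_ = {}       # Delaunay triangulation of each class's points
        self.centroids_ = {}   # fallback when a point lies outside every hull
        for c in self.classes_:
            pts = X[y == c]
            self.hulls_[c] = Delaunay(pts)
            self.centroids_[c] = pts.mean(axis=0)
        return self

    def predict(self, X):
        preds = []
        for x in X:
            inside = [c for c in self.classes_
                      if self.hulls_[c].find_simplex(x) >= 0]
            if len(inside) == 1:
                preds.append(inside[0])
            else:
                # ambiguous (inside several hulls) or outside all hulls: nearest centroid
                cands = inside if inside else self.classes_
                preds.append(min(cands, key=lambda c: np.linalg.norm(x - self.centroids_[c])))
        return np.array(preds)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X0 = rng.normal([0, 0], 0.5, (50, 2))
    X1 = rng.normal([2, 2], 0.5, (50, 2))
    X = np.vstack([X0, X1]); y = np.array([0] * 50 + [1] * 50)
    clf = ConvexHullClassifier().fit(X, y)
    print("training accuracy:", (clf.predict(X) == y).mean())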

Relevance:

20.00%

Publisher:

Abstract:

The aim of the research is to investigate factors that may explain success in elementary computer programming at the tertiary level. The first phase of the research included the identification of possible explanatory factors through a literature review, a survey of students studying introductory computing, a focus-group session with teachers of computer programming and interviews with programming students. The second phase of the research, the main study, involved testing the identified factors. Two different groups of programming students - one group majoring in business computing and another majoring in computer science - completed a survey questionnaire. The findings of the research are as follows. Gender is of little significance for business students, but there is a gender penalty for females in computer science. Secondary school assessment is inversely related to outcomes in business computing but directly influences outcomes in the first programming unit of the computer science course. As in prior research, previous knowledge and experience were demonstrated to matter. A range of other variables was found to be of little importance. The research suggests that the problem-solving techniques relevant in business computing may differ from those of use in computer science.

Relevance:

20.00%

Publisher:

Abstract:

This article examines the construction of aggregation functions from data by minimizing the least-absolute-deviation criterion. We formulate various instances of this problem as linear programming problems. We consider cases in which the data are provided as intervals and in which the ordering of the outputs needs to be preserved, and show that the linear programming formulation remains valid in these cases. This feature is very valuable in practice, since the standard simplex method can be used.
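
As a hedged illustration of the formulation above, the sketch below fits the weights of a weighted arithmetic mean, one simple aggregation function, by least absolute deviation as a linear program, and adds one inequality row per required output-ordering pair; the data, the choice of aggregation function and the ordering pairs are assumptions made for the example.

# Minimal sketch: fitting a weighted arithmetic mean by least absolute deviation,
# with output-ordering preservation expressed as extra linear constraints.
# The data, the chosen aggregation function and the ordering pairs are
# illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

X = np.array([[0.2, 0.7, 0.5],
              [0.9, 0.1, 0.4],
              [0.6, 0.6, 0.6],
              [0.3, 0.8, 0.2],
              [0.5, 0.4, 0.9]])
y = np.array([0.50, 0.45, 0.60, 0.40, 0.62])
m, n = X.shape

# Unknowns: z = [w_1..w_n, p_1..p_m, q_1..q_m]; w are weights, p - q the residuals.
A_eq = np.hstack([X, -np.eye(m), np.eye(m)])                              # X w - p + q = y
A_eq = np.vstack([A_eq, np.concatenate([np.ones(n), np.zeros(2 * m)])])   # weights sum to one
b_eq = np.concatenate([y, [1.0]])

c = np.concatenate([np.zeros(n), np.ones(2 * m)])      # minimise sum of |residuals|
bounds = [(0, None)] * (n + 2 * m)                     # non-negative weights and residuals

# Output-ordering preservation: require f(x_i) <= f(x_j) for chosen pairs (i, j),
# i.e. (X[i] - X[j]) . w <= 0.
order_pairs = [(3, 0), (1, 2)]                         # assumed ordering requirements
A_ub = np.array([np.concatenate([X[i] - X[j], np.zeros(2 * m)]) for i, j in order_pairs])
b_ub = np.zeros(len(order_pairs))

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("fitted weights:", np.round(res.x[:n], 3), " total absolute deviation:", round(res.fun, 4))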

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a discrete-time sequential stochastic asset-selling problem with an infinite planning horizon, in which the selling process may reach a deadline at any point in time with a given probability. It is assumed that a quitting offer is available at every point in time and that search skipping is permitted. Thus, decisions must be made as to whether to accept the quitting offer, whether to accept an appearing buyer’s offer, and whether to conduct a search for a buyer. The main purpose of this paper is to clarify the properties of the optimal decision rules in relation to the model’s parameters.
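
The paper's exact model is not spelled out in the abstract; the sketch below is only a value-iteration illustration of an infinite-horizon selling problem with a per-period deadline probability, a standing quitting offer and a skippable, costly search, under dynamics and parameter values assumed for the example.

# Minimal value-iteration sketch of an infinite-horizon asset-selling problem with
# a per-period deadline probability, a standing quitting offer and skippable search.
# The dynamics, parameter values and payoff on reaching the deadline are
# illustrative assumptions, not the paper's exact model.
import numpy as np

offers = np.linspace(0.0, 1.0, 101)               # possible buyer offers
probs = np.full(offers.size, 1.0 / offers.size)   # uniform offer distribution (assumed)
q = 0.35             # quitting offer, available at every period
c = 0.02             # cost of conducting a search
surv = 0.95          # probability the deadline has NOT arrived by the next period
deadline_payoff = 0.0                             # assumed payoff if the deadline strikes

V = 0.0              # value of entering a period with the asset unsold
for _ in range(10_000):
    cont = surv * V + (1.0 - surv) * deadline_payoff        # value carried to next period
    wait = cont                                             # skip searching this period
    search = -c + np.sum(probs * np.maximum(offers, cont))  # search, then accept or continue
    V_new = max(q, wait, search)                            # best of quit / wait / search
    if abs(V_new - V) < 1e-10:
        break
    V = V_new

reservation = surv * V + (1.0 - surv) * deadline_payoff
print("value of holding the asset:", round(V, 4))
print("accept a buyer's offer iff it exceeds:", round(reservation, 4))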

Relevance:

20.00%

Publisher:

Abstract:

Any attempt to model an economy requires foundational assumptions about the relations between prices, values and the distribution of wealth. These assumptions exert a profound influence over the results of any model. Unfortunately, there are few areas in economics as vexed as the theory of value. I argue in this paper that the fundamental problem with past theories of value is that it is simply not possible to model the determination of value, the formation of prices and the distribution of income in a real economy with analytic mathematical models. All such attempts leave out crucial processes or make unrealistic assumptions which significantly affect the results.

There have been two primary approaches to the theory of value. The first, associated with classical economists such as Ricardo and Marx, comprises substance theories of value, which view value as a substance inherent in an object and conserved in exchange. For Marxists, the value of a commodity derives solely from the value of the labour power used to produce it, and therefore any profit is due to the exploitation of the workers. The labour theory of value has been discredited because of its assumption that labour was the only ‘factor’ that contributed to the creation of value, and because of its fundamentally circular argument. Neoclassical theorists argued that price was identical with value and was determined purely by the interaction of supply and demand; value, then, was completely subjective. Returns to labour (wages) and capital (profits) were determined solely by their marginal contribution to production, so that each factor received its just reward by definition. Problems with the neoclassical approach include assumptions concerning representative agents, perfect competition, perfect and costless information and contract enforcement, complete markets for credit and risk, aggregate production functions and infinite, smooth substitution between factors, distribution according to marginal products, firms always on the production possibility frontier and firms’ pricing decisions, the neglect of money and credit, and perfectly rational agents with infinite computational capacity.

Two critical areas stand out. The first is the underappreciated Sonnenschein-Mantel-Debreu results, which showed that the foundational assumptions of the Walrasian general-equilibrium model imply arbitrary excess demand functions and therefore arbitrary equilibrium price sets. The second is that in real economies there is no equilibrium, only continuous change. Equilibrium is never reached because of constant changes in preferences and tastes; technological and organisational innovations; discoveries of new resources and new markets; inaccurate and evolving expectations of businesses, consumers, governments and speculators; changing demand for credit; the entry and exit of firms; the birth, learning, and death of citizens; changes in laws and government policies; imperfect information; generalized increasing returns to scale; random acts of impulse; weather and climate events; changes in disease patterns, and so on.

The problem is not the use of mathematical modelling, but the kind of mathematical modelling used. Agent-based models (ABMs), object-oriented programming and greatly increased computer power, however, are opening up a new frontier. Here a dynamic bargaining ABM is outlined as a basis for an alternative theory of value. A large but finite number of heterogeneous commodities and agents with differing degrees of market power are set in a spatial network. Returns to buyers and sellers are decided at each step in the value chain, and in each factor market, through the process of bargaining. Market power and its potential abuse against the poor and vulnerable are fundamental to how the bargaining dynamics play out. Ethics therefore lie at the very heart of economic analysis, the determination of prices and the distribution of wealth. The neoclassicals are right, then, that price is the enumeration of value at a particular time and place, but wrong to downplay the critical roles of bargaining, power and ethics in determining those same prices.
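
The abstract only outlines the model, so the fragment below is a heavily simplified sketch of a single ingredient, a power-weighted split of the surplus in one bilateral bargain, included purely to make the mechanism concrete; the agents, the splitting rule and the parameter values are assumptions, not the model in the paper.

# Heavily simplified sketch of one ingredient of the bargaining ABM outlined above:
# a power-weighted split of the surplus in a single bilateral trade.  The agents,
# the surplus-splitting rule and the parameter values are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Agent:
    name: str
    power: float          # relative bargaining power (> 0)
    reservation: float    # seller's minimum / buyer's maximum acceptable price

def bargain(seller: Agent, buyer: Agent) -> Optional[float]:
    """Power-weighted split of the trading surplus; None if no mutually acceptable price."""
    surplus = buyer.reservation - seller.reservation
    if surplus <= 0:
        return None
    share = seller.power / (seller.power + buyer.power)   # stronger party keeps more surplus
    return seller.reservation + share * surplus

if __name__ == "__main__":
    smallholder = Agent("smallholder", power=0.2, reservation=1.0)
    wholesaler = Agent("wholesaler", power=0.8, reservation=2.0)
    # The weak seller captures little of the surplus: the agreed price sits near 1.2.
    print("agreed price:", bargain(smallholder, wholesaler))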