959 results for Linear program model
Abstract:
In the present paper we concentrate on solving sequences of nonsymmetric linear systems with block structure arising from compressible flow problems. We attempt to improve the solution process by sharing part of the computational effort throughout the sequence. This is achieved by applying a cheap updating technique for preconditioners, which we adapted for our applications. Tested on three benchmark compressible flow problems, the strategy speeds up the entire computation, with the acceleration particularly pronounced in phases of unsteady behavior.
Abstract:
Linear graph reduction is a simple computational model in which the cost of naming things is explicitly represented. The key idea is the notion of "linearity". A name is linear if it is only used once, so with linear naming you cannot create more than one outstanding reference to an entity. As a result, linear naming is cheap to support and easy to reason about. Programs can be translated into the linear graph reduction model such that linear names in the program are implemented directly as linear names in the model. Nonlinear names are supported by constructing them out of linear names. The translation thus exposes those places where the program uses names in expensive, nonlinear ways. Two applications demonstrate the utility of using linear graph reduction: First, in the area of distributed computing, linear naming makes it easy to support cheap cross-network references and highly portable data structures. Linear naming also facilitates demand-driven migration of tasks and data around the network without requiring explicit guidance from the programmer. Second, linear graph reduction reveals a new characterization of the phenomenon of state. Systems in which state appears are those which depend on certain global system properties. State is not a localizable phenomenon, which suggests that our usual object-oriented metaphor for state is flawed.
Abstract:
We describe a method for modeling object classes (such as faces) using 2D example images and an algorithm for matching a model to a novel image. The object class models are "learned" from example images that we call prototypes. In addition to the images, the pixelwise correspondences between a reference prototype and each of the other prototypes must also be provided. Thus, a model consists of a linear combination of prototypical shapes and textures. A stochastic gradient descent algorithm is used to match a model to a novel image by minimizing the error between the model and the novel image. Example models are shown, as well as example matches to novel images. The robustness of the matching algorithm is also evaluated. The technique can be used for a number of applications, including the computation of correspondence between novel images of a certain known class, object recognition, image synthesis, and image compression.
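The fitting step in this abstract, expressing a novel image as a linear combination of prototypes and minimizing the model error by gradient descent, can be sketched as follows. This is a minimal illustration with synthetic 1-D "images"; all data and names are invented, and plain rather than stochastic gradient descent is used for brevity:

```python
import numpy as np

# Hypothetical prototype "images" flattened to vectors (synthetic data for the sketch).
rng = np.random.default_rng(0)
prototypes = rng.normal(size=(3, 100))     # 3 prototypes, 100 "pixels" each
true_coeffs = np.array([0.5, -0.2, 0.8])
novel = true_coeffs @ prototypes           # a novel image lying in the prototype span

coeffs = np.zeros(3)
lr = 0.001                                 # step size (illustrative choice)
for _ in range(2000):
    residual = coeffs @ prototypes - novel # current model error against the novel image
    grad = 2.0 * prototypes @ residual     # gradient of the squared error w.r.t. coeffs
    coeffs -= lr * grad

print(np.allclose(coeffs, true_coeffs, atol=1e-3))
```

Since the novel image here is built exactly from the prototypes, gradient descent recovers the mixing coefficients; with a real image the residual would settle at the best linear approximation instead.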
Abstract:
We describe a technique for finding pixelwise correspondences between two images by using models of objects of the same class to guide the search. The object models are "learned" from example images (also called prototypes) of an object class. The models consist of a linear combination of prototypes. The flow fields giving pixelwise correspondences between a base prototype and each of the other prototypes must be given. A novel image of an object of the same class is matched to a model by minimizing an error between the novel image and the current guess for the closest model image. Currently, the algorithm applies to line drawings of objects. An extension to real grey level images is discussed.
Abstract:
Polydimethylsiloxane (PDMS) is the elastomer of choice to create a variety of microfluidic devices by soft lithography techniques (e.g., [1], [2], [3], [4]). Accurate and reliable design, manufacture, and operation of microfluidic devices made from PDMS require a detailed characterization of the deformation and failure behavior of the material. This paper discusses progress in a recently initiated research project towards this goal. We have conducted large-deformation tension and compression experiments on traditional macroscale specimens, as well as microscale tension experiments on thin-film (≈ 50µm thickness) specimens of PDMS with varying ratios of monomer:curing agent (5:1, 10:1, 20:1). We find that the stress-stretch response of these materials shows significant variability, even for nominally identically prepared specimens. A non-linear, large-deformation rubber-elasticity model [5], [6] is applied to represent the behavior of PDMS. The constitutive model has been implemented in a finite-element program [7] to aid the design of microfluidic devices made from this material. As a first attempt towards the goal of estimating the non-linear material parameters for PDMS from indentation experiments, we have conducted micro-indentation experiments using a spherical indenter-tip, and carried out corresponding numerical simulations to verify how well the numerically predicted P (load) versus h (depth of indentation) curves compare with the corresponding experimental measurements. The results are encouraging, and show the possibility of estimating the material parameters for PDMS from relatively simple micro-indentation experiments and corresponding numerical simulations.
Abstract:
Sediment composition is mainly controlled by the nature of the source rock(s), and chemical (weathering) and physical processes (mechanical crushing, abrasion, hydrodynamic sorting) during alteration and transport. Although the factors controlling these processes are conceptually well understood, detailed quantifications of compositional changes induced by a single process are rare, as are examples where the effects of several processes can be distinguished. The present study was designed to characterize the role of mechanical crushing and sorting in the absence of chemical weathering. Twenty sediment samples were taken from Alpine glaciers that erode almost pure granitoid lithologies. For each sample, 11 grain-size fractions from granules to clay (ø grades <-1 to >9) were separated, and each fraction was analysed for its chemical composition. The presence of clear steps in the box-plots of all parts (in adequate ilr and clr scales) against ø is assumed to be explained by typical crystal size ranges for the relevant mineral phases. These scatter plots and the biplot suggest a splitting of the full grain size range into three groups: coarser than ø=4 (comparatively rich in SiO2, Na2O, K2O, Al2O3, and dominated by “felsic” minerals like quartz and feldspar), finer than ø=8 (comparatively rich in TiO2, MnO, MgO, Fe2O3, mostly related to “mafic” sheet silicates like biotite and chlorite), and intermediate grain sizes (4≤ø <8; comparatively rich in P2O5 and CaO, related to apatite, some feldspar). To further test the absence of chemical weathering, the observed compositions were regressed against three explanatory variables: a trend on grain size in ø scale, a step function for ø≥4, and another for ø≥8. The original hypothesis was that the trend could be identified with weathering effects, whereas each step function would highlight those minerals with the biggest characteristic size at its lower end.
Results suggest that this assumption is reasonable for the step functions, but that, besides weathering, other factors (the different mechanical behavior of the minerals) also make an important contribution to the trend. Key words: sediment, geochemistry, grain size, regression, step function
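The regression design described above (a linear trend in ø plus step functions at ø = 4 and ø = 8) can be sketched with ordinary least squares; the grain-size classes and coefficients below are synthetic, invented purely for illustration:

```python
import numpy as np

# Design matrix: intercept, linear trend in phi, and two step functions.
phi = np.arange(-1, 10, dtype=float)          # grain-size classes in phi units
step4 = (phi >= 4).astype(float)              # step indicator for phi >= 4
step8 = (phi >= 8).astype(float)              # step indicator for phi >= 8
X = np.column_stack([np.ones_like(phi), phi, step4, step8])

# Invented "true" coefficients for a single composition part (illustrative only).
beta_true = np.array([2.0, -0.1, 1.5, 0.7])
y = X @ beta_true                             # noise-free response for the sketch

# Ordinary least squares recovers the trend and step contributions.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(beta_hat, beta_true))
```

In the study itself the responses would be the (ilr- or clr-transformed) composition parts of the measured fractions rather than synthetic values.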
Abstract:
1. We studied a reintroduced population of the formerly critically endangered Mauritius kestrel Falco punctatus Temmink from its inception in 1987 until 2002, by which time the population had attained carrying capacity for the study area. Post-1994 the population received minimal management other than the provision of nestboxes. 2. We analysed data collected on survival (1987-2002) using program MARK to explore the influence of density-dependent and independent processes on survival over the course of the population's development. 3. We found evidence for non-linear, threshold density dependence in juvenile survival rates. Juvenile survival was also strongly influenced by climate, with the temporal distribution of rainfall during the cyclone season being the most influential climatic variable. Adult survival remained constant throughout. 4. Our most parsimonious capture-mark-recapture statistical model, which was constrained by density and climate, explained 75.4% of the temporal variation exhibited in juvenile survival rates over the course of the population's development. 5. This study is an example of how data collected as part of a threatened species recovery programme can be used to explore the role and functional form of natural population regulatory processes. With the improvements in conservation management techniques and the resulting success stories, formerly threatened species offer unique opportunities to further our understanding of the fundamental principles of population ecology.
Abstract:
The identification of non-linear systems using only observed finite datasets has become a mature research area over the last two decades. A class of linear-in-the-parameter models with universal approximation capabilities have been intensively studied and widely used due to the availability of many linear-learning algorithms and their inherent convergence conditions. This article presents a systematic overview of basic research on model selection approaches for linear-in-the-parameter models. One of the fundamental problems in non-linear system identification is to find the minimal model with the best model generalisation performance from observational data only. The important concepts in achieving good model generalisation used in various non-linear system-identification algorithms are first reviewed, including Bayesian parameter regularisation and model selection criteria based on cross-validation and experimental design. A significant advance in machine learning has been the development of the support vector machine as a means for identifying kernel models based on the structural risk minimisation principle. Developments in convex-optimisation-based model construction algorithms, including support vector regression algorithms, are outlined. Input selection algorithms and on-line system identification algorithms are also included in this review. Finally, some industrial applications of non-linear models are discussed.
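A minimal sketch of a linear-in-the-parameter model of the kind this review covers: nonlinear (here Gaussian) basis functions with linearly estimated, ridge-regularised weights. The basis centres, width, and regularisation value are illustrative assumptions, not taken from the article:

```python
import numpy as np

# Synthetic non-linear system: a sine with additive noise.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x) + 0.05 * rng.normal(size=x.size)

# Gaussian basis functions: the model is non-linear in x but linear in the weights.
centres = np.linspace(0, 1, 10)               # illustrative centre placement
sigma = 0.1                                   # illustrative basis width
Phi = np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2 * sigma ** 2))

# Ridge-regularised least squares for the weights (a simple stand-in for
# the Bayesian regularisation discussed in the review).
lam = 1e-3
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(centres.size), Phi.T @ y)

y_hat = Phi @ w
rmse = np.sqrt(np.mean((y - y_hat) ** 2))
print(rmse < 0.1)
```

Because the weights enter linearly, estimation reduces to a convex problem with a closed-form solution; model selection then amounts to choosing the basis and the regularisation level, e.g. by cross-validation.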
Abstract:
The structure of turbulence in the ocean surface layer is investigated using a simplified semi-analytical model based on rapid-distortion theory. In this model, which is linear with respect to the turbulence, the flow comprises a mean Eulerian shear current, the Stokes drift of an irrotational surface wave, which accounts for the irreversible effect of the waves on the turbulence, and the turbulence itself, whose time evolution is calculated. By analysing the equations of motion used in the model, which are linearised versions of the Craik–Leibovich equations containing a ‘vortex force’, it is found that a flow including mean shear and a Stokes drift is formally equivalent to a flow including mean shear and rotation. In particular, Craik and Leibovich’s condition for the linear instability of the first kind of flow is equivalent to Bradshaw’s condition for the linear instability of the second. However, the present study goes beyond linear stability analyses by considering flow disturbances of finite amplitude, which allows turbulence statistics to be calculated and cases where the linear stability is neutral to be addressed. Results from the model show that the turbulence displays a structure with a continuous variation of the anisotropy and elongation, ranging from streaky structures, for distortion by shear only, to streamwise vortices resembling Langmuir circulations, for distortion by Stokes drift only. The turbulent kinetic energy (TKE) grows faster for distortion by a shear and a Stokes drift gradient with the same sign (a situation relevant to wind waves), but the turbulence is more isotropic in that case (which is linearly unstable to Langmuir circulations).
Abstract:
An analytical model of orographic gravity wave drag due to sheared flow past elliptical mountains is developed. The model extends the domain of applicability of the well-known Phillips model to wind profiles that vary relatively slowly in the vertical, so that they may be treated using a WKB approximation. The model illustrates how linear processes associated with wind profile shear and curvature affect the drag force exerted by the airflow on mountains, and how it is crucial to extend the WKB approximation to second order in the small perturbation parameter for these effects to be taken into account. For the simplest wind profiles, the normalized drag depends only on the Richardson number, Ri, of the flow at the surface and on the aspect ratio, γ, of the mountain. For a linear wind profile, the drag decreases as Ri decreases, and this variation is faster when the wind is across the mountain than when it is along the mountain. For a wind that rotates with height maintaining its magnitude, the drag generally increases as Ri decreases, by an amount depending on γ and on the incidence angle. The results from WKB theory are compared with exact linear results and also with results from a non-hydrostatic nonlinear numerical model, showing in general encouraging agreement, down to values of Ri of order one.
Abstract:
Existing numerical characterizations of the optimal income tax have been based on a limited number of model specifications. As a result, they do not reveal which properties are general. We determine the optimal tax in the quasi-linear model under weaker assumptions than have previously been used; in particular, we remove the assumption of a lower bound on the utility of zero consumption and the need to permit negative labor incomes. A Monte Carlo analysis is then conducted in which economies are selected at random and the optimal tax function constructed. The results show that in a significant proportion of economies the marginal tax rate rises at low skills and falls at high. The average tax rate is equally likely to rise or fall with skill at low skill levels, rises in the majority of cases in the centre of the skill range, and falls at high skills. These results are consistent across all the specifications we test. We then extend the analysis to show that these results also hold for Cobb-Douglas utility.