45 results for Unconstrained minimization
Abstract:
Handling appearance variations is a very challenging problem for visual tracking. Existing methods usually solve this problem by relying on an effective appearance model with two properties: (1) the ability to discriminate the tracked target from its background, and (2) robustness to the target's appearance variations during tracking. Instead of integrating the two requirements into a single appearance model, in this paper we propose a tracking method that deals with them separately, based on sparse representation in a particle filter framework. Each target candidate defined by a particle is linearly represented by the target and background templates with an additive representation error. Discriminating the target from its background is achieved by activating the target templates or the background templates in the linear system in a competitive manner. The target's appearance variations are modeled directly as the representation error. An online algorithm is used to learn the basis functions that sparsely span the representation error. The linear system is solved via ℓ1 minimization. The candidate with the smallest reconstruction error using the target templates is selected as the tracking result. We test the proposed approach on four sequences with heavy occlusions, large pose variations, drastic illumination changes and low foreground-background contrast. The proposed approach shows excellent performance in comparison with two recent state-of-the-art trackers.
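As an illustration only (not the paper's exact formulation), the following Python sketch scores a candidate by solving an ℓ1-regularized linear system over stacked target and background templates and measuring the reconstruction error with the target templates alone. The template sizes, the plain ISTA solver and the regularization weight lam are assumptions, and the online learning of the error basis is omitted.

import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def l1_coefficients(D, y, lam=0.01, n_iter=300):
    # Solve min_c 0.5*||y - D c||_2^2 + lam*||c||_1 with plain ISTA
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth part
    c = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ c - y)
        c = soft_threshold(c - grad / L, lam / L)
    return c

def score_candidate(y, T, B, lam=0.01):
    # Reconstruction error of candidate y using only the target templates T
    D = np.hstack([T, B])                  # target and background templates compete
    c = l1_coefficients(D, y, lam)
    return np.linalg.norm(y - T @ c[:T.shape[1]])

# hypothetical usage: the particle with the smallest target-only error wins
rng = np.random.default_rng(0)
T = rng.normal(size=(64, 10))              # target templates as columns
B = rng.normal(size=(64, 20))              # background templates as columns
candidates = rng.normal(size=(5, 64))      # feature vectors of 5 particles
best = min(range(len(candidates)), key=lambda i: score_candidate(candidates[i], T, B))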
Abstract:
Chemoenzymatic dynamic kinetic resolution (DKR) of rac-1-phenylethanol into (R)-1-phenylethyl acetate was investigated with emphasis on the minimization of side reactions. The organometallic hydrogen transfer (racemization) catalyst was varied, and this was observed to alter the rate and extent of oxidation of the alcohol to ketone side products. The performance of the highly active catalyst [(pentamethylcyclopentadienyl)IrCl2(1-benzyl-3-methyl-imidazol-2-ylidene)] was found to depend on the batch of lipase B used. The interaction between the bio- and chemo-catalysts was reduced by physically entrapping the enzyme in silica using a sol-gel process. The nature of the gelation method was found to be important, with an alkaline method preferred, as an acidic method was found to initiate a further side reaction, the acid-catalyzed dehydration of the secondary alcohol. The acidic gel was found to be a heterogeneous solid acid.
Abstract:
This paper introduces the discrete choice modelling paradigm of Random Regret Minimisation (RRM) to the field of health economics. RRM is a regret-based model that explores a driver of choice different from the traditional utility-based Random Utility Maximisation (RUM). The RRM approach is based on the idea that, when choosing, individuals aim to minimise their regret, where regret is defined as what one experiences when a non-chosen alternative in a choice set performs better than the chosen one in relation to one or more attributes. Analysing data from a discrete choice experiment on diet, physical activity and risk of a fatal heart attack in the next ten years, administered to a sample of the Northern Ireland population, we find that the combined use of RUM and RRM models offers additional information, providing useful behavioural insights for better-informed policy appraisal.
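For readers unfamiliar with RRM, a minimal Python sketch of the standard attribute-level regret specification (Chorus-style) and the resulting logit-type choice probabilities follows; the attribute matrix X, the weights beta and this specific functional form are illustrative assumptions, not the model estimated in the paper.

import numpy as np

def regret(X, beta):
    # Attribute-level regret of each alternative against every non-chosen one
    n = X.shape[0]
    R = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if j != i:
                R[i] += np.sum(np.log1p(np.exp(beta * (X[j] - X[i]))))
    return R

def choice_probabilities(X, beta):
    # Logit-type probabilities on negative regret (RRM counterpart of the RUM logit)
    R = regret(X, beta)
    e = np.exp(-(R - R.min()))             # shift for numerical stability
    return e / e.sum()

# illustrative example: 3 lifestyle alternatives described by 2 attributes
X = np.array([[1.0, 0.2],
              [0.5, 0.8],
              [0.0, 1.0]])
beta = np.array([0.7, -0.3])               # assumed attribute weights
print(choice_probabilities(X, beta))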
Abstract:
Novice and expert jugglers employ different visuomotor strategies: whereas novices look at the balls around their zeniths, experts tend to fixate their gaze at a central location within the pattern (the so-called gaze-through strategy). A gaze-through strategy may reflect visuomotor parsimony, i.e., the use of simpler visuomotor (oculomotor and/or attentional) strategies as afforded by superior tossing accuracy and error corrections. In addition, the more stable gaze during a gaze-through strategy may result in more accurate movement planning by providing a stable base for gaze-centered neural coding of ball motion and movement plans, or for shifts in attention. To determine whether a stable gaze might indeed have such beneficial effects on juggling, we examined juggling variability during 3-ball cascade juggling with and without constrained gaze fixation (at various depths) in expert performers (n = 5). Novice jugglers (n = 5) were included for comparison, even though our predictions pertained specifically to expert juggling. We indeed observed that experts, but not novices, juggled with significantly less variability when fixating, compared to unconstrained viewing. Thus, while visuomotor parsimony might still contribute to the emergence of a gaze-through strategy, this study highlights an additional role for improved movement planning. This role may be engendered by gaze-centered coding and/or attentional control mechanisms in the brain.
The size and shape of shells used by hermit crabs: A multivariate analysis of Clibanarius erythropus
Abstract:
Shell attributes such as weight and shape affect the reproduction, growth, predator avoidance and behaviour of several hermit crab species. Although the importance of these attributes has been extensively investigated, it is still difficult to assess the relative roles of size and shape. Multivariate techniques allow concise and efficient quantitative analysis of these multidimensional properties, and this paper aims to understand their role in determining patterns of hermit crab shell use. To this end, a multivariate approach based on a combination of size-unconstrained (shape) PCA and RDA ordination was used to model the biometrics of southern Mediterranean Clibanarius erythropus populations and their shells. Patterns of shell utilization and morphological gradients demonstrate that size is more important than shape, probably due to the limited availability of empty shells in the environment. The shape (e.g. the degree of shell elongation) and weight of inhabited shells vary considerably in both female and male crabs. However, these variations are clearly accounted for by crab biometrics in males only. On the basis of statistical evidence and findings from past studies, it is hypothesized that larger males of adequate size and strength have access to the larger, heavier and relatively more available shells of the globose Osilinus turbinatus, which cannot be used by average-sized males or by females investing energy in egg production. This greater availability allows larger males to select more suitable shapes. © 2009 Elsevier Masson SAS. All rights reserved.
Abstract:
Drilling is a major process in the manufacture of holes required for the assembly of composite laminates in the aerospace industry. Simulation of the drilling process is an effective way to optimize drill geometry and process parameters in order to improve hole quality and reduce drill wear. In this research we have developed a three-dimensional (3D) FE model for drilling CFRP. A 3D progressive intra-laminar failure model based on Hashin's theory is considered. An inter-laminar delamination model, which captures the onset and growth of delamination using a cohesive contact zone, is also developed. The developed model, with the improved delamination model and the real drill geometry included, is used to compare step drills of different stage ratios with a twist drill. Thrust force, torque and workpiece stress distributions are estimated to decrease when a step drill with a high stage ratio is used. The model indicates that delamination and other workpiece defects could be controlled by selecting a suitable step drill geometry. Hence the 3D model could be used as a design tool for drill geometry to minimize delamination in CFRP drilling. © 2013 Elsevier Ltd.
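As a hedged illustration of the kind of intra-laminar check involved, the sketch below evaluates simplified plane-stress Hashin-type failure indices; the strength parameters XT, XC, YT, S12, the stress values and the reduced set of failure modes are assumptions and do not reproduce the paper's full 3D progressive damage or delamination model.

def hashin_plane_stress(s11, s22, t12, XT, XC, YT, S12):
    # Simplified plane-stress Hashin-type indices; a value >= 1 flags failure onset.
    # Only fibre tension/compression and matrix tension modes are sketched here.
    if s11 >= 0.0:
        fibre = (s11 / XT) ** 2 + (t12 / S12) ** 2   # fibre tension
    else:
        fibre = (s11 / XC) ** 2                      # fibre compression
    matrix_tension = (s22 / YT) ** 2 + (t12 / S12) ** 2 if s22 >= 0.0 else None
    return {"fibre": fibre, "matrix_tension": matrix_tension}

# made-up ply stresses and strengths (MPa) for illustration only
print(hashin_plane_stress(s11=1200.0, s22=30.0, t12=40.0,
                          XT=2000.0, XC=1200.0, YT=60.0, S12=90.0))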
Abstract:
We present an implementation of quantum annealing (QA) via lattice Green's function Monte Carlo (GFMC), focusing on its application to the Ising spin glass in a transverse field. In particular, we study whether or not such a method is more effective than path-integral Monte Carlo (PIMC) based QA, as well as classical simulated annealing (CA), previously tested on the same optimization problem. We identify the issue of importance sampling, i.e., the necessity of possessing reasonably good (variational) trial wave functions, as the key point of the algorithm. We performed GFMC-QA runs using a Boltzmann-type trial wave function, finding residual energies that are qualitatively similar to those of CA (but at a much larger computational cost) and definitely worse than those of PIMC-QA. We conclude that, at present, without a serious effort in constructing reliable importance-sampling variational wave functions for a quantum glass, GFMC-QA is not a true competitor of PIMC-QA.
Abstract:
We present results for a variety of Monte Carlo annealing approaches, both classical and quantum, benchmarked against one another for the textbook optimization exercise of a simple one-dimensional double well. In classical (thermal) annealing, the dependence upon the move chosen in a Metropolis scheme is studied and correlated with the spectrum of the associated Markov transition matrix. In quantum annealing, the path integral Monte Carlo approach is found to yield nontrivial sampling difficulties associated with the tunneling between the two wells. The choice of fictitious quantum kinetic energy is also addressed. We find that a "relativistic" kinetic energy form, leading to a higher probability of long real-space jumps, can be considerably more effective than the standard nonrelativistic one.
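A minimal classical-only sketch of this kind of benchmark, assuming a particular double-well shape, Metropolis box moves and a geometric cooling schedule (none of which are taken from the paper), is given below; the quantum (PIMC) variants are not shown.

import numpy as np

def double_well(x):
    # Simple asymmetric double well with minima near x = +/- sqrt(2)
    return (x**2 - 2.0) ** 2 + 0.1 * x

def simulated_annealing(n_steps=20000, T0=2.0, T_end=1e-3, step=0.5, seed=0):
    # Metropolis annealing with a uniform box move; the move size controls how
    # easily the walker hops over the barrier between the two wells
    rng = np.random.default_rng(seed)
    x = rng.uniform(-3.0, 3.0)
    for k in range(n_steps):
        T = T0 * (T_end / T0) ** (k / (n_steps - 1))   # geometric cooling schedule
        x_new = x + rng.uniform(-step, step)
        dE = double_well(x_new) - double_well(x)
        if dE < 0.0 or rng.random() < np.exp(-dE / T):
            x = x_new
    return x, double_well(x)

print(simulated_annealing())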
Abstract:
In this paper, I critically assess John Rawls' repeated claim that the duty of civility is only a moral duty and should not be enforced by law. In the first part of the paper, I examine and reject the view that Rawls' position may be due to the practical difficulties that the legal enforcement of the duty of civility might entail. I thus claim that Rawls' position must be driven by deeper normative reasons grounded in a conception of free speech. In the second part of the paper, I therefore examine various arguments for free speech and critically assess whether they are consistent with Rawls' political liberalism. I first focus on the arguments from truth and self-fulfilment. Both arguments, I argue, rely on comprehensive doctrines and therefore cannot provide a freestanding political justification for free speech. Freedom of speech, I claim, can be justified instead on the basis of Rawls' political conception of the person and of the two moral powers. However, Rawls' wide view of public reason already allows scope for the kind of free speech necessary for the exercise of the two moral powers and therefore cannot explain Rawls' opposition to the legal enforcement of the duty of civility. Such opposition, I claim, can only be explained on the basis of a defence of unconstrained freedom of speech grounded in the ideas of democracy and political legitimacy. Yet, I conclude, while public reason and the duty of civility are essential to political liberalism, unconstrained freedom of speech is not. Rawls and political liberals could therefore renounce unconstrained freedom of speech, and endorse the legal enforcement of the duty of civility, while remaining faithful to political liberalism.
Abstract:
Many graph datasets are labelled with discrete and numeric attributes. Most frequent substructure discovery algorithms ignore numeric attributes; in this paper we show how they can be used to improve search performance and discrimination. Our thesis is that the most descriptive substructures are those which are normative both in terms of their structure and in terms of their numeric values. We explore the relationship between graph structure and the distribution of attribute values and propose an outlier-detection step, which is used as a constraint during substructure discovery. By pruning anomalous vertices and edges, more weight is given to the most descriptive substructures. Our method is applicable to multi-dimensional numeric attributes; we outline how it can be extended for high-dimensional data. We support our findings with experiments on transaction graphs and single large graphs from the domains of physical building security and digital forensics, measuring the effect on runtime, memory requirements and coverage of discovered patterns, relative to the unconstrained approach.
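A minimal sketch of the idea of pruning anomalous vertices before substructure discovery, assuming simple per-attribute z-scores as the outlier detector (the paper's actual detector and thresholds may differ), might look like this:

import numpy as np

def prune_numeric_outliers(vertex_attrs, z_max=3.0):
    # Keep only vertices whose numeric attribute vectors look normative, so that
    # anomalous vertices do not enter frequent-substructure discovery.
    ids = list(vertex_attrs)
    X = np.vstack([vertex_attrs[v] for v in ids])
    mu, sigma = X.mean(axis=0), X.std(axis=0) + 1e-12
    z = np.abs((X - mu) / sigma)           # per-attribute z-scores
    return {v for v, row in zip(ids, z) if row.max() <= z_max}

# hypothetical usage: vertex 3 carries an anomalous attribute value and is pruned
# (the low z_max reflects the tiny toy sample, not a recommended setting)
attrs = {1: np.array([10.0, 0.20]),
         2: np.array([11.0, 0.20]),
         3: np.array([95.0, 0.20])}
print(prune_numeric_outliers(attrs, z_max=1.2))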
Abstract:
Due to increasing water scarcity and accelerating industrialization and urbanization, the efficiency of irrigation water use in Northern China needs urgent improvement. Based on a sample of 347 wheat growers in the Guanzhong Plain, this paper simultaneously estimates a production function and its corresponding first-order conditions for cost minimization to analyze the efficiency of irrigation water use. The main findings are that average technical, allocative, and overall economic efficiency are 0.35, 0.86 and 0.80, respectively. In a second-stage analysis, we find that farmers' perception of water scarcity, the water price and irrigation infrastructure increase irrigation water allocative efficiency, while land fragmentation decreases it. We also show that farmers' income loss due to higher water prices can be offset by increasing irrigation water use efficiency.
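To make the first-order condition concrete, a toy Python sketch under an assumed Cobb-Douglas technology (not the production function estimated in the paper) compares the cost-minimizing input ratio implied by MP1/MP2 = w1/w2 with an observed ratio; all numbers are made up for illustration.

def cost_min_input_ratio(a1, a2, w1, w2):
    # Cobb-Douglas first-order condition for cost minimisation:
    # MP1 / MP2 = w1 / w2  =>  x1 / x2 = (a1 / a2) * (w2 / w1)
    return (a1 / a2) * (w2 / w1)

# made-up output elasticities and input prices for water (x1) and a second input (x2)
a1, a2 = 0.2, 0.4
w1, w2 = 0.5, 2.0
optimal_ratio = cost_min_input_ratio(a1, a2, w1, w2)   # = 2.0
observed_ratio = 4.0                                   # grower uses twice the optimal water ratio
print(optimal_ratio, observed_ratio / optimal_ratio)   # the gap signals allocative inefficiency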
Abstract:
Sparse representation based visual tracking approaches have attracted increasing interest in the community in recent years. The main idea is to linearly represent each target candidate using a set of target and trivial templates while imposing a sparsity constraint on the representation coefficients. After the coefficients are obtained using L1-norm minimization methods, the candidate with the lowest error, when reconstructed using only the target templates and the associated coefficients, is considered the tracking result. In spite of the promising performance widely reported, it is unclear whether the performance of these trackers can be maximised. In addition, the computational complexity caused by the dimensionality of the feature space limits these algorithms in real-time applications. In this paper, we propose a real-time visual tracking method based on structurally random projection and weighted least squares techniques. In particular, to enhance the discriminative capability of the tracker, we introduce background templates to the linear representation framework. To handle appearance variations over time, we relax the sparsity constraint and use a weighted least squares (WLS) method to obtain the representation coefficients. To further reduce the computational complexity, structurally random projection is used to reduce the dimensionality of the feature space while preserving the pairwise distances between data points. Experimental results show that the proposed approach outperforms several state-of-the-art tracking methods.
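A rough Python sketch of the two ingredients, assuming a dense Gaussian projection as a stand-in for the structurally random projection and uniform weights in the WLS step (both assumptions, as are the dimensions), is:

import numpy as np

def random_projection(dim_out, dim_in, seed=0):
    # Dense Gaussian projection (stand-in for a structurally random projection)
    # that approximately preserves pairwise distances between feature vectors
    rng = np.random.default_rng(seed)
    return rng.normal(size=(dim_out, dim_in)) / np.sqrt(dim_out)

def wls_coefficients(D, y, w, ridge=1e-6):
    # Weighted least squares: minimise sum_i w_i * (y_i - (D c)_i)^2 in closed form
    W = np.diag(w)
    return np.linalg.solve(D.T @ W @ D + ridge * np.eye(D.shape[1]), D.T @ W @ y)

# hypothetical usage: project templates and candidate, then score with target-only error
rng = np.random.default_rng(1)
T = rng.normal(size=(1024, 10))            # target templates
B = rng.normal(size=(1024, 20))            # background templates
y = rng.normal(size=1024)                  # candidate feature vector
P = random_projection(32, 1024)            # reduce 1024-D features to 32-D
c = wls_coefficients(P @ np.hstack([T, B]), P @ y, w=np.ones(32))
err = np.linalg.norm(P @ y - (P @ T) @ c[:10])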
Abstract:
Background: High risk medications are commonly prescribed to older US patients. Currently, less is known about high risk medication prescribing in other Western countries, including the UK. We measured trends and correlates of high risk medication prescribing in a subset of the older UK population (community/institutionalized) to inform harm minimization efforts. Methods: Three cross-sectional samples from primary care electronic clinical records (UK Clinical Practice Research Datalink, CPRD) in fiscal years 2003/04, 2007/08 and 2011/12 were taken. This yielded a sample of 13,900 people aged 65 years or over from 504 UK general practices. High risk medications were defined by the 2012 Beers Criteria adapted for the UK. Using descriptive statistical methods and regression modelling, the prevalence and correlates of 'any' (drugs prescribed at least once per year) and 'long-term' (drugs prescribed in all quarters of the year) high risk medication prescribing were determined. Results: While polypharmacy rates have risen sharply, high risk medication prevalence has remained stable across a decade. A third of older (65+) people are exposed to high risk medications, but only half of the total prevalence was long-term (any = 38.4 % [95 % CI: 36.3, 40.5]; long-term = 17.4 % [15.9, 19.9] in 2011/12). Long-term, but not any, high risk medication exposure was associated with older age (85 years or over). Women and people with a higher polypharmacy burden were at greater risk of exposure; lower socio-economic status was not associated. Ten drugs/drug classes accounted for most high risk medication prescribing in 2011/12. Conclusions: High risk medication prescribing has not increased over time against a background of increasing polypharmacy in the UK. Half of patients receiving high risk medications do so for less than a year. Reducing or optimising the use of a limited number of drugs could dramatically reduce high risk medication prescribing in older people. Further research is needed to investigate why the oldest old and women are at greater risk. Interventions to reduce high risk medications may need to target short-term and long-term use separately.
Abstract:
This paper presents data from the English Channel area of Britain and Northern France on the spatial distribution of Lower to early Middle Palaeolithic pre-MIS5 interglacial sites which are used to test the contention that the pattern of the richest sites is a real archaeological distribution and not of taphonomic origin. These sites show a marked concentration in the middle-lower reaches of river valleys with most being upstream of, but close to, estimated interglacial tidal limits. A plant and animal database derived from Middle-Late Pleistocene sites in the region is used to estimate the potentially edible foods and their distribution in the typically undulating landscape of the region. This is then converted into the potential availability of macronutrients (proteins, carbohydrates, fats) and selected micronutrients. The floodplain is shown to be the optimum location in the nutritional landscape (nutriscape). In addition to both absolute and seasonal macronutrient advantages the floodplains could have provided foods rich in key micronutrients, which are linked to better health, the maintenance of fertility and minimization of infant mortality. Such places may have been seen as ‘good (or healthy) places’ explaining the high number of artefacts accumulated by repeated visitation over long periods of time and possible occupation. The distribution of these sites reflects the richest aquatic and wetland successional habitats along valley floors. Such locations would have provided foods rich in a wide range of nutrients, importantly including those in short supply at these latitudes. When combined with other benefits, the high nutrient diversity made these locations the optimal niche in northwest European mixed temperate woodland environments. It is argued here that the use of these nutritionally advantageous locations as nodal or central points facilitated a healthy variant of the Palaeolithic diet which permitted habitation at the edge of these hominins’ range.
Abstract:
Energy consumption is an important concern in modern multicore processors. The energy consumed by a multicore processor during the execution of an application can be minimized by tuning the hardware state using knobs such as frequency and voltage. The existing theoretical work on energy minimization using global DVFS (Dynamic Voltage and Frequency Scaling), despite being thorough, ignores the time and energy consumed by the CPU on memory accesses and the dynamic energy consumed by idle cores. This article presents an analytical energy-performance model for parallel workloads that accounts for the time and energy consumed by the CPU chip on memory accesses, in addition to the time and energy consumed on CPU instructions, as well as for the dynamic energy consumed by idle cores. The existing work on global DVFS for parallel workloads shows that using a single frequency for the entire duration of a parallel application is not energy optimal and that varying the frequency according to changes in the parallelism of the workload can save energy. We present an analytical framework around our energy-performance model to predict the operating frequencies (which depend on the amount of parallelism) for global DVFS that minimize the overall CPU energy consumption. We show how the optimal frequencies in our model differ from those in a model that does not account for memory accesses, and how the memory intensity of an application affects the optimal frequencies.
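As a hedged illustration of the kind of trade-off such a model captures, the toy Python sketch below minimizes a simple energy expression in which the memory-bound part of the runtime does not scale with frequency; the constants c_cpu, t_mem, k and p_static are made-up values, not the paper's model parameters, and the functional form is only an assumed approximation.

import numpy as np

def energy(f, c_cpu=1e9, t_mem=0.2, k=1e-27, p_static=0.5):
    # Dynamic energy grows ~ f^2 per executed cycle (P_dyn ~ f^3 over time c_cpu / f),
    # while memory stalls add a runtime component that is independent of frequency.
    runtime = c_cpu / f + t_mem
    dynamic = k * f**2 * c_cpu
    return dynamic + p_static * runtime

freqs = np.linspace(0.5e9, 3.0e9, 200)     # candidate operating frequencies (Hz)
best = freqs[np.argmin([energy(f) for f in freqs])]
print(f"illustrative energy-optimal frequency: {best / 1e9:.2f} GHz")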