930 results for Linear optimization approach
Abstract:
This paper addresses the problem of obtaining detailed 3D reconstructions of human faces in real time and with inexpensive hardware. We present an algorithm based on a monocular multi-spectral photometric-stereo setup. This system is known to capture highly detailed deforming 3D surfaces at high frame rates without any expensive hardware or synchronized light stage. However, the main challenge of such a setup is the calibration stage, which depends on the light setup and on how the lights interact with the specific material being captured, in this case human faces. For this purpose we develop a self-calibration technique in which the person being captured is asked to perform a rigid motion in front of the camera while maintaining a neutral expression. Rigidity constraints are then used to compute the head's motion with a structure-from-motion algorithm. Once the motion is obtained, a multi-view stereo algorithm reconstructs a coarse 3D model of the face. This coarse model is then used to estimate the lighting parameters with a stratified approach: in the first step we use a RANSAC search to identify purely diffuse points on the face and simultaneously estimate a diffuse reflectance model. In the second step we apply non-linear optimization to fit a non-Lambertian reflectance model to the outliers of the previous step. The calibration procedure is validated with synthetic and real data.
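The first, purely diffuse stage can be pictured as a RANSAC fit of a single linear Lambertian model relating coarse-mesh normals to observed colors, with the inliers taken as the diffuse points and the outliers passed on to the non-Lambertian fit. The sketch below only illustrates that idea; the 3x3 mixing matrix, sample size and residual threshold are assumptions, not the paper's implementation.

```python
import numpy as np

def ransac_diffuse_calibration(normals, colors, iters=2000, tol=0.05, seed=None):
    """Illustrative stage-one calibration: RANSAC-fit a linear (Lambertian) model
    colors ~ normals @ M.T relating coarse-mesh normals to observed RGB samples,
    flagging purely diffuse points as inliers. Sample size, threshold and the
    single 3x3 mixing matrix are assumptions made for this sketch."""
    rng = np.random.default_rng(seed)
    n_pts = normals.shape[0]
    best_inliers = np.zeros(n_pts, dtype=bool)
    Mt = np.zeros((3, 3))
    for _ in range(iters):
        idx = rng.choice(n_pts, size=4, replace=False)       # small random sample
        Mt, *_ = np.linalg.lstsq(normals[idx], colors[idx], rcond=None)
        inliers = np.linalg.norm(normals @ Mt - colors, axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    if best_inliers.any():                                   # refine on all diffuse points
        Mt, *_ = np.linalg.lstsq(normals[best_inliers], colors[best_inliers], rcond=None)
    return Mt.T, best_inliers                                # outliers feed the non-Lambertian fit
```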
Abstract:
Virtual machines (VMs) are powerful platforms for building agile datacenters and emerging cloud systems. However, resource management for a VM-based system is still a challenging task. First, the complexity of application workloads, as well as the interference among competing workloads, makes it difficult to understand the VMs' resource demands for meeting their Quality of Service (QoS) targets. Second, the dynamics of the applications and the system make it difficult to maintain the desired QoS target as the environment changes. Third, the transparency of virtualization presents a hurdle for the guest-layer application and the host-layer VM scheduler to cooperate in improving application QoS and system efficiency. This dissertation proposes to address the above challenges through fuzzy modeling and control-theory-based VM resource management. First, a fuzzy-logic-based nonlinear modeling approach is proposed to accurately capture a VM's complex demands for multiple types of resources automatically and online, based on the observed workload and resource usage. Second, to enable fast adaptation in resource management, the fuzzy modeling approach is integrated with a predictive-control-based controller to form a new Fuzzy Modeling Predictive Control (FMPC) approach, which can quickly track the applications' QoS targets and optimize the resource allocations under dynamic changes in the system. Finally, to address the limitations of black-box resource management solutions, a cross-layer optimization approach is proposed to enable cooperation between a VM's host and guest layers and further improve application QoS and resource usage efficiency. The proposed approaches are prototyped on a Xen-based virtualized system and evaluated with representative benchmarks including TPC-H, RUBiS, and TerraFly. The results demonstrate that the fuzzy-modeling-based approach improves the accuracy of resource prediction by up to 31.4% compared to conventional regression approaches. The FMPC approach substantially outperforms the traditional linear-model-based predictive control approach in meeting application QoS targets for an oversubscribed system, and it is able to manage dynamic VM resource allocations and migrations for over 100 concurrent VMs across multiple hosts with good efficiency. Finally, the cross-layer optimization approach further improves the performance of a virtualized application by up to 40% when resources are contended by dynamic workloads.
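As a rough illustration of how a fuzzy model can be paired with receding-horizon control, the sketch below predicts a QoS metric with a zero-order Takagi-Sugeno model and greedily evaluates candidate allocation moves one step ahead. The rule parameters, candidate moves and cost weights are invented for the example; this is not the dissertation's FMPC controller.

```python
import numpy as np

def fuzzy_predict(alloc, centers, widths, consequents):
    """Zero-order Takagi-Sugeno model: Gaussian memberships of the candidate
    allocation to each rule center, then a weighted average of rule outputs
    (here interpreted as a predicted response time)."""
    w = np.exp(-np.sum(((alloc - centers) / widths) ** 2, axis=1))
    return float(w @ consequents / (w.sum() + 1e-12))

def fmpc_step(current_alloc, qos_target, centers, widths, consequents,
              step=0.05, move_penalty=0.1):
    """One receding-horizon step: try small CPU/memory allocation moves and keep
    the one minimizing predicted QoS error plus a reallocation penalty."""
    moves = np.array([[0, 0], [step, 0], [-step, 0], [0, step], [0, -step]])
    candidates = [np.clip(current_alloc + d, 0.1, 1.0) for d in moves]
    cost = lambda a: (fuzzy_predict(a, centers, widths, consequents) - qos_target) ** 2 \
                     + move_penalty * np.linalg.norm(a - current_alloc)
    return min(candidates, key=cost)
```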
Abstract:
We develop a framework for proving approximation limits of polynomial-size linear programs (LPs) from lower bounds on the nonnegative ranks of suitably defined matrices. This framework yields unconditional impossibility results that are applicable to any LP, as opposed to only programs generated by hierarchies. Using our framework, we prove that O(n^{1/2-ε})-approximations for CLIQUE require LPs of size 2^{n^{Ω(ε)}}. This lower bound applies to LPs using a certain encoding of CLIQUE as a linear optimization problem. Moreover, we establish a similar result for approximations of semidefinite programs by LPs. Our main technical ingredient is a quantitative improvement of Razborov's [38] rectangle corruption lemma for the high-error regime, which gives strong lower bounds on the nonnegative rank of shifts of the unique disjointness matrix.
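For reference, the central quantity is the nonnegative rank of a matrix; its standard definition (textbook form, not quoted from the paper) is:

```latex
% Nonnegative rank of a nonnegative matrix M: the smallest number of
% nonnegative rank-one terms needed to write it exactly.
\[
  \operatorname{rank}_{+}(M)
  \;=\;
  \min\Bigl\{\, r \;:\; M=\textstyle\sum_{i=1}^{r} u_i v_i^{\mathsf T},\;
  u_i \in \mathbb{R}^{m}_{\ge 0},\; v_i \in \mathbb{R}^{n}_{\ge 0} \Bigr\}.
\]
```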
Abstract:
Over the last decade, the success of social networks has significantly reshaped how people consume information. Recommendation of content based on user profiles is well received. However, as users become dominantly mobile, little is done to consider the impact of the wireless environment, especially the capacity constraints and the changing channel. In this dissertation, we investigate a centralized wireless content delivery system, aiming to optimize overall user experience given the capacity constraints of the wireless networks, by deciding what contents to deliver, when, and how. We propose a scheduling framework that incorporates content-based reward and deliverability. Our approach exploits the broadcast nature of wireless communication and the social nature of content through multicasting and precaching. Results indicate that this novel joint optimization approach outperforms existing layered systems that separate recommendation and delivery, especially when the wireless network is operating at maximum capacity. By utilizing a limited number of transmission modes, we significantly reduce the complexity of the optimization. We also introduce the design of a hybrid system to handle transmissions for both system-recommended contents ('push') and active user requests ('pull'). Further, we extend the joint optimization framework to a wireless infrastructure with multiple base stations. The problem becomes much harder in that there are many more system configurations, including but not limited to power allocation and how resources are shared among the base stations ('out-of-band', in which base stations transmit with dedicated spectrum resources and thus without interference; and 'in-band', in which they share the spectrum and need to mitigate interference). We propose a scalable two-phase scheduling framework: 1) each base station obtains delivery decisions and resource allocation individually; 2) the system consolidates the decisions and allocations, reducing redundant transmissions. Additionally, if the social network applications can provide predictions of how social contents disseminate, the wireless networks can schedule the transmissions accordingly and significantly improve dissemination performance by reducing delivery delay. We propose a novel method utilizing: 1) hybrid systems to handle active dissemination requests; and 2) predictions of dissemination dynamics from the social network applications. This method can mitigate the performance degradation of content dissemination caused by wireless delivery delay. Results indicate that our proposed system design is both efficient and easy to implement.
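The single-cell "what to deliver and how" decision has the flavor of a budgeted reward maximization. A minimal greedy sketch under that reading is shown below; the field names, the per-content aggregate reward and the reward-per-airtime rule are assumptions, not the dissertation's scheduler.

```python
def schedule_multicast(contents, capacity):
    """Greedy sketch of the delivery decision: each content carries an aggregate
    reward (summed over the interested users, since one multicast serves them all)
    and an airtime cost; pick contents by reward per unit airtime until the slot's
    wireless capacity is used up."""
    chosen, used = [], 0.0
    for c in sorted(contents, key=lambda c: c["reward"] / c["airtime"], reverse=True):
        if used + c["airtime"] <= capacity:
            chosen.append(c["id"])
            used += c["airtime"]
    return chosen

# usage: three candidate contents competing for 1.0 unit of downlink airtime
print(schedule_multicast(
    [{"id": "a", "reward": 9.0, "airtime": 0.5},
     {"id": "b", "reward": 4.0, "airtime": 0.2},
     {"id": "c", "reward": 6.0, "airtime": 0.4}], capacity=1.0))
```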
Abstract:
Master's dissertation in Systems and Computer Engineering - Control Systems area, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2001
Abstract:
This paper presents a stochastic mixed-integer linear programming approach for solving the self-scheduling problem of a price-taker thermal and wind power producer taking part in a pool-based electricity market. Uncertainty in electricity price and wind power is considered through a set of scenarios. Thermal units are modeled by variable costs, start-up costs and technical operating constraints, such as ramp up/down limits and minimum up/down time limits. An efficient mixed-integer linear program is presented to develop the offering strategies for the coordinated production of thermal and wind energy generation, aiming to maximize the expected profit. A case study with data from the Iberian Electricity Market is presented and results are discussed to show the effectiveness of the proposed approach.
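A generic scenario-based form of the expected-profit objective, with the usual symbols assumed here (scenario probability, price, thermal and wind outputs, variable and start-up costs) rather than the paper's exact formulation, reads:

```latex
% pi_s: scenario probability, lambda_{st}: price, p^T/p^W: thermal and wind output,
% C(.): variable cost, c^{SU}: start-up cost incurred in scenario s, period t.
\[
  \max \;\; \sum_{s}\pi_{s}\sum_{t}
  \Bigl[\lambda_{st}\bigl(p^{\mathrm{T}}_{st}+p^{\mathrm{W}}_{st}\bigr)
        - C\bigl(p^{\mathrm{T}}_{st}\bigr) - c^{\mathrm{SU}}_{st}\Bigr]
\]
% subject to unit-commitment, ramp up/down and minimum up/down-time constraints.
```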
Abstract:
This thesis deals with the problem of instantaneous frequency (IF) estimation of sinusoidal signals. This topic plays a significant role in signal processing and communications. Depending on the type of the signal, two major approaches are considered. For IF estimation of single-tone or digitally modulated sinusoidal signals (such as frequency shift keying signals) the approach of digital phase-locked loops (DPLLs) is considered; this is Part I of the thesis. For FM signals the approach of time-frequency analysis is considered; this is Part II of the thesis. In Part I we utilize sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) has introduced significant advantages over other existing DPLLs. In the last 10 years many efforts have been made to improve DTL performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. The Hilbert transformer can be realized approximately using a finite impulse response (FIR) digital filter. This realization introduces further complexity in the loop, in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift, giving rise to the time-delay digital tanlock loop (TDTL). Fixed-point theorems are used to analyze the behavior of the new loop. As such, the TDTL combines the two major approaches in DPLLs: the non-linear approach of the sinusoidal DPLL based on fixed-point analysis, and the linear tanlock approach based on arctan phase detection. The TDTL preserves the main advantages of the DTL despite its reduced structure. An application of the TDTL to FSK demodulation is also considered. The idea of replacing the HT by a time delay may be of interest in other signal processing systems. Hence we have analyzed and compared the behavior of the HT and the time delay in the presence of additive Gaussian noise. Based on this analysis, the behavior of the first- and second-order TDTLs has been analyzed in additive Gaussian noise. Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are of importance in synthetic and real-life applications. An example is the frequency-modulated (FM) signals widely used in communication systems. Part II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques must be utilized. For the purpose of instantaneous frequency estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis. This approach is computationally less expensive and more effective in dealing with multicomponent signals, which are the main aim of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain. Multicomponent signals can be identified by multiple energy peaks in the time-frequency domain.
Many real-life and synthetic signals are of a multicomponent nature, and there is little in the literature concerning IF estimation of such signals. This is why we have concentrated on multicomponent signals in Part II. An adaptive algorithm for IF estimation using quadratic time-frequency distributions has been analyzed. A class of time-frequency distributions that are more suitable for this purpose has been proposed. The kernels of this class are time-only, or one-dimensional, rather than time-lag (two-dimensional) kernels. Hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the adaptive IF algorithm and have proved to be efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
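The key step behind the TDTL described above, replacing the Hilbert-transformer quadrature branch by a time delay, can be sketched as follows; the delay length and the test signal are illustrative assumptions, and the actual non-uniform-sampling loop is not reproduced here.

```python
import numpy as np

def phase_from_delay(x_now, x_delayed, omega_tau):
    """Delay-based quadrature: recover the phase of sin(phi) from the current
    sample and a sample delayed by tau, instead of a Hilbert-transformer branch.
    Uses sin(phi - w*tau) = sin(phi)cos(w*tau) - cos(phi)sin(w*tau), solved for cos(phi)."""
    cos_phi = (x_now * np.cos(omega_tau) - x_delayed) / np.sin(omega_tau)
    return np.arctan2(x_now, cos_phi)          # wrapped phase estimate in (-pi, pi]

# quick check: phase of a 50 Hz tone sampled at 1 kHz with a 5-sample delay
fs, f0, tau_samples = 1000.0, 50.0, 5
t = np.arange(200) / fs
x = np.sin(2 * np.pi * f0 * t + 0.7)
omega_tau = 2 * np.pi * f0 * tau_samples / fs
est_phase = phase_from_delay(x[tau_samples:], x[:-tau_samples], omega_tau)
```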
Abstract:
Introduction: Eccentric exercise (EE) is a commonly used treatment for Achilles tendinopathy. While vibrations in the 8–12 Hz frequency range generated during eccentric muscle actions have been put forward as a potential mechanism for the beneficial effect of EE, the optimal loading parameters required to expedite recovery are currently unknown. Alfredson's original protocol employed 90 repetitions of eccentric loading; however, abbreviated protocols consisting of fewer repetitions (typically 45) have been developed, albeit with less beneficial effect. Given that 8–12 Hz vibrations generated during isometric muscle actions have previously been shown to increase with fatigue, this research evaluated the effect of exercise repetition on motor output vibrations generated during EE by investigating the frequency characteristics of the ground reaction force (GRF) recorded throughout the 90 repetitions of Alfredson's protocol. Methods: Nine healthy adult males performed six sets (15 repetitions per set) of eccentric ankle exercise. GRF was recorded at a frequency of 1000 Hz throughout the exercise protocol. The frequency power spectrum of the resultant GRF was calculated and normalized to total power. Relative spectral power was summed over 1 Hz windows within the frequency range 7.5–11.5 Hz. The effect of each additional exercise set (15 repetitions) on the relative power within each window was investigated using a general linear modelling approach. Results: The magnitude of peak relative power within the 7.5–11.5 Hz bandwidth increased across the six exercise sets, from 0.03 in exercise set one to 0.12 in exercise set six (P < 0.05). Following the 4th set of exercise the frequency at which peak relative power occurred shifted from 9 to 10 Hz. Discussion: This study has demonstrated that successive repetitions of eccentric loading over six exercise sets result in an increase in the amplitude of motor output vibrations in the 7.5–11.5 Hz bandwidth, with an increase in the frequency of these vibrations occurring after the 4th set (60th repetition). These findings are consistent with previous studies of muscle fatigue. Assuming that the magnitude and frequency of these vibrations represent important stimuli for tendon remodelling, as hypothesized in the literature, the findings of this study question the role of abbreviated EE protocols and raise the question: can EE protocols for tendinopathy be optimized by performing eccentric loading to fatigue?
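The spectral analysis described in the Methods can be reproduced in outline as follows; the detrending step and the exact window edges are assumptions beyond what the abstract states.

```python
import numpy as np

def relative_band_power(grf, fs=1000.0,
                        bands=((7.5, 8.5), (8.5, 9.5), (9.5, 10.5), (10.5, 11.5))):
    """Relative spectral power of the resultant GRF in 1 Hz windows within
    7.5-11.5 Hz, normalized to total spectral power (mean removed first)."""
    x = np.asarray(grf, dtype=float) - np.mean(grf)   # remove DC offset (assumed)
    spec = np.abs(np.fft.rfft(x)) ** 2                # power spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    rel = spec / spec.sum()                           # normalize to total power
    return {f"{lo}-{hi} Hz": rel[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands}
```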
Abstract:
Identifying railway capacity is an important task that can establish "in principle" whether the network can handle an intended traffic flow, and whether there is any free capacity left for additional train services. Capacity determination techniques can also be used to identify how best to improve an existing network, and at the least cost. In this article an optimization approach is applied to a case study of the Iran national railway, in order to identify its current capacity and to optimally expand it under a variety of technical conditions. This railway is very important in Iran and will be upgraded extensively in the coming years; hence the conclusions of this article may help in that endeavor. A sensitivity analysis is recommended to evaluate a wider range of possible scenarios, so that more useful lower and upper bounds can be provided for the performance of the system.
Abstract:
Conservation decision tools based on cost-effectiveness analysis are used to assess threat management strategies for improving species persistence. These approaches rank alternative strategies by their benefit-to-cost ratio but may fail to identify the optimal sets of strategies to implement under limited budgets because they do not account for redundancies. We devised a multiobjective optimization approach in which the complementarity principle is applied to identify the sets of threat management strategies that protect the most species for any budget. We used our approach to prioritize threat management strategies for 53 species of conservation concern in the Pilbara, Australia. We followed a structured elicitation approach to collect information on the benefits and costs of implementing 17 different conservation strategies during a 3-day workshop with 49 stakeholders and experts in the biodiversity, conservation, and management of the Pilbara. We compared the performance of our complementarity-based priority threat management approach with a current cost-effectiveness ranking approach. A complementary set of 3 strategies (domestic herbivore management; fire management and research; and sanctuaries) provided all species with >50% chance of persistence for $4.7 million/year over 20 years. Achieving the same result cost almost twice as much ($9.71 million/year) when strategies were selected by their cost-effectiveness ranks alone. Our results show that accounting for the complementarity of management benefits has the potential to double the impact of priority threat management approaches.
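With only 17 candidate strategies, complementary sets can even be enumerated outright. The sketch below does exactly that under an assumed data layout (one persistence probability per species per strategy, a species counted as secured if any chosen strategy pushes it over the threshold); it illustrates the complementarity principle rather than reproducing the authors' decision tool.

```python
from itertools import combinations

def best_complementary_set(strategies, budget, threshold=0.5):
    """Brute-force complementarity search: among all strategy combinations that
    fit the budget, return the one securing the most species.
    `strategies` maps name -> (annual_cost, {species: persistence_probability})."""
    names = list(strategies)
    best = (-1, float("inf"), ())                 # (species secured, cost, chosen set)
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            cost = sum(strategies[n][0] for n in combo)
            if cost > budget:
                continue
            secured = len({sp for n in combo
                           for sp, p in strategies[n][1].items() if p > threshold})
            if (secured, -cost) > (best[0], -best[1]):   # more species, then cheaper
                best = (secured, cost, combo)
    return best
```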
Abstract:
Single layered transition metal dichalcogenides have attracted tremendous research interest due to their structural phase diversities. By using a global optimization approach, we have discovered a new phase of transition metal dichalcogenides (labelled as T′′), which is confirmed to be energetically, dynamically and kinetically stable by our first-principles calculations. The new T′′ MoS2 phase exhibits an intrinsic quantum spin Hall (QSH) effect with a nontrivial gap as large as 0.42 eV, suggesting that a two-dimensional (2D) topological insulator can be achieved at room temperature. Most interestingly, there is a topological phase transition simply driven by a small tensile strain of up to 2%. Furthermore, all the known MX2 (M = Mo or W; X = S, Se or Te) monolayers in the new T′′ phase unambiguously display similar band topologies and strain controlled topological phase transitions. Our findings greatly enrich the 2D families of transition metal dichalcogenides and offer a feasible way to control the electronic states of 2D topological insulators for the fabrication of high-speed spintronics devices.
Abstract:
Submergence of land is a major impact of large hydropower projects. Such projects are often also dogged by siltation, delays in construction and heavy debt burdens, factors that are not considered in the project planning exercise. A simple constrained optimization model for the benefit-cost analysis of large hydropower projects that considers these features is proposed. The model is then applied to two sites in India. Using the potential productivity of an energy plantation on the submergible land is suggested as a reasonable approach to estimating the opportunity cost of submergence. Optimum project dimensions are calculated for various scenarios. Results indicate that the inclusion of submergence cost may lead to a substantial reduction in net present value and hence in project viability. Parameters such as project lifespan, construction time, discount rate and external debt burden are also significant. The designs proposed by the planners are found to be uneconomic, while even the optimal design may not be viable for more typical scenarios. The concept of energy opportunity cost is useful for preliminary screening; some projects may require more detailed calculations. The optimization approach helps identify significant trade-offs between energy generation and land availability.
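A toy version of such a benefit-cost calculation, with the submerged land charged at the foregone yield of an energy plantation, might look like the following; every figure and functional form is an illustrative guess rather than data from the two case-study sites.

```python
import numpy as np

def npv(capacity_mw, rate=0.08, life=50, build_years=5,
        price_per_kwh=0.05, plant_factor=0.45,
        land_coeff=8.0, land_exp=1.4, plantation_value_per_ha=250.0,
        capex_per_mw=1.2e6):
    """Toy benefit-cost model: discounted energy revenue minus capital cost, with
    submerged land charged every year at the foregone plantation yield. Land is
    assumed to grow faster than linearly with project size, so an interior
    optimum exists. All parameter values are illustrative guesses."""
    annual_revenue = capacity_mw * plant_factor * 8760.0 * 1000.0 * price_per_kwh
    submerged_ha = land_coeff * capacity_mw ** land_exp
    annual_land_cost = submerged_ha * plantation_value_per_ha
    years = np.arange(1, life + 1) + build_years          # benefits begin after construction
    return float((annual_revenue - annual_land_cost) * ((1 + rate) ** -years).sum()
                 - capacity_mw * capex_per_mw)

# crude screening of the NPV-maximizing project dimension
sizes = np.linspace(100, 4000, 79)
best_size = max(sizes, key=npv)
```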
Abstract:
Methodologies are presented for minimization of risk in a river water quality management problem. A risk minimization model is developed to minimize the risk of low water quality along a river in the face of conflict among various stakeholders. The model consists of three parts: a water quality simulation model, a risk evaluation model with uncertainty analysis, and an optimization model. Sensitivity analysis, First Order Reliability Analysis (FORA) and Monte Carlo simulations are performed to evaluate the fuzzy risk of low water quality. Fuzzy multiobjective programming is used to formulate the multiobjective model. Probabilistic Global Search Lausanne (PGSL), a recently developed global search algorithm, is used for solving the resulting non-linear optimization problem. The algorithm is based on the assumption that better sets of points are more likely to be found in the neighborhood of good sets of points, and it therefore intensifies the search in the regions that contain good solutions. Another model is developed for risk minimization, which deals only with the moments of the generated probability density functions of the water quality indicators. Suitable skewness values of the water quality indicators, which lead to low fuzzy risk, are identified. Results of the models are compared with the results of a deterministic fuzzy waste load allocation model (FWLAM) when the methodologies are applied to the case study of the Tunga-Bhadra river system in southern India, with a steady-state BOD-DO model. The fractional removal levels resulting from the risk minimization model are slightly higher, but result in a significant reduction in the risk of low water quality.
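The Monte Carlo evaluation of a fuzzy risk of low water quality can be pictured roughly as below; the dissolved-oxygen thresholds and the linear membership function are assumptions made for this sketch, not the paper's fuzzy sets.

```python
import numpy as np

def fuzzy_risk_of_low_quality(sample_do, do_low=4.0, do_safe=6.0, n=100_000, seed=None):
    """Monte Carlo estimate of the fuzzy risk of low water quality: draw
    dissolved-oxygen (DO) realizations from `sample_do`, map each to a membership
    in the fuzzy set 'low water quality' (1 below do_low, 0 above do_safe,
    linear in between), and average the memberships."""
    rng = np.random.default_rng(seed)
    do = sample_do(n, rng)                                  # uncertain BOD-DO model output
    membership = np.clip((do_safe - do) / (do_safe - do_low), 0.0, 1.0)
    return float(membership.mean())

# usage: DO modeled here as normal with mean 5.5 mg/L and sd 0.8 mg/L (illustrative)
risk = fuzzy_risk_of_low_quality(lambda n, rng: rng.normal(5.5, 0.8, n))
```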
Abstract:
A linear optimization model was used to calculate seven wood procurement scenarios for the years 1990, 2000 and 2010. Productivity and cost functions for seven cutting methods, five terrain transport methods, three long-distance transport methods and various work supervision and scaling methods were calculated from available work study reports. All methods are based on the Nordic cut-to-length system. Finland was divided into three parts to describe the harvesting conditions. Twenty imaginary wood processing points and their wood procurement areas were created for these areas. The procurement systems, which consist of the harvesting conditions and work productivity functions, were described as a simulation model. In the LP model the wood procurement system has to fulfil the volume and wood assortment requirements of the processing points while minimizing the procurement cost. The model consists of 862 variables and 560 constraints. Results show that it is economical to increase the share of mechanized work in harvesting. Cost increment alternatives have only a small effect on the profitability of manual work. The areas of later thinnings and of seed tree and shelterwood cuttings increase at the expense of first thinnings. In mechanized work one method, the 10-tonne single-grip harvester with forwarder, gains an advantage over the other methods. Forwarder working hours decrease, in contrast to those of the harvester. There is only little need to increase the number of harvesters and trucks, or their drivers, from today's level. Quite large fluctuations in the level of procurement and cost can be handled with a constant number of machines, by varying the number of seasonal workers and by running the machines in two shifts. This is possible if some environmental problems of large-scale summertime harvesting can be solved.
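A drastically reduced version of such a procurement LP (two supply regions, two assortments, one set of demand constraints) can be written with an off-the-shelf solver as below; the costs, demands and supply limits are made-up numbers, and the real model's 862 variables and 560 constraints are not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

# Toy procurement LP: choose harvest volumes x[region, assortment] (m3) so that
# the mill's assortment demands are met at minimum total procurement cost.
cost = np.array([18.0, 25.0,    # region 1: pulpwood, sawlogs (EUR/m3, cutting + transport)
                 21.0, 23.0])   # region 2: pulpwood, sawlogs
# demands: pulpwood >= 40,000 m3 and sawlogs >= 25,000 m3, written as <= with minus signs
A_ub = -np.array([[1, 0, 1, 0],
                  [0, 1, 0, 1]], dtype=float)
b_ub = -np.array([40_000.0, 25_000.0])
bounds = [(0, 50_000)] * 4      # regional supply limit per assortment
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x, res.fun)           # optimal volumes and the minimum procurement cost
```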
Abstract:
A new linear algebraic approach for identification of a nonminimum phase FIR system of known order using only higher order (>2) cumulants of the output process is proposed. It is first shown that a matrix formed from a set of cumulants of arbitrary order can be expressed as a product of structured matrices. The subspaces of this matrix are then used to obtain the parameters of the FIR system using a set of linear equations. Theoretical analysis and numerical simulation studies are presented to characterize the performance of the proposed methods.
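The building blocks of such a method are sample estimates of the higher-order output cumulants; a minimal third-order estimator (not the full structured-matrix and subspace machinery) is sketched below.

```python
import numpy as np

def third_order_cumulant(x, tau1, tau2):
    """Sample estimate of the third-order cumulant
    c3(tau1, tau2) = E[x(n) x(n+tau1) x(n+tau2)] for a zero-mean process,
    averaged over all sample indices for which the three lags are valid."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    lo, hi = max(0, -tau1, -tau2), min(n, n - tau1, n - tau2)
    return float(np.mean(x[lo:hi] * x[lo + tau1:hi + tau1] * x[lo + tau2:hi + tau2]))
```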