906 results for Slot-based task-splitting algorithms


Relevance:

100.00%

Publisher:

Abstract:

In this paper, we consider the problem of selecting, for any given positive integer k, the top-k nodes in a social network, based on a certain measure appropriate for the social network. This problem is relevant in many settings such as analysis of co-authorship networks, diffusion of information, viral marketing, etc. However, in most situations, this problem turns out to be NP-hard. The existing approaches for solving this problem are based on approximation algorithms and assume that the objective function is sub-modular. In this paper, we propose a novel and intuitive algorithm based on the Shapley value for efficiently computing an approximate solution to this problem. Our proposed algorithm does not use the sub-modularity of the underlying objective function and hence is a general approach. We demonstrate the efficacy of the algorithm using a co-authorship data set from e-print arXiv (www.arxiv.org) comprising 8,361 authors.
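To make the approach concrete, here is a minimal sketch of Monte Carlo Shapley-value estimation for ranking nodes, using a simple neighbourhood-coverage game as a stand-in objective; the paper's actual game, measure, and sampling scheme may differ.

```python
# Minimal sketch: Monte Carlo estimation of Shapley values for a simple
# "coverage" game on a graph (a node contributes the neighbours it newly
# covers), then ranking nodes to pick the top-k. Illustrative only; the
# paper's exact game and sampling scheme may differ.
import random

def shapley_topk(adj, k, num_samples=2000, seed=0):
    rng = random.Random(seed)
    nodes = list(adj)
    phi = {v: 0.0 for v in nodes}
    for _ in range(num_samples):
        rng.shuffle(nodes)
        seen = set()                       # nodes covered so far along this permutation
        for v in nodes:
            reach = {v} | set(adj[v])      # node plus its neighbours
            phi[v] += len(reach - seen)    # marginal coverage contribution
            seen |= reach
    for v in phi:
        phi[v] /= num_samples
    return sorted(phi, key=phi.get, reverse=True)[:k]

if __name__ == "__main__":
    graph = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {4}}
    print(shapley_topk(graph, k=2))
```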

Relevance:

100.00%

Publisher:

Abstract:

Facet-based sentiment analysis involves discovering the latent facets, sentiments and their associations. Traditional facet-based sentiment analysis algorithms typically perform the various tasks in sequence, and fail to take advantage of the mutual reinforcement of the tasks. Additionally, inferring sentiment levels typically requires domain knowledge or human intervention. In this paper, we propose a series of probabilistic models that jointly discover latent facets and sentiment topics, and also order the sentiment topics with respect to a multi-point scale, in a language- and domain-independent manner. This is achieved by simultaneously capturing both short-range syntactic structure and long-range semantic dependencies between the sentiment and facet words. The models further incorporate coherence in reviews, where reviewers dwell on one facet or sentiment level before moving on, for more accurate facet and sentiment discovery. For reviews which are supplemented with ratings, our models automatically order the latent sentiment topics, without requiring seed words or domain knowledge. To the best of our knowledge, our work is the first attempt to combine the notions of syntactic and semantic dependencies in the domain of review mining; the concept of facet and sentiment coherence has not been explored earlier either. Extensive experimental results on real-world review data show that the proposed models outperform various state-of-the-art baselines for facet-based sentiment analysis.

Relevance:

100.00%

Publisher:

Abstract:

We are at the cusp of a historic transformation of both the communication system and the electricity system. This creates challenges as well as opportunities for the study of networked systems. Problems in these systems typically involve a huge number of end points that require intelligent coordination in a distributed manner. In this thesis, we develop models, theories, and scalable distributed optimization and control algorithms to overcome these challenges.

This thesis focuses on two specific areas: multi-path TCP (Transmission Control Protocol) and electricity distribution system operation and control. Multi-path TCP (MP-TCP) is a TCP extension that allows a single data stream to be split across multiple paths. MP-TCP has the potential to greatly improve the reliability as well as the efficiency of communication devices. We propose a fluid model for a large class of MP-TCP algorithms and identify design criteria that guarantee the existence, uniqueness, and stability of system equilibrium. We clarify how algorithm parameters impact TCP-friendliness, responsiveness, and window oscillation, and demonstrate an inevitable tradeoff among these properties. We discuss the implications of these properties for the behavior of existing algorithms and motivate a new algorithm, Balia (balanced linked adaptation), which generalizes existing algorithms and strikes a good balance among TCP-friendliness, responsiveness, and window oscillation. We have implemented Balia in the Linux kernel and use our prototype to compare it with existing MP-TCP algorithms.
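For illustration only, the toy sketch below shows a coupled multipath window update in which each subflow's increase is scaled by its share of the aggregate rate. This is a generic illustration of the coupling idea behind MP-TCP algorithms, not the Balia update rule itself; all constants and the fixed-RTT assumption are mine.

```python
# Toy sketch of a coupled multipath congestion-window update. It is NOT
# the Balia rule from the thesis; it only illustrates the idea that the
# per-subflow increase is scaled by that subflow's share of the total
# rate, which is what couples the subflows and shapes TCP-friendliness.
class Subflow:
    def __init__(self, rtt, cwnd=10.0):
        self.rtt = rtt      # round-trip time in seconds (assumed fixed here)
        self.cwnd = cwnd    # congestion window in packets

def on_ack(flows, i):
    """Increase subflow i's window on an ACK, scaled by its rate share."""
    rates = [f.cwnd / f.rtt for f in flows]
    share = rates[i] / sum(rates)
    flows[i].cwnd += share / flows[i].cwnd        # coupled additive increase

def on_loss(flows, i):
    """Halve the window of the subflow that saw the loss (TCP-like decrease)."""
    flows[i].cwnd = max(1.0, flows[i].cwnd / 2.0)

if __name__ == "__main__":
    flows = [Subflow(rtt=0.02), Subflow(rtt=0.10)]
    for _ in range(1000):
        on_ack(flows, 0)
        on_ack(flows, 1)
    print([round(f.cwnd, 1) for f in flows])
```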

Our second focus is on designing computationally efficient algorithms for electricity distribution system operation and control. First, we develop efficient algorithms for feeder reconfiguration in distribution networks. The feeder reconfiguration problem chooses the on/off status of the switches in a distribution network in order to minimize a certain cost such as power loss. It is a mixed-integer nonlinear program and hence hard to solve. We propose a heuristic algorithm based on the recently developed convex relaxation of the optimal power flow problem. The algorithm is efficient and successfully computes an optimal configuration on all networks that we have tested. Moreover, we prove that the algorithm solves the feeder reconfiguration problem optimally under certain conditions. We also propose an even more efficient algorithm that incurs an optimality loss of less than 3% on the test networks.

Second, we develop efficient distributed algorithms that solve the optimal power flow (OPF) problem on distribution networks. The OPF problem determines a network operating point that minimizes a certain objective such as generation cost or power loss. Traditionally, OPF is solved in a centralized manner. With increasing penetration of volatile renewable energy resources in distribution systems, we need faster and distributed solutions for real-time feedback control. This is difficult because the power flow equations are nonlinear and Kirchhoff's law is global. We propose solutions for both balanced and unbalanced radial distribution networks. They exploit recent results showing that a globally optimal solution of OPF over a radial network can be obtained through a second-order cone program (SOCP) or semidefinite program (SDP) relaxation. Our distributed algorithms are based on the alternating direction method of multipliers (ADMM), but unlike standard ADMM-based distributed OPF algorithms that require solving optimization subproblems using iterative methods, the proposed solutions exploit problem structure to greatly reduce computation time. Specifically, for balanced networks, our decomposition allows us to derive closed-form solutions for these subproblems and speeds up convergence by 1000x in simulations. For unbalanced networks, the subproblems reduce to either closed-form solutions or eigenvalue problems whose size remains constant as the network scales up, reducing computation time by 100x compared with iterative methods.
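The sketch below shows only the generic scaled-ADMM consensus skeleton that such distributed methods build on, applied to a toy quadratic problem; the thesis' closed-form OPF subproblem updates are not reproduced here.

```python
# Generic scaled-ADMM consensus iteration on a toy problem:
#   minimize sum_i (1/2)(x - a_i)^2  subject to all agents agreeing on x.
# The thesis' distributed OPF derives closed-form updates for its own
# subproblems; this sketch only shows the ADMM skeleton such methods share.
import numpy as np

def admm_consensus(a, rho=1.0, iters=100):
    a = np.asarray(a, dtype=float)
    n = len(a)
    x = np.zeros(n)        # local copies
    z = 0.0                # global (consensus) variable
    u = np.zeros(n)        # scaled dual variables
    for _ in range(iters):
        # x-update: each agent solves min_x (1/2)(x - a_i)^2 + (rho/2)(x - z + u_i)^2
        x = (a + rho * (z - u)) / (1.0 + rho)
        # z-update: average of local copies plus scaled duals
        z = np.mean(x + u)
        # dual update
        u = u + x - z
    return z

if __name__ == "__main__":
    print(admm_consensus([1.0, 2.0, 6.0]))   # converges to the mean, 3.0
```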

Relevance:

100.00%

Publisher:

Abstract:

This work presents a general architecture for the evolution of analog electronic circuits based on genetic algorithms. Its logical organization favors the interoperability of its main components, including the possibility of replacing them or improving their internal functionality. The implemented platform uses extrinsic evolution, that is, evolution based on circuit simulation, and aims at ease and flexibility of experimentation. It allows various components to be interconnected to the nodes of an electronic circuit that is to be synthesized or adapted. The genetic algorithm technique is used to search for the best way to interconnect the components so as to implement the desired function. This version of the platform uses the MATLAB environment with a Genetic Algorithms toolbox and PSpice as the circuit simulator. The case studies carried out produced results that demonstrate the platform's potential for the development of adaptive electronic circuits.
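A minimal sketch of the kind of GA loop such a platform runs is given below, with a placeholder fitness function standing in for the PSpice simulation; the genome encoding, population sizes, and operators shown here are illustrative assumptions, not the platform's actual configuration.

```python
# Minimal sketch of a GA loop for extrinsic circuit evolution. The genome
# encodes, for each available component, the pair of circuit nodes it
# connects. `simulate_and_score` is a hypothetical placeholder standing in
# for writing a netlist and running the circuit simulator.
import random

NUM_COMPONENTS = 8     # components available to the evolver (assumption)
NUM_NODES = 5          # circuit nodes, including input/output/ground (assumption)

def random_genome(rng):
    return [(rng.randrange(NUM_NODES), rng.randrange(NUM_NODES))
            for _ in range(NUM_COMPONENTS)]

def simulate_and_score(genome):
    """Placeholder fitness: in the real platform this would build a netlist,
    run the circuit simulator, and compare the response to the target."""
    return -sum(abs(a - b) for a, b in genome)   # dummy objective

def evolve(pop_size=40, generations=60, mutation_rate=0.1, seed=1):
    rng = random.Random(seed)
    pop = [random_genome(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=simulate_and_score, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, NUM_COMPONENTS)
            child = p1[:cut] + p2[cut:]                 # one-point crossover
            if rng.random() < mutation_rate:            # mutate one connection
                i = rng.randrange(NUM_COMPONENTS)
                child[i] = (rng.randrange(NUM_NODES), rng.randrange(NUM_NODES))
            children.append(child)
        pop = survivors + children
    return max(pop, key=simulate_and_score)

if __name__ == "__main__":
    print(evolve())
```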

Relevance:

100.00%

Publisher:

Abstract:

A polarization modulator based on splitting with a Savart plate and rotation of an analyzer for a moiré system with grating imaging is presented, and its modulation principle is analyzed. The polarization modulator is simple and achromatic. It is composed of a polarizer, a Savart plate, and an analyzer. The polarizer and the Savart plate are placed in front of the index grating to split the image of the scale grating in the moiré system. The analyzer is placed behind the grating and rotated to realize the modulation of the moiré signal. The analyzer can be rotated either continuously at high speed or step by step at low speed to form different modulation modes. The polarization modulator makes the moiré system insensitive to changes in the initial intensity. In experiments, we verified the usefulness of the polarization modulator.

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a novel vision chip for high-speed target tracking. Two concise algorithms for high-speed target tracking are developed. The algorithms include some basic operations that can be used to process the real-time image information during target tracking. The vision chip is implemented based on these algorithms and a row-parallel architecture. A prototype chip with 64 × 64 pixels is fabricated in a 0.35 μm complementary metal-oxide-semiconductor (CMOS) process with a 4.5 × 2.5 mm² area. It operates at a rate of 1000 frames per second with a 10 MHz main clock. The experimental results demonstrate that a high-speed target can be tracked against a complex static background, and that a high-speed target can be tracked among other high-speed objects against a clean background.
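The abstract does not spell out the two tracking algorithms, so the sketch below only illustrates, in software, the kind of basic row-parallel operations involved: thresholding a frame difference and locating the target from row/column projections. It is a generic illustration, not the chip's actual algorithms.

```python
# Illustrative software analogue of the kind of basic operations a
# row-parallel tracking chip performs: threshold the frame difference,
# then locate the target from row/column projections.
import numpy as np

def track_centroid(prev_frame, frame, threshold=30):
    """Return the (row, col) centroid of pixels that changed by more than threshold."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > threshold
    if not mask.any():
        return None                      # no moving target detected
    row_proj = mask.sum(axis=1)          # per-row counts (row-parallel step)
    col_proj = mask.sum(axis=0)
    rows = np.arange(mask.shape[0])
    cols = np.arange(mask.shape[1])
    r = int(np.round((rows * row_proj).sum() / row_proj.sum()))
    c = int(np.round((cols * col_proj).sum() / col_proj.sum()))
    return r, c

if __name__ == "__main__":
    a = np.zeros((64, 64), dtype=np.uint8)
    b = a.copy()
    b[10:14, 20:24] = 255                # a small bright target appears
    print(track_centroid(a, b))          # approximately the target centre
```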

Relevance:

100.00%

Publisher:

Abstract:

Continuous advances in information technology have steadily broadened the application domains of software products, while the scale of software products has grown rapidly. Software development has evolved from the early workshop style to large-scale, engineering-oriented development, and resource scheduling has therefore become one of the core research topics in software project management.

Compared with traditional industrial projects, software projects have two salient characteristics: they depend heavily on the capabilities of human resources, and they involve many uncertain factors during development, that is, high risk. These characteristics mean that traditional industrial scheduling methods do not transfer well to software project management and pose new challenges for research on optimized resource scheduling in software projects; scheduling methods tailored to the characteristics of software projects are needed to support project management.

This thesis analyzes and describes the structure of software projects, builds a model of their core elements and, taking human-resource capability and risk as the driving factors, optimizes the allocation and scheduling of the two core resources in software projects, human resources and the project buffer, in order to improve resource utilization efficiency and the stability of project execution. The main contributions are:

(1) A research framework for optimized resource scheduling in software projects, QMMT, and a core-element model of software projects, PTHR. The QMMT framework consists of four modules: Question Driven, Model Description, Method Research, and Tool Validation. The modules are connected both sequentially and through feedback, giving the framework good adaptability and extensibility. Practice shows that QMMT provides effective guidance for studying resource-scheduling problems in software projects, and the research questions in this thesis all follow it. By defining and describing the elements of a software project and the relations among them, we build the core-element model PTHR, which formally defines four elements, Project, Task, Human Resource, and Risk, together with their relations. PTHR covers the core elements of software projects, is extensible, and provides underlying support for problem analysis, algorithm design, process arrangement, and tool development in resource scheduling; it is the base model for the sub-models used in the subsequent methods.

(2) A three-dimensional task-person matching model, 3D-THM (3 Dimensional model for Task Human Matching), and an optimized task-assignment method based on it. Task-person matching is the foundation of human-resource scheduling. 3D-THM describes the technical ability, personality, and career plan of each person, and the corresponding requirements of each task, and defines multi-factor matching algorithms to support comprehensive, optimized task-person matching. Experiments show that 3D-THM captures the task-person matching problem well and reflects the high dependence of software projects on human-resource capability. The instantiated matching method and prototype tool support resource scheduling and software process modeling, improve the efficiency of project managers, and increase staff satisfaction with task assignments.

(3) An availability-based human-resource scheduling method. Building on optimized task-person matching, this method jointly considers human-resource capability and working time. It establishes the Task Human resources Availability Constraints Model (THACM), which reflects the structural features of software projects, and uses it to automatically assign human resources and schedule the project for given sets of resources and tasks. The method addresses the low resource visibility faced by organizations with matrix structures and helps them improve human-resource utilization.

(4) PP-HAS, a task-priority-based preemptive human-resource scheduling method. To resolve the resource conflicts that are common in multi-project environments, PP-HAS (Task Priority Based Preemptive Human Resource Scheduling Method) first establishes the Value Based Task Priority Model (VBTPM), which combines schedule, cost, and quality factors, and then couples this priority model with process-agent technology. By designing a scheduling process that supports preemption, it achieves optimized human-resource scheduling through negotiation among multiple process agents. Preemption and re-planning allow dynamic, efficient use of human resources and provide decision support for resolving resource conflicts and re-planning projects.

(5) A risk-driven method for allocating the software project buffer. Proper allocation of the project buffer is an important means of reducing the impact of risk on the project schedule. We incorporate risk factors into software-project resource scheduling and, based on the characteristics of risk in software projects, establish a Reduced Risk Model (RRM). A risk-driven buffer-allocation method built on RRM trades off execution efficiency against stability. Simulation results show that, compared with the tail-concentrated buffer allocation of traditional critical-chain project management, the risk-driven method significantly reduces the frequency of plan changes during execution while having only a small effect on average project duration. Together with the project simulation tool, it helps project managers determine an appropriate buffer length and allocation scheme, improving the credibility of software project plans and the stability of their execution.
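As a purely illustrative aid, the sketch below shows a weighted three-dimensional task-person matching score in the spirit of 3D-THM (technical ability, personality, career plan); the field names, weights, and formulas are assumptions, not the thesis' actual model.

```python
# Hypothetical illustration of a three-dimensional task-person matching
# score in the spirit of 3D-THM: technical ability, personality, and
# career plan each contribute to the match. Field names and weights are
# assumptions, not the thesis' actual formulas.
from dataclasses import dataclass

@dataclass
class Person:
    skills: dict       # e.g. {"java": 4, "sql": 3}, levels 0..5
    personality: dict  # e.g. {"teamwork": 4, "detail": 5}
    career_goals: set  # e.g. {"backend", "architecture"}

@dataclass
class Task:
    skill_req: dict
    personality_req: dict
    career_tags: set

def dim_score(have, need):
    """Fraction of the required levels that the person actually meets."""
    if not need:
        return 1.0
    met = sum(min(have.get(k, 0), lvl) for k, lvl in need.items())
    return met / sum(need.values())

def match_score(p, t, w_tech=0.5, w_pers=0.3, w_career=0.2):
    career = len(p.career_goals & t.career_tags) / max(1, len(t.career_tags))
    return (w_tech * dim_score(p.skills, t.skill_req)
            + w_pers * dim_score(p.personality, t.personality_req)
            + w_career * career)

if __name__ == "__main__":
    alice = Person({"java": 4, "sql": 2}, {"teamwork": 5}, {"backend"})
    task = Task({"java": 3, "sql": 3}, {"teamwork": 4}, {"backend"})
    print(round(match_score(alice, task), 2))
```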

Relevance:

100.00%

Publisher:

Abstract:

In research on wireless sensor networks for applications such as target tracking, collaborative task-allocation mechanisms are important. Dynamic-coalition-based collaborative task allocation is event triggered and suits large-scale wireless sensor networks in which tasks occur relatively infrequently. Building on dynamic-coalition research, this paper first introduces the concepts of coalition coverage range and dormant coalition members, further eliminating redundancy among the sensor nodes detecting the same task and reducing the system's energy consumption. It then presents an update mechanism for dynamic coalitions that maintains continuity while a coalition executes a task and thereby preserves the network's detection performance to a certain extent. Finally, simulations evaluate the algorithm in terms of total energy consumption, target capture rate, and the standard deviation of detection error, and show how parameters such as the buffer-zone width affect energy consumption and detection performance.
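The toy sketch below illustrates event-triggered coalition formation with a coverage range and dormant members as described above; the selection rules, radii, and quotas are illustrative assumptions rather than the paper's algorithm.

```python
# Toy sketch of event-triggered dynamic-coalition formation: nodes within
# the coalition coverage range of a detected target join, the node with
# the most residual energy becomes the head, and redundant members beyond
# a small active quota are put to sleep. Radii and quotas are assumptions.
import math

def form_coalition(nodes, target, coverage_range=30.0, active_quota=3):
    """nodes: dict id -> {'pos': (x, y), 'energy': float}."""
    members = [nid for nid, n in nodes.items()
               if math.dist(n['pos'], target) <= coverage_range]
    if not members:
        return None
    # closest-to-target nodes stay awake; the rest become dormant members
    members.sort(key=lambda nid: math.dist(nodes[nid]['pos'], target))
    active = members[:active_quota]
    dormant = members[active_quota:]
    head = max(active, key=lambda nid: nodes[nid]['energy'])
    return {'head': head, 'active': active, 'dormant': dormant}

if __name__ == "__main__":
    nodes = {1: {'pos': (0, 0), 'energy': 0.9},
             2: {'pos': (10, 5), 'energy': 0.4},
             3: {'pos': (25, 0), 'energy': 0.7},
             4: {'pos': (90, 90), 'energy': 1.0}}
    print(form_coalition(nodes, target=(5, 5)))
```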

Relevance:

100.00%

Publisher:

Abstract:

A model for representing music scores in a form suitable for general processing by a music-analyst-programmer is proposed and implemented. Typical input to the model consists of one or more pieces of music which are encoded in a file-based score representation. File-based representations are in a form unsuited for general processing, as they do not provide a suitable level of abstraction for a programmer-analyst. Instead, a representation is created giving a programmer's view of the score. This frees the analyst-programmer from implementation details that otherwise would form a substantial barrier to progress. The score representation uses an object-oriented approach to create a natural and robust software environment for the musicologist. The system is used to explore ways in which it could benefit musicologists. Methodologies for analysing music corpora are presented in a series of analytic examples which illustrate some of the potential of this model. Proving hypotheses or performing analysis on corpora involves the construction of algorithms. Some unique aspects of using this score model for corpus-based musicology are:
- Algorithms impose a discipline which arises from the necessity for formalism.
- Automatic analysis enables musicologists to complete tasks that otherwise would be infeasible because of limitations of their energy, attentiveness, accuracy and time.
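The small sketch below suggests what such a programmer's view can look like: hypothetical Note/Part/Score classes and a one-function corpus query. The class names are assumptions for illustration, not the model's actual API.

```python
# Small sketch of the programmer's view an object-oriented score
# representation can give (class names are hypothetical): the analyst
# iterates over notes instead of parsing score files.
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class Note:
    pitch: int        # MIDI number, e.g. 60 = middle C
    duration: float   # in quarter notes

@dataclass
class Part:
    name: str
    notes: list = field(default_factory=list)

@dataclass
class Score:
    parts: list = field(default_factory=list)

def melodic_interval_histogram(part):
    """Count successive melodic intervals in semitones, a typical corpus query."""
    intervals = [b.pitch - a.pitch for a, b in zip(part.notes, part.notes[1:])]
    return Counter(intervals)

if __name__ == "__main__":
    soprano = Part("soprano", [Note(60, 1), Note(62, 1), Note(64, 2), Note(62, 1)])
    score = Score([soprano])
    print(melodic_interval_histogram(score.parts[0]))   # Counter({2: 2, -2: 1})
```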

Relevance:

100.00%

Publisher:

Abstract:

This paper discusses preconditioned Krylov subspace methods for solving large-scale linear systems that originate from oil reservoir numerical simulations. Two types of preconditioners, one based on an incomplete LU decomposition and the other based on iterative algorithms, are used together in a combination strategy in order to achieve an adaptive and efficient preconditioner. Numerical tests show that different Krylov subspace methods combined with appropriate preconditioners are able to achieve optimal performance.
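As a concrete, generic illustration of the ILU-plus-Krylov combination (not the paper's adaptive strategy or reservoir matrices), the following SciPy sketch preconditions GMRES with an incomplete LU factorization.

```python
# Illustrative SciPy sketch of the ILU-preconditioned Krylov idea: build an
# incomplete LU factorization and hand it to GMRES as a preconditioner.
# The paper's adaptive combination strategy and reservoir matrices differ;
# this only shows the basic mechanics on a simple test matrix.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres, spilu, LinearOperator

n = 200
# A simple sparse tridiagonal test matrix, standing in for a
# reservoir-simulation Jacobian.
A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spilu(A, drop_tol=1e-4, fill_factor=10)        # incomplete LU factors
M = LinearOperator((n, n), matvec=ilu.solve)         # preconditioner M ~ A^{-1}

x, info = gmres(A, b, M=M)
print("converged" if info == 0 else f"GMRES info = {info}",
      "residual:", np.linalg.norm(b - A @ x))
```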

Relevance:

100.00%

Publisher:

Abstract:

The identification of non-linear systems using only observed finite datasets has become a mature research area over the last two decades. A class of linear-in-the-parameter models with universal approximation capabilities has been intensively studied and widely used due to the availability of many linear-learning algorithms and their inherent convergence conditions. This article presents a systematic overview of basic research on model selection approaches for linear-in-the-parameter models. One of the fundamental problems in non-linear system identification is to find the minimal model with the best generalisation performance from observational data only. The important concepts for achieving good model generalisation used in various non-linear system-identification algorithms are first reviewed, including Bayesian parameter regularisation and model selection criteria based on cross-validation and experimental design. A significant advance in machine learning has been the development of the support vector machine as a means for identifying kernel models based on the structural risk minimisation principle. Developments in convex optimisation-based model construction algorithms, including support vector regression algorithms, are outlined. Input selection algorithms and on-line system identification algorithms are also included in this review. Finally, some industrial applications of non-linear models are discussed.
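The following scikit-learn sketch is a generic illustration of two themes in the review, kernel (support vector) regression and model selection by cross-validation, applied to a toy non-linear system; it is not any specific algorithm from the article.

```python
# Illustrative scikit-learn sketch: identify a toy non-linear mapping with
# support vector regression, selecting hyperparameters by cross-validation.
# Generic example only, not a specific algorithm from the review.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
# Toy non-linear system: the output depends non-linearly on two input samples.
u = rng.uniform(-1, 1, 300)
y = 0.6 * np.sin(np.pi * u[:-1]) + 0.3 * u[1:] ** 2 + 0.05 * rng.standard_normal(299)
X = np.column_stack([u[:-1], u[1:]])     # regressors: u(k-1), u(k)

param_grid = {"C": [1, 10, 100], "gamma": [0.1, 1.0], "epsilon": [0.01, 0.1]}
search = GridSearchCV(SVR(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)
print("best params:", search.best_params_,
      "CV score:", round(search.best_score_, 3))
```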

Relevance:

100.00%

Publisher:

Abstract:

A number of high-performance VLSI architectures for real-time image coding applications are described. In particular, attention is focused on circuits for computing the 2-D DCT (discrete cosine transform) and for 2-D vector quantization. The former circuits are based on Winograd algorithms and comprise a number of bit-level systolic arrays with a bit-serial, word-parallel input. The latter circuits exhibit a similar data organization and consist of a number of inner product array circuits. Both circuits are highly regular and allow extremely high data rates to be achieved through extensive use of parallelism.
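The NumPy sketch below shows the arithmetic such circuits implement, a separable 2-D DCT and nearest-codeword vector quantization, purely as a software reference; it does not reflect the Winograd factorizations or bit-level systolic data flow of the paper.

```python
# NumPy sketch of the arithmetic the described circuits implement: a
# separable 2-D DCT on an 8x8 block and nearest-codeword vector
# quantization. This shows the computations only, not the Winograd
# factorizations or bit-level systolic data flow of the paper.
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix C, so that Y = C @ X @ C.T is the 2-D DCT."""
    k = np.arange(n).reshape(-1, 1)
    m = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def dct2(block):
    c = dct_matrix(block.shape[0])
    return c @ block @ c.T

def vq_encode(vectors, codebook):
    """Index of the nearest codeword (squared Euclidean distance) per vector."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

if __name__ == "__main__":
    block = np.arange(64, dtype=float).reshape(8, 8)
    print(np.round(dct2(block)[:2, :2], 2))            # low-frequency coefficients
    codebook = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]])
    print(vq_encode(np.array([[0.9, 1.2], [3.5, 4.1]]), codebook))  # [1 2]
```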

Relevance:

100.00%

Publisher:

Abstract:

Background: Potentially inappropriate prescribing (PIP) in older people is common in primary care and can result in increased morbidity, adverse drug events, hospitalizations and mortality. The prevalence of PIP in Ireland is estimated at 36% with an associated expenditure of over €45 million in 2007. The aim of this paper is to describe the application of the Medical Research Council (MRC) framework to the development of an intervention to decrease PIP in Irish primary care.

Methods: The MRC framework for the design and evaluation of complex interventions guided the development of the study intervention. In the development stage, literature was reviewed and combined with information obtained from experts in the field using a consensus based methodology and patient cases to define the main components of the intervention. In the pilot stage, five GPs tested the proposed intervention. Qualitative interviews were conducted with the GPs to inform the development and implementation of the intervention for the main randomised controlled trial.

Results: The literature review identified PIP criteria for inclusion in the study and two initial intervention components - academic detailing and medicines review supported by therapeutic treatment algorithms. Through patient case studies and a focus group of 8 GPs, these components were refined and a third component of the intervention was identified - patient information leaflets. The intervention was tested in a pilot study. In total, eight medicine reviews were conducted across five GP practices. These reviews addressed ten instances of PIP, nine of which were addressed in the form of either a dose reduction or a discontinuation of a targeted medication. Qualitative interviews highlighted that GPs were receptive to the intervention, but patient preference and the time needed both to prepare for and to conduct the medicines review emerged as potential barriers. Findings from the pilot study allowed further refinement to produce the finalised intervention of academic detailing with a pharmacist, medicines review with web-based therapeutic treatment algorithms, and tailored patient information leaflets.

Conclusions: The MRC framework was used in the development of the OPTI-SCRIPT intervention to decrease the level of PIP in primary care in Ireland. Its application ensured that the intervention was developed using the best available evidence, was acceptable to GPs and feasible to deliver in the clinical setting. The effectiveness of this intervention is currently being tested in a pragmatic cluster randomised controlled trial.

Trial registration: Current controlled trials ISRCTN41694007. © 2013 Clyne et al.; licensee BioMed Central Ltd.

Relevance:

100.00%

Publisher:

Abstract:

The development of smart grid technologies and appropriate charging strategies are key to accommodating large numbers of Electric Vehicles (EVs) charging on the grid. In this paper, a general framework is presented for formulating the EV charging optimization problem, and three different charging strategies are investigated and compared from the perspective of charging fairness while taking into account power system constraints. Two strategies are based on distributed algorithms, namely Additive Increase and Multiplicative Decrease (AIMD) and Distributed Price-Feedback (DPF), while the third is an ideal centralized solution used to benchmark performance. The algorithms are evaluated using a simulation of a typical residential low-voltage distribution network with 50% EV penetration. © 2013 IEEE.
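As a rough illustration of the AIMD idea in this setting (not the paper's exact formulation), the toy loop below has each EV increase its charging rate additively and back off multiplicatively when the aggregate load exceeds an assumed feeder capacity.

```python
# Toy AIMD sketch for decentralized EV charging: each vehicle additively
# increases its charging rate each step; when the aggregate load exceeds
# the feeder capacity, a congestion signal makes every vehicle back off
# multiplicatively. Capacity, gains, and limits are assumptions, not the
# paper's exact formulation.
def aimd_charging(num_evs=10, capacity_kw=40.0, steps=200,
                  add_kw=0.2, beta=0.5, max_rate_kw=7.0):
    rates = [0.5] * num_evs                      # initial charging rates (kW)
    for _ in range(steps):
        congested = sum(rates) > capacity_kw     # broadcast congestion signal
        for i in range(num_evs):
            if congested:
                rates[i] *= beta                 # multiplicative decrease
            else:
                rates[i] = min(max_rate_kw, rates[i] + add_kw)  # additive increase
    return rates

if __name__ == "__main__":
    rates = aimd_charging()
    print([round(r, 2) for r in rates], "total:", round(sum(rates), 1), "kW")
```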

Relevance:

100.00%

Publisher:

Abstract:

PURPOSE Potentially inappropriate prescribing (PIP) is common in older people and can result in increased morbidity, adverse drug events, and hospitalizations. The OPTI-SCRIPT study (Optimizing Prescribing for Older People in Primary Care, a cluster-randomized controlled trial) tested the effectiveness of a multifaceted intervention for reducing PIP in primary care.

METHODS We conducted a cluster-randomized controlled trial among 21 general practitioner practices and 196 patients with PIP. Intervention participants received a complex, multifaceted intervention incorporating academic detailing; review of medicines with web-based pharmaceutical treatment algorithms that provide recommended alternative-treatment options; and tailored patient information leaflets. Control practices delivered usual care and received simple, patient-level PIP feedback. Primary outcomes were the proportion of patients with PIP and the mean number of potentially inappropriate prescriptions. We performed intention-to-treat analysis using random-effects regression.

RESULTS All 21 practices and 190 patients were followed. At intervention completion, patients in the intervention group had significantly lower odds of having PIP than patients in the control group (adjusted odds ratio = 0.32; 95% CI, 0.15–0.70; P = .02). The mean number of PIP drugs in the intervention group was 0.70, compared with 1.18 in the control group (P = .02). The intervention group was almost one-third less likely than the control group to have PIP drugs at intervention completion, but this difference was not significant (incidence rate ratio = 0.71; 95% CI, 0.50–1.02; P = .49). The intervention was effective in reducing proton pump inhibitor prescribing (adjusted odds ratio = 0.30; 95% CI, 0.14–0.68; P = .04).

CONCLUSIONS The OPTI-SCRIPT intervention incorporating academic detailing with a pharmacist, and a review of medicines with web-based pharmaceutical treatment algorithms, was effective in reducing PIP, particularly in modifying prescribing of proton pump inhibitors, the most commonly occurring PIP drugs nationally.