987 results for General Algorithm
Abstract:
The general flowshop scheduling problem is a production problem in which a set of n jobs must be processed with an identical flow pattern on m machines. In permutation flowshops the sequence of jobs is the same on all machines. A significant research effort has been devoted to sequencing jobs in a flowshop so as to minimize the makespan. This paper describes the application of a Constructive Genetic Algorithm (CGA) to makespan minimization in flowshop scheduling. The CGA was proposed recently as an alternative to traditional GA approaches, particularly for evaluating schemata directly. The population, initially formed only by schemata, evolves under the control of recombination into a population of well-adapted structures (schemata instantiation). The implemented CGA is based on the classic NEH heuristic, and a local search heuristic is used to define the fitness functions. The parameters of the CGA are calibrated using a Design of Experiments (DOE) approach. The computational results are compared against other successful algorithms from the literature on Taillard's well-known standard benchmark. The computational experience shows that this innovative CGA approach provides competitive results for flowshop scheduling problems. (C) 2007 Elsevier Ltd. All rights reserved.
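For illustration only: the CGA above builds on the NEH heuristic, so a minimal Python sketch of NEH for the permutation flowshop makespan problem may help make the construction concrete. This is a sketch of the classic heuristic, not of the CGA, and the processing-time data are hypothetical.

```python
# Minimal sketch of the NEH heuristic for permutation flowshop makespan minimization.
# p[j][m] = processing time of job j on machine m (toy values).

def makespan(sequence, p):
    n_machines = len(p[0])
    completion = [0.0] * n_machines                  # completion time of the last scheduled job on each machine
    for job in sequence:
        for m in range(n_machines):
            prev = completion[m - 1] if m > 0 else 0.0   # this job's completion on the previous machine
            completion[m] = max(completion[m], prev) + p[job][m]
    return completion[-1]

def neh(p):
    jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))   # 1) order jobs by decreasing total processing time
    sequence = [jobs[0]]
    for job in jobs[1:]:                                     # 2) best-insertion of each remaining job
        candidates = (sequence[:k] + [job] + sequence[k:] for k in range(len(sequence) + 1))
        sequence = min(candidates, key=lambda seq: makespan(seq, p))
    return sequence, makespan(sequence, p)

p = [[5, 9, 8], [9, 3, 10], [9, 4, 5], [4, 8, 8]]            # 4 jobs x 3 machines
print(neh(p))
```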
Abstract:
The main goal of this paper is to apply the so-called policy iteration algorithm (PIA) to the long-run average continuous control problem of piecewise deterministic Markov processes (PDMPs) taking values in a general Borel space, with a compact action space depending on the state variable. To do so, we first derive some important properties of a pseudo-Poisson equation associated with the problem. It is then shown that, under some classical hypotheses, the PIA converges to a solution satisfying the optimality equation, and that this optimal solution yields an optimal control strategy for the average control problem for the continuous-time PDMP in feedback form.
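The PIA discussed above is formulated for continuous-time PDMPs on general Borel spaces. Purely as intuition for its evaluate/improve structure, the sketch below runs policy iteration under the long-run average cost criterion on a finite, discrete-time MDP; the two-state example data at the end are hypothetical.

```python
import numpy as np

def average_cost_policy_iteration(P, c, max_iter=100):
    """P: list of (n x n) transition matrices, one per action; c: (n x n_actions) cost table."""
    n, n_actions = c.shape
    policy = np.zeros(n, dtype=int)
    for _ in range(max_iter):
        # Policy evaluation: solve  g + h(s) = c(s, pi(s)) + sum_s' P(s'|s, pi(s)) h(s'),
        # with the normalisation h(0) = 0 (unknowns: g, h(1), ..., h(n-1)).
        A = np.zeros((n, n))
        b = np.array([c[s, policy[s]] for s in range(n)], dtype=float)
        A[:, 0] = 1.0                                        # column 0: coefficient of the gain g
        for s in range(n):
            for sp in range(1, n):
                A[s, sp] = (1.0 if s == sp else 0.0) - P[policy[s]][s, sp]
        z = np.linalg.solve(A, b)
        g, h = z[0], np.concatenate(([0.0], z[1:]))
        # Policy improvement: greedy step with respect to the bias h.
        q = np.array([[c[s, a] + P[a][s] @ h for a in range(n_actions)] for s in range(n)])
        new_policy = q.argmin(axis=1)
        if np.array_equal(new_policy, policy):
            return g, h, policy
        policy = new_policy
    return g, h, policy

# Hypothetical two-state, two-action example.
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),     # action 0
     np.array([[0.5, 0.5], [0.7, 0.3]])]     # action 1
c = np.array([[1.0, 2.0],                    # c[state, action]
              [4.0, 0.5]])
print(average_cost_policy_iteration(P, c))
```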
Abstract:
Adams and Bischof (1994) recently proposed a novel region growing algorithm for segmenting intensity images. The inputs to the algorithm are the intensity image and a set of seeds - individual points or connected components - that identify the individual regions to be segmented. The algorithm grows these seed regions until all of the image pixels have been assimilated. Unfortunately, the algorithm is inherently dependent on the order of pixel processing. This means, for example, that raster-order processing and anti-raster-order processing do not, in general, lead to the same tessellation. In this paper we propose an improved seeded region growing algorithm that retains the advantages of the Adams and Bischof algorithm - fast execution, robust segmentation, and no tuning parameters - but is pixel-order independent. (C) 1997 Elsevier Science B.V.
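A simplified sketch of the seeded region growing idea in the spirit of Adams and Bischof: candidate pixels adjacent to a region enter a priority queue keyed by their absolute difference to that region's running mean, and the globally best candidate is labelled first. This omits the boundary labelling of the original algorithm and is not the order-independent variant proposed in the paper; seed labels are assumed to be positive integers and the toy image is hypothetical.

```python
import heapq
import numpy as np

def seeded_region_growing(image, seeds):
    """image: 2-D float array; seeds: dict mapping a positive label to a list of (row, col) seeds."""
    labels = np.zeros(image.shape, dtype=int)        # 0 = not yet assigned
    sums, counts = {}, {}
    heap, tie = [], 0                                # tie counter keeps heap entries comparable

    def push_neighbours(r, c, lab):
        nonlocal tie
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < image.shape[0] and 0 <= cc < image.shape[1] and labels[rr, cc] == 0:
                delta = abs(image[rr, cc] - sums[lab] / counts[lab])
                heapq.heappush(heap, (delta, tie, rr, cc, lab))
                tie += 1

    for lab, pixels in seeds.items():
        sums[lab], counts[lab] = 0.0, 0
        for r, c in pixels:
            labels[r, c] = lab
            sums[lab] += image[r, c]
            counts[lab] += 1
    for lab, pixels in seeds.items():
        for r, c in pixels:
            push_neighbours(r, c, lab)

    while heap:
        _, _, r, c, lab = heapq.heappop(heap)
        if labels[r, c] != 0:
            continue                                 # already claimed by some region
        labels[r, c] = lab
        sums[lab] += image[r, c]
        counts[lab] += 1
        push_neighbours(r, c, lab)
    return labels

img = np.array([[1., 1., 5.], [1., 2., 6.], [2., 6., 6.]])
print(seeded_region_growing(img, {1: [(0, 0)], 2: [(2, 2)]}))
```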
Abstract:
Objective: To examine the quality of diabetes care and prevention of cardiovascular disease (CVD) in Australian general practice patients with type 2 diabetes and to investigate its relationship with coronary heart disease absolute risk (CHDAR). Methods: A total of 3286 patient records were extracted from registers of patients with type 2 diabetes held by 16 divisions of general practice (250 practices) across Australia for the year 2002. CHDAR was estimated using the United Kingdom Prospective Diabetes Study algorithm, with higher CHDAR defined as a 10-year risk of >15%. Multivariate multilevel logistic regression investigated the association between CHDAR and diabetes care. Results: 47.9% of diabetic patient records had glycosylated haemoglobin (HbA1c) >7%, 87.6% had total cholesterol >= 4.0 mmol/l, and 73.8% had blood pressure (BP) >= 130/85 mm Hg. 57.6% of patients were at higher CHDAR, of whom 76.8% were not on lipid-modifying medication and 66.2% were not on antihypertensive medication. After adjusting for clustering at the general practice level and for age, lipid-modifying medication was negatively related to CHDAR (odds ratio (OR) 0.84) and to total cholesterol. Antihypertensive medication was positively related to systolic BP but negatively related to CHDAR (OR 0.88). Referral to ophthalmologists/optometrists and attendance at other health professionals were not related to CHDAR. Conclusions: At the time of the study, diabetes and CVD preventive care in Australian general practice was suboptimal, even after a number of national initiatives. The Australian Pharmaceutical Benefits Scheme (PBS) guidelines need to be modified to improve CVD preventive care in patients with type 2 diabetes.
Abstract:
“Many-core” systems based on a Network-on-Chip (NoC) architecture offer various opportunities in terms of performance and computing capabilities, but at the same time they pose many challenges for the deployment of real-time systems, which must fulfill specific timing requirements at runtime. It is therefore essential to identify, at design time, the parameters that have an impact on the execution time of the tasks deployed on these systems, together with upper bounds on the other key parameters. The focus of this work is to determine an upper bound on the traversal time of a packet when it is transmitted over the NoC infrastructure. Towards this aim, we first identify and explore some limitations in the existing recursive-calculus-based approaches to computing the Worst-Case Traversal Time (WCTT) of a packet. Then, we extend the existing model by integrating the characteristics of the tasks that generate the packets. For this extended model, we propose an algorithm called “Branch and Prune” (BP). Our proposed method provides tighter, yet safe, estimates than the existing recursive-calculus-based approaches. Finally, we introduce a more general approach, namely “Branch, Prune and Collapse” (BPC), which offers a configurable parameter providing a flexible trade-off between the computational complexity and the tightness of the computed estimate. The recursive-calculus methods and BP correspond to two special cases of BPC, with the trade-off parameter set to 1 or ∞, respectively. Through simulations, we analyze this trade-off, reason about the implications of certain choices, and provide some case studies to observe the impact of task parameters on the WCTT estimates.
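As rough intuition for what branching and pruning over interference scenarios can look like (a generic branch-and-bound sketch, not the BP or BPC algorithms of the paper): each interfering flow may or may not delay the packet under analysis, a hypothetical symmetric `conflicts` relation stands in for the feasibility conditions a real NoC analysis would derive from routes, priorities and task characteristics, and any branch whose optimistic bound cannot exceed the current worst case is pruned.

```python
def branch_and_prune_wctt(base_latency, flows, conflicts):
    """Upper-bound a packet's traversal time by searching over which interfering
    flows can be simultaneously active; prune branches that cannot raise the bound.
    flows: list of {'id': ..., 'delay': ...}; conflicts: id -> set of incompatible ids."""
    flows = sorted(flows, key=lambda f: f['delay'], reverse=True)
    best = base_latency

    def explore(idx, chosen_ids, current):
        nonlocal best
        best = max(best, current)
        optimistic = current + sum(f['delay'] for f in flows[idx:])
        if optimistic <= best:
            return                                   # prune: even taking all remaining flows cannot help
        for i in range(idx, len(flows)):
            f = flows[i]
            if conflicts.get(f['id'], set()) & chosen_ids:
                continue                             # infeasible with the flows already chosen
            explore(i + 1, chosen_ids | {f['id']}, current + f['delay'])

    explore(0, frozenset(), base_latency)
    return best

# Hypothetical example: f2 and f3 can never block the same packet.
flows = [{'id': 'f1', 'delay': 4.0}, {'id': 'f2', 'delay': 3.0}, {'id': 'f3', 'delay': 2.5}]
conflicts = {'f2': {'f3'}, 'f3': {'f2'}}
print(branch_and_prune_wctt(10.0, flows, conflicts))   # 17.0
```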
Abstract:
Background: Little is known about the risk of progression to hazardous alcohol use in people currently drinking within safe limits. We aimed to develop a prediction model (predictAL) for the development of hazardous drinking in safe drinkers. Methods: A prospective cohort study of adult general practice attendees in six European countries and Chile, followed up over 6 months. We recruited 10,045 attendees between April 2003 and February 2005. 6193 European and 2462 Chilean attendees recorded AUDIT scores below 8 in men and below 5 in women at recruitment and were used in modelling risk. 38 risk factors were measured to construct a risk model for the development of hazardous drinking using stepwise logistic regression. The model was corrected for overfitting and tested in an external population. The main outcome was hazardous drinking, defined by an AUDIT score >= 8 in men and >= 5 in women. Results: 69.0% of attendees were recruited, of whom 89.5% participated again after six months. The risk factors in the final predictAL model were sex, age, country, baseline AUDIT score, panic syndrome and lifetime alcohol problem. The predictAL model's average c-index across all six European countries was 0.839 (95% CI 0.805, 0.873). The Hedges' g effect size for the difference in log odds of predicted probability between safe drinkers in Europe who subsequently developed hazardous alcohol use and those who did not was 1.38 (95% CI 1.25, 1.51). External validation of the algorithm in Chilean safe drinkers resulted in a c-index of 0.781 (95% CI 0.717, 0.846) and a Hedges' g of 0.68 (95% CI 0.57, 0.78). Conclusions: The predictAL risk model for the development of hazardous consumption in safe drinkers compares favourably with risk algorithms for disorders in other medical settings and can be a useful first step in the prevention of alcohol misuse.
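For readers unfamiliar with the c-index reported above: for a binary outcome it coincides with the area under the ROC curve of the risk score. The toy sketch below fits a logistic regression on synthetic data whose predictors loosely mirror the predictAL risk factors and reports the c-index; the data, coefficients and split are entirely hypothetical and this is not the predictAL model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.integers(0, 2, n),        # sex
    rng.integers(18, 80, n),      # age
    rng.integers(0, 8, n),        # baseline AUDIT score (below the hazardous cut-off)
    rng.integers(0, 2, n),        # panic syndrome
    rng.integers(0, 2, n),        # lifetime alcohol problem
])
# Simulated outcome "developed hazardous drinking" (coefficients are made up).
logit = -4.0 + 0.4 * X[:, 0] - 0.02 * X[:, 1] + 0.5 * X[:, 2] + 0.6 * X[:, 3] + 0.8 * X[:, 4]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
risk = model.predict_proba(X_val)[:, 1]
print("c-index (ROC AUC):", round(roc_auc_score(y_val, risk), 3))
```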
Abstract:
Today's advances in high-performance computing are driven by the parallel processing capabilities of available hardware architectures. These architectures enable the acceleration of algorithms when the algorithms are properly parallelized and exploit the specific processing power of the underlying architecture. Most current processors are targeted at general-purpose use and integrate several processor cores on a single chip, resulting in what is known as a Symmetric Multiprocessing (SMP) unit. Nowadays even desktop computers use multicore processors, and the industry trend is to increase the number of integrated processor cores as technology matures. On the other hand, Graphics Processor Units (GPU), originally designed to handle only video processing, have emerged as interesting alternatives for algorithm acceleration. Currently available GPUs can run from 200 to 400 parallel processing threads. Scientific computing can be implemented on this hardware thanks to the programmability of newer GPUs, which have come to be known as General Processing Graphics Processor Units (GPGPU). However, GPGPUs offer little memory compared with that available to general-purpose processors, so the implementation of algorithms needs to be addressed carefully. Finally, Field Programmable Gate Arrays (FPGA) are programmable devices that can implement hardware logic with low latency, high parallelism and deep pipelines. These devices can be used to implement specific algorithms that need to run at very high speeds; however, programming them is harder than software approaches and debugging is typically time-consuming. In this context, where several alternatives for speeding up algorithms are available, our work aims at determining the main features of these architectures and developing the know-how required to accelerate algorithm execution on them. We look at identifying which algorithms fit better on a given architecture, as well as how the architectures can be combined so that they complement each other. In particular, we take into account the degree of data dependency, the need for synchronization during parallel processing, the size of the data to be processed, and the complexity of parallel programming on each type of hardware.
Abstract:
We propose a non-equidistant Q rate matrix formula and an adaptive numerical algorithm for a continuous-time Markov chain to approximate jump-diffusions with affine or non-affine functional specifications. Our approach also accommodates state-dependent jump intensity and jump distribution, a flexibility that is very hard to achieve with other numerical methods. The Kolmogorov-Smirnov test shows that the proposed Markov chain transition density converges to the one given by the likelihood expansion formula of Ait-Sahalia (2008). We provide numerical examples for European stock option pricing under the Black and Scholes (1973), Merton (1976) and Kou (2002) models.
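As a concrete reference point for the European option examples mentioned above, the closed-form Black and Scholes (1973) call price is the benchmark a Markov chain approximation of the diffusion can be checked against. A minimal sketch, with arbitrary parameter values:

```python
from math import log, sqrt, exp
from statistics import NormalDist

def black_scholes_call(S, K, T, r, sigma):
    """Closed-form Black-Scholes (1973) price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    return S * N(d1) - K * exp(-r * T) * N(d2)

print(black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2))   # approx. 10.45
```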
Abstract:
Background: Attention to patients with acute minor illnesses requesting same-day consultation represents a major burden in primary care. In many countries the workload is assumed by general practitioners. A number of reports suggest that care for these patients may be provided, at least in part, by nurses. However, there is scarce information on the applicability of a program of nurse management for adult patients with acute minor illnesses over large areas. The aim of this study is to assess the effectiveness of a program of nurse algorithm-guided care for adult patients with acute minor illnesses requesting same-day consultation in primary care in a highly populated area. Methods: A cross-sectional study of all adult patients seeking same-day consultation for 16 common acute minor illnesses in a large geographical area with 284 primary care practices. Patients were included in a program of nurse case management using management algorithms. The main outcome measure was case resolution, defined as completion of the algorithm by the nurse without the need to refer the patient to the general practitioner. The secondary outcome measure was return to consultation, defined as the need for a new consultation for the same reason as the first one, in primary care, within a 7-day period. Results: During a two-year period (April 2009-April 2011), a total of 1,209,669 consultations were performed in the program. Case resolution was achieved by nurses in 62.5% of consultations. The remaining cases were referred to a general practitioner. Resolution rates ranged from 94.2% in patients with burns to 42% in patients with upper respiratory symptoms. None of the 16 minor illnesses had a resolution rate below 40%. Return to consultation during a 7-day period was low, at only 4.6%. Conclusions: A program of algorithm-guided care is effective for nurse case management of patients requesting same-day consultation for minor illnesses in primary care.
Abstract:
General clustering deals with weighted objects and fuzzy memberships. We investigate the group- or object-aggregation-invariance properties possessed by the relevant functionals (effective number of groups or objects, centroids, dispersion, mutual object-group information, etc.). The classical squared Euclidean case can be generalized to non-Euclidean distances, as well as to non-linear transformations of the memberships, yielding the c-means clustering algorithm as well as two presumably new procedures, the convex and pairwise convex clustering. Cluster stability and aggregation-invariance of the optimal memberships associated to the various clustering schemes are examined as well.
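A compact sketch of the classical weighted fuzzy c-means update alluded to above, with object weights w_i and fuzzy memberships u_ik, for the squared-Euclidean case only; this is the standard alternating scheme, not the convex or pairwise convex procedures introduced in the paper, and the data are synthetic.

```python
import numpy as np

def weighted_fuzzy_cmeans(X, w, n_clusters, m=2.0, n_iter=100, eps=1e-9):
    rng = np.random.default_rng(0)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)                 # memberships of each object sum to 1
    for _ in range(n_iter):
        um = w[:, None] * U ** m                      # weighted, fuzzified memberships
        centroids = um.T @ X / um.sum(axis=0)[:, None]
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2) + eps
        U = 1.0 / d2 ** (1.0 / (m - 1.0))             # u_ik proportional to d_ik^(-2/(m-1))
        U /= U.sum(axis=1, keepdims=True)
    return centroids, U

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(3.0, 0.3, (50, 2))])
w = np.ones(len(X))                                   # unit object weights
centroids, U = weighted_fuzzy_cmeans(X, w, n_clusters=2)
print(centroids)
```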
Abstract:
This paper presents the general regression neural network (GRNN) as a nonlinear regression method for the interpolation of monthly wind speeds in complex Alpine orography. The GRNN is trained using data from Swiss meteorological networks to learn the statistical relationship between topographic features and wind speed. Terrain convexity, slope and exposure are considered by extracting features from the digital elevation model at different spatial scales using specialised convolution filters. A database of gridded monthly wind speeds is then constructed by applying the GRNN in prediction mode over the period 1968-2008. This study demonstrates that using topographic features as inputs in the GRNN significantly reduces cross-validation errors with respect to low-dimensional models integrating only geographical coordinates and terrain height for the interpolation of wind speed. The spatial predictability of wind speed is found to be lower in summer than in winter due to more complex and weaker wind-topography relationships. The relevance of these relationships is studied using an adaptive version of the GRNN algorithm, which allows the useful terrain features to be selected by eliminating the noisy ones. This research provides a framework for extending low-dimensional interpolation models to high-dimensional spaces by integrating additional features accounting for the topographic conditions at multiple spatial scales. Copyright (c) 2012 Royal Meteorological Society.
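The core of a GRNN in Specht's formulation is a Gaussian-kernel-weighted average of the training targets, which is essentially how the interpolator described above turns terrain features into a wind-speed estimate. A minimal sketch follows; the feature values, targets and bandwidth are hypothetical.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=1.0):
    # Gaussian kernel weight between each query point and every training sample
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    return (K @ y_train) / K.sum(axis=1)              # kernel-weighted average of targets

# Toy example: two terrain-related features (e.g. elevation, slope) -> monthly wind speed.
X_train = np.array([[500.0, 5.0], [1500.0, 20.0], [2500.0, 35.0]])
y_train = np.array([2.0, 4.5, 7.0])
X_query = np.array([[1000.0, 10.0]])
print(grnn_predict(X_train, y_train, X_query, sigma=500.0))
```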
Abstract:
This work describes a simulation tool being developed at UPC to predict the microwave nonlinear behavior of planar superconducting structures with very few restrictions on the geometry of the planar layout. The software is intended to be applicable to most structures used in planar HTS circuits, including line, patch, and quasi-lumped microstrip resonators. The tool combines Method of Moments (MoM) algorithms for general electromagnetic simulation with Harmonic Balance algorithms to take into account the nonlinearities in the HTS material. The Method of Moments code is based on discretization of the Electric Field Integral Equation using Rao-Wilton-Glisson basis functions. The multilayer dyadic Green's function is used with a Sommerfeld integral formulation. The Harmonic Balance algorithm has been adapted to this application, where the nonlinearity is distributed and where compatibility with the MoM algorithm is required. Tests of the algorithm on TM010 disk resonators agree with closed-form equations for both the fundamental and third-order intermodulation currents. Simulations of hairpin resonators show good qualitative agreement with previously published results, but it is found that a finer meshing would be necessary to obtain correct quantitative results. Possible improvements are suggested.
Abstract:
The parameter setting of a differential evolution algorithm must meet several requirements: efficiency, effectiveness, and reliability. Problems vary, and the solution of a particular problem can be represented in different ways. An algorithm that is most efficient with a particular representation may be less efficient with other representations. The development of differential evolution-based methods contributes substantially to research on evolutionary computing and global optimization in general. The objective of this study is to investigate the differential evolution algorithm, the intelligent adjustment of its control parameters, and its application. In the thesis, the differential evolution algorithm is first examined using different parameter settings and test functions. Fuzzy control is then employed to make the control parameters adaptive, based on the optimization process and expert knowledge. The developed algorithms are applied to training radial basis function networks for function approximation, with the possible variables including the centers, widths, and weights of the basis functions, and with the control parameters either kept fixed or adjusted by a fuzzy controller. After the influence of the control variables on the performance of the differential evolution algorithm was explored, an adaptive version of the differential evolution algorithm was developed and differential evolution-based radial basis function network training approaches were proposed. Experimental results showed that the performance of the differential evolution algorithm is sensitive to parameter setting, and the best setting was found to be problem dependent. The fuzzy adaptive differential evolution algorithm relieves the user of the burden of parameter setting and performs better than versions using all fixed parameters. Differential evolution-based approaches are effective for training Gaussian radial basis function networks.
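For reference, a bare-bones DE/rand/1/bin loop with the three control parameters the thesis studies (population size NP, scale factor F, crossover rate CR), applied here to the sphere test function; the fuzzy-adaptive variant would adjust F and CR during the run, which is only indicated by a comment, and all parameter values are illustrative.

```python
import numpy as np

def differential_evolution(f, bounds, NP=30, F=0.7, CR=0.9, n_gen=200, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, (NP, dim))
    fitness = np.array([f(x) for x in pop])
    for _ in range(n_gen):
        for i in range(NP):
            idx = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)         # DE/rand/1 mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True                   # at least one component from the mutant
            trial = np.where(cross, mutant, pop[i])           # binomial crossover
            f_trial = f(trial)
            if f_trial <= fitness[i]:                         # greedy selection
                pop[i], fitness[i] = trial, f_trial
        # A fuzzy-adaptive variant would adjust F and CR here based on search progress.
    best = int(fitness.argmin())
    return pop[best], fitness[best]

sphere = lambda x: float(np.sum(x ** 2))
print(differential_evolution(sphere, bounds=[(-5.0, 5.0)] * 5))
```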