929 results for Pattern-search methods

Relevance:

30.00%

Publisher:

Abstract:

Nurse rostering is a complex scheduling problem that affects hospital personnel on a daily basis all over the world. This paper presents a new component-based approach with adaptive perturbations for a nurse scheduling problem arising at a major UK hospital. The main idea behind this technique is to decompose a schedule into its components (i.e. the allocated shift pattern of each nurse), and then mimic a natural evolutionary process on these components to iteratively deliver better schedules. The worthiness of all components in the schedule has to be continuously demonstrated in order for them to remain there. This demonstration employs a dynamic evaluation function which evaluates how well each component contributes towards the final objective. Two perturbation steps are then applied: the first perturbation eliminates a number of components that are deemed not worthy to stay in the current schedule; the second perturbation may also throw out, with a low level of probability, some worthy components. The eliminated components are replenished with new ones built by a set of constructive heuristics based on local optimality criteria. Computational results using 52 data instances demonstrate the applicability of the proposed approach in solving real-world problems.
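
The evaluate/eliminate/replenish cycle described above admits a compact sketch. The following Python sketch is one possible reading of that description, not the paper's implementation; the callables, parameter names, and the fixed worthiness threshold are assumptions:

    import random

    def adaptive_perturbation(schedule, evaluate_component, construct_component,
                              n_iters=1000, p_random_kill=0.05, threshold=0.0):
        # schedule            -- dict mapping each nurse to a shift pattern
        # evaluate_component  -- scores how well one component serves the objective
        # construct_component -- constructive heuristic building a new component
        for _ in range(n_iters):
            # First perturbation: drop components deemed not worthy to stay.
            doomed = {n for n, comp in schedule.items()
                      if evaluate_component(n, comp, schedule) < threshold}
            # Second perturbation: with low probability, drop worthy ones too.
            doomed |= {n for n in schedule
                       if n not in doomed and random.random() < p_random_kill}
            # Replenish eliminated components using local optimality criteria.
            for nurse in doomed:
                schedule[nurse] = construct_component(nurse, schedule)
        return schedule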

Relevance:

30.00%

Publisher:

Abstract:

Close similarities have been found between the otoliths of sea-caught and laboratory-reared larvae of the common sole Solea solea (L.), given appropriate temperatures and nourishment of the latter. However, from hatching to mouth formation, and during metamorphosis, sole otoliths have proven difficult to read because the increments may be less regular and of lower contrast. In this study, the growth increments in otoliths of larvae reared at 12 degrees C were counted by light microscopy to test the hypothesis of daily deposition, with some results verified using scanning electron microscopy (SEM), and by image analysis, in order to compare the reliability of the 2 methods in age estimation. Age was first estimated (in days posthatch) from light micrographs of whole mounted otoliths. Counts were initiated from the increment formed at the time of mouth opening (Day 4). The average incremental deposition rate was consistent with the daily hypothesis. However, the light-micrograph readings tended to underestimate the mean ages of the larvae. Errors were probably associated with the low-contrast increments: those deposited after mouth formation during the transition to first feeding, and those deposited from the onset of eye migration (about 20 d posthatch) during metamorphosis. SEM failed to resolve these low-contrast areas accurately because of poor etching. A method using image analysis was applied to a subsample of micrograph-counted otoliths. The image analysis was supported by a pattern-recognition algorithm (Growth Demodulation Algorithm, GDA). On each otolith, the GDA integrated the growth pattern from data averaged over different radial profiles, in order to demodulate the exponential trend of the signal before spectral analysis (Fast Fourier Transform, FFT). This second method allowed both more precise designation of increments, particularly in low-contrast areas, and more accurate readings, but it increased the error variability in mean age estimation. This variability is probably due to the still coarse perception of otolith increments by the GDA, since counting is achieved through a theoretical exponential pattern and mean estimates are given by the FFT. Although this error variability was greater than expected, the method provides an improvement in both speed and accuracy of otolith readings.
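
The GDA is not given as code in this abstract; the numpy sketch below only illustrates the general idea it describes: average several radial intensity profiles, demodulate the exponential growth trend, then count the remaining periodic increments via FFT. Function and variable names are illustrative.

    import numpy as np

    def estimate_increment_count(radial_profiles):
        # radial_profiles: 2D array, one otolith intensity profile per row.
        profile = radial_profiles.mean(axis=0)        # average the radial profiles
        r = np.arange(profile.size)
        # Demodulate the exponential trend: fit a line to log-intensity
        # and divide it out, leaving the (roughly) periodic increment signal.
        slope, intercept = np.polyfit(r, np.log(np.clip(profile, 1e-9, None)), 1)
        signal = profile / np.exp(slope * r + intercept) - 1.0
        # Spectral analysis (FFT): the dominant frequency bin gives the
        # number of increment cycles across the measured profile.
        spectrum = np.abs(np.fft.rfft(signal * np.hanning(signal.size)))
        k = spectrum[1:].argmax() + 1                 # skip the DC component
        return k                                      # ~ number of increments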

Relevance:

30.00%

Publisher:

Abstract:

This thesis investigates how web search evaluation can be improved using historical interaction data. Modern search engines combine offline and online evaluation approaches in a sequence of steps that a tested change needs to pass through to be accepted as an improvement and subsequently deployed. We refer to such a sequence of steps as an evaluation pipeline. In this thesis, we consider the evaluation pipeline to contain three sequential steps: an offline evaluation step, an online evaluation scheduling step, and an online evaluation step. We show that historical user interaction data can aid in improving the accuracy or efficiency of each of these steps and that, as a result, the overall efficiency of the entire evaluation pipeline is increased.

Firstly, we investigate how user interaction data can be used to build accurate offline evaluation methods for query auto-completion mechanisms. We propose a family of offline evaluation metrics for query auto-completion that represent the effort the user has to spend in order to submit their query. The parameters of our proposed metrics are trained against a set of user interactions recorded in the search engine's query logs. From our experimental study, we observe that our proposed metrics are significantly more correlated with an online user satisfaction indicator than the metrics proposed in the existing literature. Hence, fewer changes will pass the offline evaluation step only to be rejected after the online evaluation step, which increases the efficiency of the entire evaluation pipeline.

Secondly, we state the problem of the optimised scheduling of online experiments. We tackle this problem with a greedy scheduler that prioritises the evaluation queue according to the predicted likelihood of success of each experiment. This predictor is trained on a set of online experiments and uses a diverse set of features to represent an online experiment. Our study demonstrates that a higher number of successful experiments per unit of time can be achieved by deploying such a scheduler on the second step of the evaluation pipeline. Consequently, we argue that the efficiency of the evaluation pipeline can be increased.

Next, to improve the efficiency of the online evaluation step, we propose the Generalised Team Draft interleaving framework. Generalised Team Draft considers both the interleaving policy (how often a particular combination of results is shown) and the click scoring (how important each click is) as parameters in a data-driven optimisation of the interleaving sensitivity. Further, Generalised Team Draft is applicable beyond domains with a list-based representation of results, e.g. in domains with a grid-based representation, such as image search. Our study, using datasets of interleaving experiments performed both in document and image search domains, demonstrates that Generalised Team Draft achieves the highest sensitivity. A higher sensitivity indicates that the interleaving experiments can be deployed for a shorter period of time or use a smaller sample of users. Importantly, Generalised Team Draft optimises the interleaving parameters w.r.t. historical interaction data recorded in the interleaving experiments.

Finally, we propose applying sequential testing methods to reduce the mean deployment time of interleaving experiments. We adapt two sequential tests for interleaving experimentation and demonstrate that they achieve a significant decrease in experiment duration. The highest efficiency is achieved by the sequential tests that adjust their stopping thresholds using historical interaction data recorded in diagnostic experiments. Our further experimental study demonstrates that cumulative gains in online experimentation efficiency can be achieved by combining the interleaving sensitivity optimisation approaches, including Generalised Team Draft, with the sequential testing approaches.

Overall, the central contributions of this thesis are the proposed approaches to improve the accuracy or efficiency of the steps of the evaluation pipeline: offline evaluation frameworks for query auto-completion, an approach for the optimised scheduling of online experiments, a general framework for efficient online interleaving evaluation, and a sequential testing approach for online search evaluation. The experiments in this thesis are based on massive real-life datasets obtained from Yandex, a leading commercial search engine, and demonstrate the potential of the proposed approaches to improve the efficiency of the evaluation pipeline.
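
As background for the interleaving contribution, here is a minimal Python sketch of classic Team Draft, the baseline that Generalised Team Draft extends by treating the interleaving policy and the click scoring as learned parameters (an illustrative sketch, not the thesis code):

    import random

    def team_draft_interleave(ranking_a, ranking_b):
        # Merge two rankings so that each shown result remembers which
        # team (ranker) contributed it.
        rankings = {"A": ranking_a, "B": ranking_b}
        picks = {"A": 0, "B": 0}       # results each team has placed so far
        pos = {"A": 0, "B": 0}         # scan position within each ranking
        interleaved, teams, seen = [], [], set()
        while True:
            # Only teams that still have unseen results may take a turn.
            live = [t for t in ("A", "B")
                    if any(d not in seen for d in rankings[t][pos[t]:])]
            if not live:
                break
            # The team with fewer picks goes next; ties broken by coin flip.
            turn = min(live, key=lambda t: (picks[t], random.random()))
            while rankings[turn][pos[turn]] in seen:
                pos[turn] += 1         # skip results the other team already placed
            doc = rankings[turn][pos[turn]]
            interleaved.append(doc)
            teams.append(turn)
            seen.add(doc)
            picks[turn] += 1
        return interleaved, teams

    def credit_clicks(teams, clicked_ranks):
        # Uniform click scoring: every click credits the contributing team.
        score = {"A": 0, "B": 0}
        for r in clicked_ranks:
            score[teams[r]] += 1
        return score

Generalised Team Draft, as described above, replaces the coin-flip policy with an optimised distribution over team-assignment combinations, and the uniform click credit with learned click weights.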

Relevance:

30.00%

Publisher:

Abstract:

In the era of the information revolution, customers come into contact with huge numbers of marketing messages in their everyday lives. This leads to so-called information noise, which may result in many messages going unnoticed. Marketing managers are thus forced to search for more effective ways of getting customers' attention and are more willing to use unconventional promotional methods, based on original forms or places, in an effort to create an element of surprise and make their message more eye-catching for the targeted audience. This kind of move tends to evoke strong emotions, motivating readers to pass the message on. Creating such a buzz around a brand name also enhances a campaign's effect. Unfortunately, this form of communication brings with it some negative effects as well, owing to controversial or taboo topics connected with sensitive social issues. An analysis of the advertising methods used by the owners of the most powerful Polish brands shows that there is some evidence of the use of such methods in Poland. Nevertheless, unconventional advertising methods are not a commonly used practice.

Relevance:

30.00%

Publisher:

Abstract:

International audience

Relevance:

30.00%

Publisher:

Abstract:

This document introduces the planned new search for the neutron Electric Dipole Moment at the Spallation Neutron Source at the Oak Ridge National Laboratory. A spin precession measurement is to be carried out using ultracold neutrons diluted in a superfluid helium bath at T = 0.5 K, where spin-polarized 3He atoms act as a detector of the neutron spin polarization. This manuscript describes some of the key aspects of the planned experiment, along with Caltech's contributions to the development of the project.

Techniques used in the design of magnet coils for Nuclear Magnetic Resonance were adapted to the geometry of the experiment. An initial design approach is described, using a pair of coils tuned to shield outer conductive elements from resistive heat loads while inducing an oscillating field in the measurement volume. A small prototype was constructed to test the model of the field at room temperature.

A large-scale test of the high-voltage system was carried out in a collaborative effort at the Los Alamos National Laboratory. The application and amplification of high voltage to polished steel electrodes immersed in a superfluid helium bath were studied, as well as the electrical breakdown properties of the electrodes at low temperatures. A suite of Monte Carlo simulation software tools to model the interaction of neutrons, 3He atoms, and their spins with the experimental magnetic and electric fields was developed and implemented to further the study of expected systematic effects of the measurement, with particular focus on the false Electric Dipole Moment induced by a geometric phase akin to Berry's phase.

An analysis framework was developed and implemented, using an unbinned likelihood to fit the time-modulated signal expected from the measurement data. A collaborative Monte Carlo data set was used to test the analysis methods.
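
The unbinned-likelihood idea can be illustrated with a short sketch: for event times drawn from a time-modulated rate, the fit maximizes the product of normalized event densities. The cosine model, parameter names, and toy data below are assumptions for illustration, not the collaboration's analysis framework.

    import numpy as np
    from scipy.optimize import minimize

    def neg_log_likelihood(params, t, t_max):
        # Unbinned negative log-likelihood for event times t on [0, t_max],
        # with density p(t) proportional to 1 + a*cos(w*t + phi).
        a, w, phi = params
        density = 1.0 + a * np.cos(w * t + phi)
        if np.any(density <= 0):
            return np.inf                 # density must stay positive
        # Normalization: integral of (1 + a cos(wt + phi)) over [0, t_max].
        norm = t_max + (a / w) * (np.sin(w * t_max + phi) - np.sin(phi))
        return -np.sum(np.log(density / norm))

    # Toy data standing in for Monte Carlo events: accept-reject sampling
    # from a modulated rate with a = 0.3, w = 2.0, phi = 0.
    rng = np.random.default_rng(0)
    t_max, n_raw = 100.0, 20000
    t = rng.uniform(0, t_max, n_raw)
    t = t[rng.uniform(0, 2, n_raw) < 1 + 0.3 * np.cos(2.0 * t)]

    fit = minimize(neg_log_likelihood, x0=[0.2, 1.95, 0.1],
                   args=(t, t_max), method="Nelder-Mead")
    print(fit.x)                          # approximately [0.3, 2.0, 0.0]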

Relevance:

30.00%

Publisher:

Abstract:

Aim: To evaluate the association of oral health status and socio-demographic and behavioral factors with the maturity pattern of normal oral epithelial mucosa. Methods: Exfoliative cytology specimens were collected from 117 men from the border of the tongue and the floor of the mouth on opposite sides. Cells were stained with the Papanicolaou method and classified as anucleated, superficial cells with nuclei, intermediate, or parabasal. Quantification was performed by selecting the first 100 cells on each glass slide. Socio-demographic and behavioral variables were collected with a structured questionnaire. Oral health was analyzed by clinical examination, recording the decayed, missing and filled teeth index (DMFT) and the use of prostheses. Multivariable linear regression models were applied. Results: None of the studied variables significantly influenced the maturation pattern of the oral mucosa except for alcohol consumption: chronic alcohol use was associated with an increase in the superficial cell layers of the epithelium. Conclusions: The Papanicolaou cytopathological technique is appropriate for analyzing the maturation pattern in exposed subjects, and is strongly recommended for those who use alcohol, a risk factor for oral cancer, since a change in the proportion of cell types is easily detected.

Relevance:

30.00%

Publisher:

Abstract:

Doctoral thesis in Environmental Sciences and Technologies, Faculdade de Ciências do Mar e do Ambiente, Univ. do Algarve, 2004

Relevance:

30.00%

Publisher:

Abstract:

When it comes to information sets in real life, pieces of the whole set are often unavailable. This problem can arise for various reasons, each giving rise to a different pattern of absence. In the literature, this problem is known as Missing Data. The issue can be handled in various ways: by discarding incomplete observations, by estimating what the missing values originally were, or simply by ignoring the fact that some values are missing. The methods used to estimate missing data are called Imputation Methods. The work presented in this thesis has two main goals. The first is to determine whether any interactions exist between Missing Data, Imputation Methods and Supervised Classification algorithms when they are applied together. For this first problem we consider a scenario in which the databases used are discrete, where 'discrete' means that no relation between observations is assumed. These datasets underwent processes involving different combinations of the three components mentioned. The outcome showed that the missing data pattern strongly influences the results produced by a classifier. Also, in some of the cases, the complex imputation techniques investigated in the thesis were able to obtain better results than simple ones. The second goal of this work is to propose a new imputation strategy, this time constraining the specifications of the previous problem to a special kind of dataset: multivariate Time Series. We designed new imputation techniques for this particular domain and combined them with some of the contrasted strategies tested in the previous chapter of this thesis. The time series were likewise subjected to processes involving missing data and imputation, in order to finally propose an overall better imputation method. In the final chapter of this work, a real-world example is presented, describing a water quality prediction problem. The databases that characterize this problem have missing values of their own, which provides a real-world benchmark for testing the algorithms developed in this thesis.
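
For concreteness, a short scikit-learn sketch contrasting a simple imputation method with a more complex one on synthetic data (illustrative only; the thesis' own techniques, in particular the time-series ones, are not reproduced here):

    import numpy as np
    from sklearn.impute import SimpleImputer, KNNImputer

    # Toy dataset with entries missing completely at random -- just one
    # of the missingness patterns distinguished in the literature.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 4))
    X[:, 1] += 0.8 * X[:, 0]                    # correlated columns help kNN
    mask = rng.uniform(size=X.shape) < 0.15     # ~15% of entries go missing
    X_missing = np.where(mask, np.nan, X)

    # Simple method: replace each missing entry with its column mean.
    X_mean = SimpleImputer(strategy="mean").fit_transform(X_missing)

    # More complex method: average each feature over the k nearest rows.
    X_knn = KNNImputer(n_neighbors=5).fit_transform(X_missing)

    for name, Xi in [("mean", X_mean), ("kNN", X_knn)]:
        rmse = np.sqrt(np.mean((Xi[mask] - X[mask]) ** 2))
        print(f"{name} imputation RMSE: {rmse:.3f}")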

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes and investigates a metaheuristic tabu search algorithm (TSA) that generates optimal or near-optimal solution sequences for the feedback length minimization problem (FLMP) associated with a design structure matrix (DSM). The FLMP is a non-linear combinatorial optimization problem belonging to the NP-hard class, and therefore finding an exact optimal solution is very hard and time-consuming, especially on medium and large problem instances. First, we introduce the subject and provide a review of the related literature and problem definitions. Using the tabu search method (TSM) paradigm, this paper presents a new tabu search algorithm that generates optimal or sub-optimal solutions for the feedback length minimization problem, using two different neighborhoods based on swapping two activities and shifting an activity to a different position. Furthermore, this paper includes numerical results for analyzing the performance of the proposed TSA and for fixing the proper values of its parameters. We then compare our results on benchmark problems with those already published in the literature. We conclude that the proposed tabu search algorithm is very promising, because it outperforms the existing methods and because no other tabu search method for the FLMP has been reported in the literature. Applied to the process layer of multidimensional design structure matrices, the proposed tabu search algorithm proves to be a key optimization method for optimal product development.
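
The two neighborhoods named above can be sketched inside a basic tabu loop as follows (a minimal Python sketch; the feedback-length cost shown is one common definition, and the paper's exact cost and parameter settings are not reproduced):

    import itertools

    def feedback_length(order, dsm):
        # One common cost: dsm[i][j] == 1 means activity i needs the output
        # of activity j; add the span whenever j is sequenced after i.
        pos = {a: k for k, a in enumerate(order)}
        return sum(pos[j] - pos[i] for i in order for j in order
                   if dsm[i][j] and pos[j] > pos[i])

    def neighbours(order):
        # The two neighbourhoods: swap two activities, and shift one
        # activity to a different position.
        n = len(order)
        for i, j in itertools.combinations(range(n), 2):
            s = list(order); s[i], s[j] = s[j], s[i]
            yield ("swap", order[i], order[j]), s
        for i in range(n):
            for j in range(n):
                if i != j:
                    s = list(order); s.insert(j, s.pop(i))
                    yield ("shift", order[i], j), s

    def tabu_search(dsm, order, n_iters=200, tenure=10):
        # order: a permutation of range(len(dsm)); dsm: 0/1 list of lists.
        best, best_cost = list(order), feedback_length(order, dsm)
        cur, tabu = list(order), {}
        for it in range(n_iters):
            scored = [(feedback_length(s, dsm), m, s) for m, s in neighbours(cur)]
            # Keep non-tabu moves, plus tabu moves that beat the best
            # cost found so far (aspiration criterion).
            allowed = [c for c in scored
                       if tabu.get(c[1], -1) < it or c[0] < best_cost]
            cost, move, cur = min(allowed)
            tabu[move] = it + tenure      # forbid repeating the move for a while
            if cost < best_cost:
                best, best_cost = list(cur), cost
        return best, best_cost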

Relevance:

30.00%

Publisher:

Abstract:

Image and video compression play a major role in the world today, allowing the storage and transmission of large volumes of multimedia content. However, the processing of this information requires high computational resources, hence improving the computational performance of these compression algorithms is very important. The Multidimensional Multiscale Parser (MMP) is a pattern-matching-based compression algorithm for multimedia content, namely images, achieving high compression ratios while maintaining good image quality (Rodrigues et al. [2008]). However, in comparison with other existing algorithms, this algorithm takes some time to execute. Therefore, two parallel implementations for GPUs were proposed by Ribeiro [2016] and Silva [2015], in CUDA and OpenCL-GPU respectively. In this dissertation, to complement the referred work, we propose two parallel versions that run the MMP algorithm on the CPU: one resorting to OpenMP and another that converts the existing OpenCL-GPU version into OpenCL-CPU. The proposed solutions are able to improve the computational performance of MMP by factors of 3 and 2.7, respectively. The High Efficiency Video Coding (HEVC/H.265) standard is the most recent standard for the compression of image and video. Its impressive compression performance makes it a target for many adaptations, particularly for holoscopic image/video processing (or light field). Some of the proposed modifications to encode this new multimedia content are based on geometry-based disparity compensation (SS), developed by Conti et al. [2014], and on a Geometric Transformations (GT) module, proposed by Monteiro et al. [2015]. These compression algorithms for holoscopic images based on HEVC implement a specific search for similar micro-images that is more efficient than the one performed by HEVC, but considerably slower. In order to obtain better execution times, we chose the OpenCL API as the GPU enabling language to increase the module's performance. With its most costly setting, we are able to reduce the GT module execution time from 6.9 days to less than 4 hours, effectively attaining a speedup of 45.

Relevance:

30.00%

Publisher:

Abstract:

Process systems design, operation and synthesis problems under uncertainty can readily be formulated as two-stage stochastic mixed-integer linear and nonlinear (nonconvex) programming (MILP and MINLP) problems. These problems, with a scenario-based formulation, lead to large-scale MILPs/MINLPs that are well structured. The first part of the thesis proposes a new finitely convergent cross decomposition method (CD), in which Benders decomposition (BD) and Dantzig-Wolfe decomposition (DWD) are combined in a unified framework to improve the solution of scenario-based two-stage stochastic MILPs. This method alternates between DWD iterations and BD iterations, where DWD restricted master problems and BD primal problems yield a sequence of upper bounds, and BD relaxed master problems yield a sequence of lower bounds. A variant of CD that includes multiple columns per iteration of the DWD restricted master problem and multiple cuts per iteration of the BD relaxed master problem, called multicolumn-multicut CD, is then developed to improve solution time. Finally, an extended cross decomposition method (ECD) for solving two-stage stochastic programs with risk constraints is proposed. In this approach, CD at the first level and DWD at the second level are used to solve the original problem to optimality. ECD has a computational advantage over a bilevel decomposition strategy or over solving the monolithic problem with an MILP solver. The second part of the thesis develops a joint decomposition approach combining Lagrangian decomposition (LD) and generalized Benders decomposition (GBD) to efficiently solve stochastic mixed-integer nonlinear nonconvex programming problems to global optimality, without the need for explicit branch-and-bound search. In this approach, LD subproblems and GBD subproblems are systematically solved in a single framework. The relaxed master problem, obtained from the reformulation of the original problem, is solved only when necessary. A convexification of the relaxed master problem and a domain reduction procedure are integrated into the decomposition framework to improve solution efficiency. Using case studies taken from renewable resource and fossil-fuel based applications in process systems engineering, it is shown that these novel decomposition approaches have significant benefits over classical decomposition methods and state-of-the-art MILP/MINLP global optimization solvers.
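
Schematically, the scenario-based two-stage stochastic program that these decompositions target has the generic form (illustrative notation, not the thesis'):

    \min_{x,\, y_1,\dots,y_S} \; c^\top x + \sum_{s=1}^{S} p_s\, d_s^\top y_s
    \quad \text{s.t.} \quad A x \ge b, \quad
    T_s x + W_s y_s \ge h_s, \quad x \in X, \quad y_s \ge 0,
    \quad s = 1, \dots, S,

where x collects the first-stage (here mixed-integer) decisions and y_s the second-stage recourse under scenario s, which occurs with probability p_s. In the cross decomposition described above, the BD primal problems fix x and solve the scenario subproblems, the DWD restricted masters optimize over the columns generated so far (both yielding upper bounds), and the BD relaxed masters supply the sequence of lower bounds.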

Relevance:

30.00%

Publisher:

Abstract:

This work falls within one of the major fields of organizational studies: strategy. The classical perspective in this field promoted the idea that projecting toward the future means designing a plan (a series of deliberate actions). Later advances showed that strategy could be understood in other ways. However, the evolution of the field to some extent privileged the classical view, establishing, for example, multiple models for 'formulating' a strategy while relegating to second place the way in which a strategy can 'emerge'. The purpose of this research is therefore to contribute to the current level of understanding of emergent strategies in organizations. To do so, we considered a concept that is opposed to, though complementary with, 'planning' and, in fact, very close in nature to that kind of strategy: improvisation. Since this concept has been enriched by valuable contributions from the world of music, we drew on the knowledge of that domain, using 'metaphor' as a theoretical device to understand it and achieve the proposed objective. The results show that 1) deliberate and emergent strategies coexist and complement each other, 2) improvisation is always present in the organizational context, 3) improvisation is more intense in the 'how' of strategy than in the 'what' and, contrary to the conventional idea on the matter, 4) a certain amount of preparation is required in order to improvise adequately.

Relevance:

30.00%

Publisher:

Abstract:

This work was carried out with the aim of obtaining a complete view of leadership theories, conceiving leadership as a process, and examining the various ways it is applied in contemporary organizations. The topic is approached from the organizational perspective, an equally complex world, without disregarding its importance in other spheres such as education, politics, or the direction of the state. Its focus relates to the academic programme of which it is the culmination, and it is framed within the constitutional perspective of the Colombian Political Charter, which recognizes the capital importance of economic activity and private initiative in the creation of companies. The various visions of leadership have been applied in different ways in contemporary organizations and have produced diverse results. Today it is not possible to think of an organization that has not defined its form of leadership; consequently, a multitude of theories converge in the business field, and it cannot be claimed that any single one of them enables adequate management and the fulfilment of mission objectives. For this reason leadership has come to be conceived as a complex function, in a world where organizations themselves are characterized not only by the complexity of their actions and composition, but also because this complexity belongs to the world of globalization as well. Organizations, conceived metaphorically as machines, manage to reconstitute their structures as they interact with others in the globalized world. Adapting to changing circumstances makes organizations conglomerates in permanent dynamism and evolution. In this context it can be said that leadership is also complex, and that transformational leadership is the approach that comes closest to this sense of complexity.