815 results for Synchronization Algorithm
Abstract:
A systolic array to implement lattice-reduction-aided linear detection is proposed for a MIMO receiver. The lattice reduction algorithm and the ensuing linear detection are operated in the same array, which makes the design hardware-efficient. The all-swap lattice reduction (ASLR) algorithm is considered for the systolic design. ASLR is a variant of the LLL algorithm that processes all lattice basis vectors within one iteration. Lattice-reduction-aided linear detection based on the ASLR and LLL algorithms has very similar bit-error-rate performance, while ASLR is more time-efficient in the systolic array, especially for systems with a large number of antennas.
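As a rough illustration of the idea of lattice-reduction-aided linear detection, here is a minimal sketch for a 2x2 real-valued channel. It uses Lagrange (Gauss) reduction, the two-dimensional analogue of LLL, as a stand-in for the paper's ASLR algorithm, with zero-forcing as the linear detector; the systolic-array mapping is not shown and all function names are our own.

```python
import numpy as np

def gauss_reduce(H):
    """Lagrange/Gauss reduction of a 2-column real basis H = [h1 h2].
    Returns a reduced basis Hr and a unimodular T with Hr = H @ T.
    (A 2-D stand-in for LLL/ASLR; illustration only.)"""
    h = [H[:, 0].astype(float).copy(), H[:, 1].astype(float).copy()]
    t = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
    while True:
        # keep the shorter vector first
        if np.dot(h[0], h[0]) > np.dot(h[1], h[1]):
            h[0], h[1] = h[1], h[0]
            t[0], t[1] = t[1], t[0]
        # size-reduce the longer vector against the shorter one
        mu = int(np.rint(np.dot(h[0], h[1]) / np.dot(h[0], h[0])))
        if mu == 0:
            break
        h[1] = h[1] - mu * h[0]
        t[1] = t[1] - mu * t[0]
    return np.column_stack(h), np.column_stack(t)

def lr_aided_zf(H, y):
    """Lattice-reduction-aided zero-forcing detection for integer symbols."""
    Hr, T = gauss_reduce(H)
    z = np.linalg.lstsq(Hr, y, rcond=None)[0]   # ZF in the reduced basis
    z = np.rint(z)                              # quantize to the integer lattice
    return T @ z                                # map back to the original symbols
```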
Abstract:
From a managerial point of view, the more efficient, simple, and parameter-free (ESP) an algorithm is, the more likely it will be used in practice for solving real-life problems. Following this principle, an ESP algorithm for solving the Permutation Flowshop Sequencing Problem (PFSP) is proposed in this article. Using an Iterated Local Search (ILS) framework, the so-called ILS-ESP algorithm is able to compete in performance with other well-known ILS-based approaches, which are considered among the most efficient algorithms for the PFSP. However, while other similar approaches still employ several parameters that can affect their performance if not properly chosen, our algorithm does not require any particular fine-tuning process, since it uses basic "common sense" rules for the local search, perturbation, and acceptance-criterion stages of the ILS metaheuristic. Our approach defines a new operator for the ILS perturbation process, a new acceptance criterion based on extremely simple and transparent rules, and a biased randomization of the initial solution to randomly generate different alternative initial solutions of similar quality, which is attained by applying a biased randomization to a classical PFSP heuristic. This diversification of the initial solution aims at avoiding poorly designed starting points and thus allows the methodology to take advantage of current trends in parallel and distributed computing. A set of extensive tests, based on literature benchmarks, has been carried out in order to validate our algorithm and compare it against other approaches. These tests show that our parameter-free algorithm is able to compete with state-of-the-art metaheuristics for the PFSP. The experiments also show that, when parallel computing is used, the top ILS-based metaheuristic can be improved simply by incorporating our biased randomization process together with a high-quality pseudo-random number generator.
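For context, the skeleton of an Iterated Local Search for the PFSP looks roughly as follows. This is a generic sketch with placeholder operators (insertion local search, random-swap perturbation, accept-if-not-worse), not the ILS-ESP operators or the biased-randomization procedure proposed in the article.

```python
import random

def makespan(perm, proc):
    """Makespan of a job permutation; proc[j][m] = processing time of job j on machine m."""
    m = len(proc[0])
    completion = [0.0] * m
    for j in perm:
        for k in range(m):
            completion[k] = max(completion[k], completion[k - 1] if k else 0.0) + proc[j][k]
    return completion[-1]

def local_search(perm, proc):
    """First-improvement insertion neighbourhood (generic placeholder)."""
    best, best_cost = perm[:], makespan(perm, proc)
    improved = True
    while improved:
        improved = False
        for i in range(len(best)):
            for j in range(len(best)):
                if i == j:
                    continue
                cand = best[:]
                cand.insert(j, cand.pop(i))
                cost = makespan(cand, proc)
                if cost < best_cost:
                    best, best_cost, improved = cand, cost, True
    return best, best_cost

def ils(proc, iters=200, seed=0):
    """Generic Iterated Local Search: perturb, improve locally, accept if not worse."""
    rng = random.Random(seed)
    current, cur_cost = local_search(list(range(len(proc))), proc)
    best, best_cost = current[:], cur_cost
    for _ in range(iters):
        cand = current[:]
        i, j = rng.sample(range(len(cand)), 2)     # perturbation: random swap
        cand[i], cand[j] = cand[j], cand[i]
        cand, cost = local_search(cand, proc)
        if cost <= cur_cost:                       # simple acceptance criterion
            current, cur_cost = cand, cost
        if cost < best_cost:
            best, best_cost = cand[:], cost
    return best, best_cost
```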
Abstract:
The standard one-machine scheduling problem consists in scheduling a set of jobs on a single machine that can handle only one job at a time, minimizing the maximum lateness. Each job becomes available for processing at its release date, requires a known processing time, and, after processing is finished, is delivered after a certain time. There can also be precedence constraints between pairs of jobs, requiring that the first job be completed before the second job can start. An extension of this problem assigns a time interval between the processing of the jobs associated with the precedence constraints, known as finish-start time-lags. In the presence of these constraints, the problem is NP-hard even if preemption is allowed. In this work, we consider a special case of the one-machine preemptive scheduling problem with time-lags, where the time-lags have a chain form, and propose a polynomial algorithm to solve it. The algorithm consists of a polynomial number of calls to the preemptive version of the Longest Tail Heuristic. One application of the method is to obtain lower bounds for NP-hard one-machine and job-shop scheduling problems. We present some computational results of this application, followed by some conclusions.
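A minimal sketch of the subroutine the abstract builds on, the preemptive version of the Longest Tail Heuristic (Schrage's rule) for one machine with release dates and delivery times, is given below; the chain-form time-lags and the polynomial sequence of calls described in the paper are not reproduced.

```python
import heapq

def preemptive_longest_tail(jobs):
    """Preemptive Longest Tail (Schrage) rule for one machine with release dates
    and delivery times. jobs: list of (release, processing, tail) triples.
    Returns max_j (C_j + q_j), which is optimal for the preemptive problem and a
    lower bound for the non-preemptive one."""
    order = sorted(range(len(jobs)), key=lambda j: jobs[j][0])  # jobs by release date
    remaining = [p for _, p, _ in jobs]
    ready = []                        # max-heap on tail: entries are (-tail, job)
    t, i, lmax = 0, 0, 0
    while i < len(order) or ready:
        if not ready:                 # machine idle: jump to the next release
            t = max(t, jobs[order[i]][0])
        while i < len(order) and jobs[order[i]][0] <= t:        # release available jobs
            heapq.heappush(ready, (-jobs[order[i]][2], order[i]))
            i += 1
        _, j = ready[0]               # job with the longest tail
        next_release = jobs[order[i]][0] if i < len(order) else float("inf")
        run = min(remaining[j], next_release - t)   # run until done or preempted
        t += run
        remaining[j] -= run
        if remaining[j] == 0:
            heapq.heappop(ready)
            lmax = max(lmax, t + jobs[j][2])
    return lmax

# e.g. preemptive_longest_tail([(0, 4, 5), (1, 2, 8), (3, 3, 1)])  ->  11
```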
Abstract:
In this paper we propose a Pyramidal Classification Algorithm which, together with an appropriate aggregation index, produces an indexed pseudo-hierarchy (in the strict sense) without inversions or crossings. The computer implementation of the algorithm makes it possible to carry out simulation tests by Monte Carlo methods in order to study the efficiency and sensitivity of the pyramidal methods of the Maximum, the Minimum, and UPGMA. The results shown in this paper may help in choosing between the three classification methods proposed, in order to obtain the classification that best fits the original structure of the population, provided we have a priori information concerning this structure.
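As a point of reference, the three aggregation indices mentioned (Minimum, Maximum, and UPGMA) correspond to single, complete, and average linkage in ordinary hierarchical clustering, which SciPy implements directly; a Monte Carlo-style check of how well each index recovers a simulated structure could be set up as below. The pyramidal (indexed pseudo-hierarchy) construction itself is not part of SciPy and is not shown.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import pdist

# Generate one simulated sample with three groups, cluster it with the
# Minimum (single), Maximum (complete) and UPGMA (average) indices, and
# compare cophenetic distances against the original distances.
rng = np.random.default_rng(0)
sample = np.vstack([rng.normal(loc, 0.5, size=(30, 2)) for loc in ((0, 0), (4, 0), (2, 4))])
dists = pdist(sample)

for name, method in (("Minimum", "single"), ("Maximum", "complete"), ("UPGMA", "average")):
    tree = linkage(dists, method=method)
    corr, _ = cophenet(tree, dists)
    print(f"{name:8s} cophenetic correlation: {corr:.3f}")
```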
Abstract:
We present a simple randomized procedure for the prediction of a binary sequence. The algorithm uses ideas from recent developments in the theory of prediction of individual sequences. We show that if the sequence is a realization of a stationary and ergodic random process, then the average number of mistakes converges, almost surely, to that of the optimum, given by the Bayes predictor.
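A generic sketch in the spirit of individual-sequence prediction is shown below: several Markov experts of increasing order are combined by exponential weighting, and the next bit is guessed at random with the mixture probability. It illustrates the flavour of such randomized predictors, not the specific estimator analysed in the paper.

```python
import numpy as np

def predict_sequence(bits, max_order=3, eta=0.5, seed=0):
    """Randomized online prediction of a binary sequence with expert advice.
    Expert k predicts from empirical counts given the last k symbols; experts are
    combined by exponential weighting and the guess is drawn with the mixture
    probability. Returns the empirical mistake rate. (Illustrative sketch only.)"""
    rng = np.random.default_rng(seed)
    counts = [dict() for _ in range(max_order + 1)]   # context -> [#0, #1]
    weights = np.ones(max_order + 1)
    mistakes = 0
    for t, y in enumerate(bits):
        probs = np.empty(max_order + 1)               # each expert's P(next bit = 1)
        for k in range(max_order + 1):
            ctx = tuple(bits[t - k:t]) if t >= k else None
            c = counts[k].get(ctx, [1, 1])            # Laplace-smoothed counts
            probs[k] = c[1] / (c[0] + c[1])
        p = np.dot(weights, probs) / weights.sum()    # weighted mixture
        guess = int(rng.random() < p)                 # randomized prediction
        mistakes += int(guess != y)
        weights *= np.exp(-eta * np.abs(probs - y))   # exponential weight update
        for k in range(max_order + 1):                # update context counts
            if t >= k:
                counts[k].setdefault(tuple(bits[t - k:t]), [1, 1])[y] += 1
    return mistakes / max(len(bits), 1)

# e.g. predict_sequence([1, 0, 1, 1, 0, 1, 1, 0] * 50)  ->  empirical mistake rate
```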
Abstract:
This paper compares two well-known scan matching algorithms: MbICP and pIC. As a result of the study, the MSISpIC, a probabilistic scan matching algorithm for the localization of an Autonomous Underwater Vehicle (AUV), is proposed. The technique uses range scans gathered with a Mechanical Scanning Imaging Sonar (MSIS), and the robot displacement estimated through dead-reckoning with the help of a Doppler Velocity Log (DVL) and a Motion Reference Unit (MRU). The proposed method is an extension of the pIC algorithm. Its major contributions are: 1) using an EKF to estimate the local path travelled by the robot while grabbing the scan, as well as its uncertainty, and 2) a method to group all the data grabbed along the path described by the robot into a unique scan with a convenient uncertainty model. The algorithm has been tested on an AUV guided along a 600 m path within a marina environment, with satisfactory results.
Abstract:
Nominal unification is an extension of first-order unification where terms can contain binders and unification is performed modulo α-equivalence. Here we prove that the existence of nominal unifiers can be decided in quadratic time. First, we linearly reduce nominal unification problems to a sequence of freshness and equality constraints between atoms, modulo a permutation, using ideas of Paterson and Wegman for first-order unification. Second, we prove that the solvability of these reduced problems can be checked in quadratic time. Finally, we point out how, using ideas of Brown and Tarjan for unbalanced merging, these reduced problems could be solved more efficiently.
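For reference, the first-order case that nominal unification extends can be sketched in a few lines; the example below is a plain Robinson-style syntactic unifier and does not handle binders, freshness constraints, or α-equivalence.

```python
def unify(t1, t2, subst=None):
    """Minimal first-order syntactic unification (Robinson style).
    Terms: variables are strings starting with '?', compound terms are tuples
    (functor, arg1, ..., argn), constants are plain strings.
    Returns a substitution dict or None on failure. (Illustration of the
    first-order base case only.)"""
    if subst is None:
        subst = {}

    def walk(t):
        while isinstance(t, str) and t.startswith('?') and t in subst:
            t = subst[t]
        return t

    def occurs(v, t):
        t = walk(t)
        if t == v:
            return True
        return isinstance(t, tuple) and any(occurs(v, a) for a in t[1:])

    stack = [(t1, t2)]
    while stack:
        a, b = (walk(x) for x in stack.pop())
        if a == b:
            continue
        if isinstance(a, str) and a.startswith('?'):
            if occurs(a, b):
                return None
            subst[a] = b
        elif isinstance(b, str) and b.startswith('?'):
            if occurs(b, a):
                return None
            subst[b] = a
        elif isinstance(a, tuple) and isinstance(b, tuple) and a[0] == b[0] and len(a) == len(b):
            stack.extend(zip(a[1:], b[1:]))
        else:
            return None
    return subst

# e.g. unify(('f', '?x', 'b'), ('f', 'a', '?y'))  ->  {'?x': 'a', '?y': 'b'}
```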
Abstract:
Background: We previously derived a clinical prognostic algorithm to identify patients with pulmonary embolism (PE) who are at low risk of short-term mortality and who could be safely discharged early or treated entirely in an outpatient setting. Objectives: To externally validate the clinical prognostic algorithm in an independent patient sample. Methods: We validated the algorithm in 983 consecutive patients prospectively diagnosed with PE at an emergency department of a university hospital. Patients with none of the algorithm's 10 prognostic variables (age ≥ 70 years, cancer, heart failure, chronic lung disease, chronic renal disease, cerebrovascular disease, pulse ≥ 110/min, systolic blood pressure < 100 mm Hg, oxygen saturation < 90%, and altered mental status) at baseline were defined as low-risk. We compared 30-day overall mortality among low-risk patients based on the algorithm between the validation sample and the original derivation sample. We also assessed the rate of PE-related and bleeding-related mortality among low-risk patients. Results: Overall, the algorithm classified 16.3% of patients with PE as low-risk. Mortality at 30 days was 1.9% among low-risk patients and did not differ between the validation and the original derivation sample. Among low-risk patients, only 0.6% died from definite or possible PE, and none died from bleeding. Conclusions: This study validates an easy-to-use clinical prognostic algorithm for PE that accurately identifies patients with PE who are at low risk of short-term mortality. Low-risk patients based on our algorithm are potential candidates for less costly outpatient treatment.
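The decision rule described above is simple enough to state directly in code; the sketch below uses our own illustrative field names, with the thresholds copied from the abstract.

```python
def is_low_risk(patient):
    """Classify a PE patient as low-risk when none of the algorithm's ten
    prognostic variables is present, as described in the abstract.
    `patient` is a dict; the key names are illustrative choices of ours."""
    criteria = [
        patient["age"] >= 70,
        patient["cancer"],
        patient["heart_failure"],
        patient["chronic_lung_disease"],
        patient["chronic_renal_disease"],
        patient["cerebrovascular_disease"],
        patient["pulse"] >= 110,                 # beats per minute
        patient["systolic_bp"] < 100,            # mm Hg
        patient["oxygen_saturation"] < 90,       # percent
        patient["altered_mental_status"],
    ]
    return not any(criteria)

# Example: a 55-year-old with no comorbidities and normal vital signs is low-risk.
example = dict(age=55, cancer=False, heart_failure=False, chronic_lung_disease=False,
               chronic_renal_disease=False, cerebrovascular_disease=False,
               pulse=82, systolic_bp=128, oxygen_saturation=97,
               altered_mental_status=False)
print(is_low_risk(example))   # True
```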
Abstract:
The development and testing of an iterative reconstruction algorithm for emission tomography based on Bayesian statistical concepts are described. The algorithm uses the entropy of the generated image as a prior distribution, can be accelerated by the choice of an exponent, and converges uniformly to feasible images through the choice of one adjustable parameter. A feasible image is defined as one that is consistent with the initial data (i.e. an image that, if truly a source of radiation in a patient, could have generated the initial data by the Poisson process that governs radioactive disintegration). The fundamental ideas of Bayesian reconstruction are discussed, along with the use of an entropy prior with an adjustable contrast parameter, the use of the likelihood with data increment parameters as the conditional probability, and the development of the new fast maximum a posteriori with entropy (FMAPE) algorithm by the successive substitution method. It is shown that, in the maximum likelihood estimator (MLE) and FMAPE algorithms, the only correct choice of initial image for the iterative procedure in the absence of a priori knowledge about the image configuration is a uniform field.
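To make the remark about the starting image concrete, the standard MLE-EM (MLEM) update used as the baseline in emission tomography is sketched below with a uniform initial field; this is the plain maximum-likelihood iteration, not the FMAPE algorithm with its entropy prior and acceleration exponent.

```python
import numpy as np

def mlem(A, counts, n_iter=50, init=None):
    """Standard MLE-EM (MLEM) iteration for emission tomography,
        x_{k+1} = x_k / (A^T 1) * A^T( counts / (A x_k) ),
    with A the system (projection) matrix and `counts` the measured data.
    The customary initial estimate, used here by default, is a uniform field."""
    n_pix = A.shape[1]
    x = np.full(n_pix, counts.sum() / n_pix) if init is None else init.astype(float)
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                          # forward projection
        ratio = np.divide(counts, proj, out=np.zeros_like(proj), where=proj > 0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```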
Abstract:
To improve the yield of cytogenetic analysis in patients with acute nonlymphocytic leukemia (ANLL), six culture conditions for bone marrow or peripheral blood cells were tested in parallel. Two conditioned media (CM), phytohemagglutinin-leukocyte conditioned medium (PHA-LCM) and 5637 CM, nutritive elements (NE), and methotrexate (MTX) cell synchronization were investigated in 14 patients presenting with either inv(16)/t(16;16) (group 1, n = 9 patients) or t(15;17) (group 2, n = 5). The criteria used to identify the most favorable culture conditions were the mitotic index (MI), the morphological index (MorI), and the percentage of abnormal metaphases. In the presence of PHA-LCM and 5637 CM, the MI was significantly increased in group 2, whereas under the MTX conditions the MI remained very low in both groups. The values of the MorI did not reveal any significant change in chromosome resolution between the conditions in either group. The addition of NE did not have a positive effect on the quantity or quality of metaphases. Because of the variability of the response of leukemic cells to different stimulations in vitro, several culture conditions in parallel are required to ensure a satisfactory yield of chromosome analysis in ANLL.
Abstract:
Isolated ventricular non-compaction (IVNC) is a rare, congenital, unclassified cardiomyopathy characterized by a prominent trabecular meshwork and deep recesses. The major clinical manifestations of IVNC are heart failure, atrial and ventricular arrhythmias, and thrombo-embolic events. We describe the case of a 69-year-old woman in whom the diagnosis of IVNC was made late, whereas earlier echocardiographic examinations had been considered normal. She had been known to have systolic left ventricular dysfunction for 3 years and then became symptomatic (NYHA III). In the past, she had suffered from multiple episodes of deep vein thrombosis and pulmonary embolism. The electrocardiogram revealed a wide QRS complex, and transthoracic echocardiography showed typical apical thickening of the left and right ventricular myocardial wall with two distinct layers. The ratio of non-compacted to compacted myocardium was >2:1. Cardiac MRI confirmed the echocardiographic images. Cerebral MRI revealed multiple ischaemic sequelae. In view of heart failure that remained refractory to medical treatment and met the classical criteria for cardiac re-synchronization therapy, as well as the ventricular arrhythmias, a biventricular automatic intracardiac defibrillator (biventricular ICD) was implanted. The 2-year follow-up period was characterized by improvement of the NYHA functional class from III to I and an increase in left ventricular function. We hereby present a case of IVNC with a favourable outcome after biventricular ICD implantation. Cardiac re-synchronization therapy could be considered in the management of this pathology.
Resting-state temporal synchronization networks emerge from connectivity topology and heterogeneity.
Abstract:
Spatial patterns of coherent activity across different brain areas have been identified during the resting-state fluctuations of the brain. However, recent studies indicate that resting-state activity is not stationary, but shows complex temporal dynamics. We were interested in the spatiotemporal dynamics of the phase interactions among resting-state fMRI BOLD signals from human subjects. We found that the global phase synchrony of the BOLD signals evolves on a characteristic ultra-slow (<0.01 Hz) time scale, and that its temporal variations reflect the transient formation and dissolution of multiple communities of synchronized brain regions. Synchronized communities reoccurred intermittently in time and across scanning sessions. We found that the synchronization communities relate to previously defined functional networks known to be engaged in sensory-motor or cognitive function, called resting-state networks (RSNs), including the default mode network, the somato-motor network, the visual network, the auditory network, the cognitive control networks, the self-referential network, and combinations of these and other RSNs. We studied the mechanism giving rise to the observed spatiotemporal synchronization dynamics by using a network model of phase oscillators connected through the brain's anatomical connectivity, estimated from human diffusion imaging data. The model consistently approximates the temporal and spatial synchronization patterns of the empirical data, and reveals that multiple clusters that transiently synchronize and desynchronize emerge from the complex topology of anatomical connections, provided that the oscillators are heterogeneous.
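A generic version of the kind of model described, a network of heterogeneous Kuramoto phase oscillators coupled through a connectivity matrix, can be simulated in a few lines; the coupling matrix, natural frequencies, and integration parameters below are illustrative placeholders rather than the study's fitted values.

```python
import numpy as np

def simulate_kuramoto(C, omega, K=1.0, dt=0.01, steps=20000, seed=0):
    """Euler integration of a network of heterogeneous Kuramoto phase oscillators,
        d(theta_i)/dt = omega_i + K * sum_j C_ij * sin(theta_j - theta_i),
    with C an anatomical-connectivity-like coupling matrix. Returns the final
    phases and the time course of the global order parameter."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, len(omega))
    sync = np.empty(steps)
    for t in range(steps):
        phase_diff = theta[None, :] - theta[:, None]        # theta_j - theta_i
        coupling = (C * np.sin(phase_diff)).sum(axis=1)
        theta = theta + dt * (omega + K * coupling)
        sync[t] = np.abs(np.exp(1j * theta).mean())         # global phase synchrony
    return theta, sync

# toy run: 60 regions, random symmetric connectivity, heterogeneous slow frequencies
rng = np.random.default_rng(1)
C = rng.random((60, 60))
C = (C + C.T) / 2
np.fill_diagonal(C, 0)
C /= C.sum(1, keepdims=True)
omega = rng.normal(2 * np.pi * 0.05, 2 * np.pi * 0.01, 60)   # ~0.05 Hz oscillators
theta, sync = simulate_kuramoto(C, omega, K=0.5)
```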
Abstract:
We consider stochastic partial differential equations with multiplicative noise. We derive an algorithm for the computer simulation of these equations. The algorithm is applied to study domain growth of a model with a conserved order parameter. The numerical results corroborate previous analytical predictions obtained by linear analysis.
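As a generic illustration of simulating a stochastic PDE with multiplicative noise, the sketch below integrates a one-dimensional equation with an explicit finite-difference Laplacian and an Euler-Maruyama step; it is not the algorithm derived in the paper, and the conserved-order-parameter growth model is not reproduced here.

```python
import numpy as np

def simulate_spde(n=256, L=2 * np.pi, D=1.0, lam=0.5, dt=1e-4, steps=5000, seed=0):
    """Explicit finite-difference / Euler-Maruyama simulation of a 1-D stochastic
    PDE with multiplicative noise,
        d(phi) = D * d2(phi)/dx2 * dt + lam * phi * dW(x, t),
    interpreted in the Ito sense, on a periodic grid of n points."""
    rng = np.random.default_rng(seed)
    dx = L / n
    phi = 1.0 + 0.1 * rng.standard_normal(n)                            # initial condition
    for _ in range(steps):
        lap = (np.roll(phi, -1) - 2 * phi + np.roll(phi, 1)) / dx**2    # periodic Laplacian
        noise = rng.standard_normal(n) * np.sqrt(dt / dx)               # space-time white noise
        phi = phi + D * lap * dt + lam * phi * noise
    return phi
```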