983 results for decision algorithm


Relevance:

30.00%

Publisher:

Abstract:

Multiple-input multiple-output (MIMO) technology is an emerging solution for high data rate wireless communications. We develop soft-decision based equalization techniques for frequency-selective MIMO channels in the quest for low-complexity equalizers with BER performance competitive with that of ML sequence detection. We first propose soft decision equalization (SDE), and demonstrate that decision feedback equalization (DFE) based on soft decisions, expressed via the posterior probabilities associated with feedback symbols, is able to outperform hard-decision DFE, with a low computational cost that is polynomial in the number of symbols to be recovered and linear in the signal constellation size. Building upon the probabilistic data association (PDA) multiuser detector, we present two new MIMO equalization solutions to handle the distinctive channel memory. With their low complexity, simple implementations, and the impressive near-optimum performance offered by iterative soft-decision processing, the proposed SDE methods are attractive candidates to deliver efficient reception solutions to practical high-capacity MIMO systems. Motivated by the need for low-complexity receiver processing, we further present an alternative low-complexity soft-decision equalization approach for frequency-selective MIMO communication systems. With the help of iterative processing, two detection and estimation schemes based on second-order statistics are harmoniously put together to yield a two-part receiver structure: local multiuser detection (MUD) using soft-decision probabilistic data association (PDA) detection, and dynamic noise-interference tracking using Kalman filtering. The proposed Kalman-PDA detector performs local MUD within a sub-block of the received data instead of over the entire data set, to reduce the computational load. At the same time, all the interference affecting the local sub-block, including both multiple-access and inter-symbol interference, is properly modeled as the state vector of a linear system and dynamically tracked by Kalman filtering. Two types of Kalman filters are designed, both of which are able to track a finite impulse response (FIR) MIMO channel of any memory length. The overall algorithms enjoy low complexity that is only polynomial in the number of information-bearing bits to be detected, regardless of the data block size. Furthermore, we introduce two optional performance-enhancing techniques: cross-layer automatic repeat request (ARQ) for uncoded systems and a code-aided method for coded systems. We take Kalman-PDA as an example, and show via simulations that both techniques can render error performance that is better than Kalman-PDA alone and competitive with sphere decoding. Finally, we consider the case where channel state information (CSI) is not perfectly known to the receiver, and present an iterative channel estimation algorithm. Simulations show that the performance of SDE with channel estimation approaches that of SDE with perfect CSI.
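
A minimal Python sketch of the soft-feedback idea described above, assuming a QPSK constellation and given symbol posteriors (both illustrative; the paper's exact SDE recursion is not reproduced here):

    import numpy as np

    # Soft feedback: instead of feeding back a hard symbol decision, feed back
    # the posterior mean of the symbol over the constellation.
    constellation = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)  # QPSK

    def soft_symbol(posteriors):
        """Posterior-weighted symbol estimate used as soft feedback."""
        return np.dot(posteriors, constellation)

    def hard_symbol(posteriors):
        """Conventional hard decision: most probable constellation point."""
        return constellation[np.argmax(posteriors)]

    p = np.array([0.55, 0.25, 0.15, 0.05])   # example posteriors for one symbol
    print(soft_symbol(p), hard_symbol(p))

When the posteriors are nearly uniform, the soft symbol shrinks toward zero, so unreliable decisions contribute little to the feedback, which is the intuition behind soft-decision DFE outperforming its hard-decision counterpart.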

Relevance:

30.00%

Publisher:

Abstract:

During a project, managers encounter numerous contingencies and are faced with the challenging task of making decisions that will effectively keep the project on track. This task is very challenging because construction projects are non-prototypical and the processes are irreversible. Therefore, it is critical to apply a methodological approach to develop a few alternative management decision strategies during the planning phase, which can be deployed to manage alternative scenarios resulting from expected and unexpected disruptions in the as-planned schedule. Such a methodology should have the following features, which are missing in the existing research: (1) looking at the effects of local decisions on the global project outcomes; (2) studying how a schedule responds to decisions and disruptive events, because the risk in a schedule is a function of the decisions made; (3) establishing a method to assess and improve the management decision strategies; and (4) developing project-specific decision strategies, because each construction project is unique and the lessons from a particular project cannot be easily applied to projects that have different contexts. The objective of this dissertation is to develop a schedule-based simulation framework to design, assess, and improve sequences of decisions for the execution stage. The contribution of this research is the introduction of decision strategies to manage a project and the establishment of an iterative methodology to continuously assess and improve decision strategies and schedules. Project managers or schedulers can implement the methodology to develop and identify schedules, accompanied by suitable decision strategies, to manage a project at the planning stage. The developed methodology also lays the foundation for an algorithm for continuously and automatically generating satisfactory schedules and strategies throughout the construction life of a project. Different from studying isolated daily decisions, the proposed framework introduces the notion of decision strategies to manage the construction process. A decision strategy is a sequence of interdependent decisions determined by resource allocation policies such as labor, material, equipment, and space policies. The schedule-based simulation framework consists of two parts: experiment design and result assessment. The core of the experiment design is the establishment of an iterative method to test and improve decision strategies and schedules, which is based on the introduction of decision strategies and the development of a schedule-based simulation testbed. The simulation testbed used is the Interactive Construction Decision Making Aid (ICDMA). ICDMA has a previously developed emulator that duplicates the construction process and a random event generator that allows the decision-maker to respond to disruptions in the emulation. It is used to study how the schedule responds to these disruptions and to the corresponding decisions made over the duration of the project, while accounting for cascading impacts and dependencies between activities. The dissertation is organized into two parts. The first part presents the existing research, identifies the departure points of this work, and develops a schedule-based simulation framework to design, assess, and improve decision strategies. In the second part, the proposed schedule-based simulation framework is applied to investigate specific research problems.
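
A toy Python sketch of the core notion (not ICDMA): a decision strategy expressed as a labor-allocation policy is applied day by day to an invented two-activity schedule with random disruptive events, and the global outcome (total duration) of the local daily decisions is measured:

    import random

    def simulate(policy, seed=0):
        random.seed(seed)
        remaining = {"A": 5, "B": 7}           # remaining work, in labor-days
        day = 0
        while any(v > 0 for v in remaining.values()):
            task = policy(remaining)           # the decision made this day
            if random.random() < 0.1:          # random disruptive event
                day += 1                       # day lost, no progress
                continue
            remaining[task] = max(0, remaining[task] - 1)
            day += 1
        return day                             # global outcome of local decisions

    critical_first = lambda rem: max(rem, key=rem.get)   # one candidate strategy
    print(simulate(critical_first))

Running alternative policies through the same disrupted scenarios is the kind of iterative assess-and-improve loop the framework formalizes.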

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND Although well established for suspected lower limb deep venous thrombosis, an algorithm combining a clinical decision score, d-dimer testing, and ultrasonography has not been evaluated for suspected upper extremity deep venous thrombosis (UEDVT). OBJECTIVE To assess the safety and feasibility of a new diagnostic algorithm in patients with clinically suspected UEDVT. DESIGN Diagnostic management study (ClinicalTrials.gov: NCT01324037). SETTING 16 hospitals in Europe and the United States. PATIENTS 406 inpatients and outpatients with suspected UEDVT. MEASUREMENTS The algorithm consisted of the sequential application of a clinical decision score, d-dimer testing, and ultrasonography. Patients were first categorized as likely or unlikely to have UEDVT; in those with an unlikely score and normal d-dimer levels, UEDVT was excluded. All other patients had (repeated) compression ultrasonography. The primary outcome was the 3-month incidence of symptomatic UEDVT and pulmonary embolism in patients with a normal diagnostic work-up. RESULTS The algorithm was feasible and completed in 390 of the 406 patients (96%). In 87 patients (21%), an unlikely score combined with normal d-dimer levels excluded UEDVT. Superficial venous thrombosis and UEDVT were diagnosed in 54 (13%) and 103 (25%) patients, respectively. All 249 patients with a normal diagnostic work-up, including those with protocol violations (n = 16), were followed for 3 months. One patient developed UEDVT during follow-up, for an overall failure rate of 0.4% (95% CI, 0.0% to 2.2%). LIMITATIONS This study was not powered to show the safety of the substrategies. d-Dimer testing was done locally. CONCLUSION The combination of a clinical decision score, d-dimer testing, and ultrasonography can safely and effectively exclude UEDVT. If confirmed by other studies, this algorithm has potential as a standard approach to suspected UEDVT. PRIMARY FUNDING SOURCE None.
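
The sequential work-up described above can be expressed as a short Python function. The branch structure follows the abstract; the input flags are abstractions, and no clinical thresholds or score cut-offs from the study are encoded:

    def uedvt_workup(decision_score_likely, d_dimer_normal, ultrasound_positive):
        # Step 1: unlikely score plus normal d-dimer rules out UEDVT, no imaging.
        if not decision_score_likely and d_dimer_normal:
            return "UEDVT excluded"
        # Step 2: all other patients undergo compression ultrasonography.
        if ultrasound_positive:
            return "UEDVT confirmed"
        return "repeat compression ultrasonography"

    print(uedvt_workup(decision_score_likely=False, d_dimer_normal=True,
                       ultrasound_positive=False))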

Relevance:

30.00%

Publisher:

Abstract:

Postpartum hemorrhage (PPH) is one of the main causes of maternal death, even in industrialized countries. It represents an emergency situation that necessitates a rapid decision and, in particular, an exact diagnosis and root cause analysis in order to initiate the correct therapeutic measures in interdisciplinary cooperation. In addition to established guidelines, the benefits of standardized therapy algorithms have been demonstrated. A therapy algorithm in the German language for the obstetric emergency of postpartum hemorrhage was not yet available. The establishment of an international (Germany, Austria and Switzerland, D-A-CH) "treatment algorithm for postpartum hemorrhage" was an interdisciplinary project based on the guidelines of the corresponding specialist societies (anesthesia and intensive care medicine, and obstetrics) in the three countries, as well as on comparable international algorithms for the therapy of PPH. Obstetrics and anesthesiology personnel must possess sufficient expertise for emergency situations despite low case numbers. The rarity of occurrence for individual patients and the life-threatening nature of the situation necessitate a structured approach, which the predetermined treatment algorithm provides. Furthermore, the algorithm presents the opportunity to train for emergency situations in an interdisciplinary team.

Relevance:

30.00%

Publisher:

Abstract:

Any image-processing object detection algorithm somehow tries to integrate the object light (Recognition Step) and applies statistical criteria to distinguish objects of interest from other objects or from pure background (Decision Step). There are various possibilities for how these two basic steps can be realized, as can be seen in the different detection methods proposed in the literature. An ideal detection algorithm should provide high recognition sensitivity with high decision accuracy and require a reasonable computational effort. In reality, a gain in sensitivity is usually only possible with a loss in decision accuracy and with a higher computational effort. So, automatic detection of faint streaks is still a challenge. This paper presents a detection algorithm using spatial filters simulating the geometrical form of possible streaks on a CCD image, realized by image convolution. The goal of this method is to generate a more or less perfect match between a streak and a filter by varying the length and orientation of the filters. The convolution answers are accepted or rejected according to an overall threshold given by the background statistics. As a first result, this approach yields a huge number of accepted answers, due to filters partially covering streaks or remaining stars. To avoid this, a set of additional acceptance criteria has been included in the detection method. All criteria parameters are justified by background and streak statistics, and they affect the detection sensitivity only marginally. Tests on images containing simulated streaks and on real images containing satellite streaks show a very promising sensitivity, reliability, and running speed for this detection method. Since all method parameters are based on statistics, the true alarm as well as the false alarm probability are well controllable. Moreover, the proposed method does not pose any extraordinary demands on the computer hardware or on the image acquisition process.
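
A hedged Python sketch of the matched-filter idea: convolve the image with thin line kernels of varying orientation and keep responses above a threshold derived from background statistics. Kernel length, angle set, and the 5-sigma threshold are illustrative choices, not the paper's parameters:

    import numpy as np
    from scipy import ndimage

    def streak_response(image, length=15, angles=(0, 45, 90, 135)):
        responses = []
        for a in angles:
            k = np.zeros((length, length))
            k[length // 2, :] = 1.0            # thin horizontal line segment
            k = ndimage.rotate(k, a, reshape=False, order=1)
            k /= k.sum()                       # normalize the filter
            responses.append(ndimage.convolve(image, k))
        return np.max(responses, axis=0)       # best match over orientations

    img = np.random.normal(0, 1, (64, 64))     # pure background for the demo
    resp = streak_response(img)
    thresh = resp.mean() + 5 * resp.std()      # threshold from background stats
    print((resp > thresh).sum(), "pixels flagged")

Varying the filter length as well as the angle, as the paper does, would extend the loop with a second parameter; the additional acceptance criteria that prune partial matches are not modeled here.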

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we propose the distributed bees algorithm (DBA) for task allocation in a swarm of robots. In the proposed scenario, task allocation consists of assigning the robots to the targets found in a 2-D arena. The expected distribution is obtained from the targets' qualities, which are represented as scalar values. The decision-making mechanism is distributed, and robots autonomously choose their assignments taking into account the targets' qualities and distances. We tested the scalability of the proposed DBA algorithm in terms of the number of robots and the number of targets. To that end, experiments were performed in a simulator for various sets of parameters, including the number of robots, the number of targets, and the targets' utilities. Control parameters inherent to DBA were tuned to test how they affect the final robot distribution. The simulation results show that, as the robot swarm size increases, the distribution error decreases.
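
An illustrative Python sketch of the distributed, probabilistic choice: each robot picks a target with probability that grows with quality and shrinks with distance. The abstract does not give DBA's exact probability law, so the quality/distance ratio below is an assumption for illustration only:

    import numpy as np

    def choose_target(qualities, distances, rng=np.random.default_rng(0)):
        # Assumed weighting: closer, higher-quality targets are more attractive.
        weights = np.asarray(qualities) / np.asarray(distances)
        p = weights / weights.sum()
        return rng.choice(len(qualities), p=p)

    # 1000 independent robot decisions over two targets of unequal quality.
    picks = [choose_target([3.0, 1.0], [2.0, 1.0]) for _ in range(1000)]
    print(np.bincount(picks, minlength=2) / 1000)   # empirical distribution

Because each robot decides independently from local information, the swarm approximates the expected distribution without any central allocator.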

Relevance:

30.00%

Publisher:

Abstract:

Due to the advancement of information technology in general, and of databases in particular, data storage devices are becoming cheaper and data processing speed is increasing. As a result, organizations tend to store large volumes of data holding great potential information. Decision Support Systems (DSS) try to use the stored data to obtain valuable information for organizations. In this paper, we use both data models and use cases to represent the functionality of data processing in DSS, following Software Engineering processes. We propose a methodology for developing DSS in the analysis phase, with respect to data processing modeling. We have used, as a starting point, a data model adapted to the semantics involved in multidimensional databases, or data warehouses (DW). We have also taken an algorithm that provides us with all the possible ways to automatically cross-check multidimensional model data. Using the aforementioned, we propose diagrams and descriptions of use cases, which can be considered patterns representing the DSS functionality with regard to processing the DW data on which the DSS are based. We highlight the reusability and automation benefits that can be achieved, and we believe this study can serve as a guide for the development of DSS.
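
One hedged reading of "all the possible ways to cross multidimensional data" is that every subset of the cube's dimensions defines one aggregation (one way to group the facts). A minimal Python sketch under that assumption, with hypothetical dimension names:

    from itertools import chain, combinations

    dimensions = ["time", "product", "region"]   # hypothetical DW dimensions

    def groupings(dims):
        # Power set of dimensions: every group-by combination of the cube.
        return chain.from_iterable(combinations(dims, r)
                                   for r in range(len(dims) + 1))

    for g in groupings(dimensions):
        print(g)   # () is the grand total; the full tuple is the base cube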

Relevance:

30.00%

Publisher:

Abstract:

The objective of this study was to propose a multi-criteria optimization and decision-making technique to solve food engineering problems. This technique was demonstrated using experimental data obtained on the osmotic dehydration of carrot cubes in a sodium chloride solution. The Aggregating Functions Approach, the Adaptive Random Search Algorithm, and the Penalty Functions Approach were used in this study to compute the initial set of non-dominated or Pareto-optimal solutions. Multiple non-linear regression analysis was performed on a set of experimental data in order to obtain particular multi-objective functions (responses), namely water loss, solute gain, rehydration ratio, three different colour criteria of the rehydrated product, and sensory evaluation (organoleptic quality). Two multi-criteria decision-making approaches, the Analytic Hierarchy Process (AHP) and the Tabular Method (TM), were used simultaneously to choose the best alternative from the set of non-dominated solutions. The multi-criteria optimization and decision-making technique proposed in this study can facilitate the assessment of criteria weights, giving rise to a fairer, more consistent, and more adequate final compromise solution or food process. This technique can be useful to food scientists in research and education, as well as to engineers involved in the improvement of a variety of food engineering processes.
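
A generic building block for the first stage described above is a Pareto-dominance filter that extracts the non-dominated set. A minimal Python sketch, assuming all objectives are to be minimized and using made-up sample points:

    import numpy as np

    def pareto_front(points):
        pts = np.asarray(points, dtype=float)
        keep = []
        for i, p in enumerate(pts):
            # p is dominated if some q is no worse everywhere and better somewhere.
            dominated = any(np.all(q <= p) and np.any(q < p)
                            for j, q in enumerate(pts) if j != i)
            if not dominated:
                keep.append(i)
        return pts[keep]

    # Example: two competing objectives (e.g., water loss vs. colour change).
    print(pareto_front([[1.0, 4.0], [2.0, 2.0], [3.0, 3.0], [4.0, 1.0]]))

The decision-making stage (AHP or TM) then ranks the surviving non-dominated alternatives using criteria weights.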

Relevance:

30.00%

Publisher:

Abstract:

Automatic blood glucose classification may help specialists to provide a better interpretation of blood glucose data, downloaded directly from patients' glucose meters, and will contribute to the development of decision support systems for gestational diabetes. This paper presents an automatic blood glucose classifier for gestational diabetes that compares six different feature selection methods for two machine learning algorithms: neural networks and decision trees. Three search algorithms, Greedy, Best First, and Genetic, were combined with two different evaluators, CFS and Wrapper, for the feature selection. The study was made with 6080 blood glucose measurements from 25 patients. Decision trees with a feature set selected with the Wrapper evaluator and the Best First search algorithm obtained the best accuracy: 95.92%.
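
A Python sketch in the spirit of the winning combination: a wrapper-style feature search around a decision tree. Synthetic data stands in for the glucose measurements, and scikit-learn's greedy forward selection stands in for the best-first search used in the paper:

    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic stand-in data: 12 candidate features, 4 of them informative.
    X, y = make_classification(n_samples=500, n_features=12,
                               n_informative=4, random_state=0)

    tree = DecisionTreeClassifier(random_state=0)
    # Wrapper evaluation: candidate feature sets are scored by the tree itself.
    selector = SequentialFeatureSelector(tree, n_features_to_select=4, cv=5)
    X_sel = selector.fit_transform(X, y)

    print(cross_val_score(tree, X_sel, y, cv=5).mean())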

Relevance:

30.00%

Publisher:

Abstract:

We apply diffusion strategies to propose a cooperative reinforcement learning algorithm in which agents in a network communicate with their neighbors to improve predictions about their environment. The algorithm is suitable for learning off-policy, even in large state spaces. We provide a mean-square-error performance analysis under constant step sizes. The gain of cooperation, in the form of more stability and less bias and variance in the prediction error, is illustrated in the context of a classical model. We show that the improvement in performance is especially significant when the behavior policy of the agents differs from the target policy under evaluation.
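
A toy Python sketch of the adapt-then-combine diffusion pattern: each agent takes a local constant-step-size update and then averages parameters with its neighbors. The network, features, and targets are invented, and the local update is LMS-style rather than the paper's off-policy recursion:

    import numpy as np

    rng = np.random.default_rng(0)
    A = np.array([[0.50, 0.50, 0.00],    # row-stochastic combination matrix
                  [0.25, 0.50, 0.25],    # (agent 2 talks to both neighbors)
                  [0.00, 0.50, 0.50]])
    w = np.zeros((3, 4))                 # one parameter vector per agent
    mu = 0.05                            # constant step size

    for _ in range(200):
        phi = rng.normal(size=(3, 4))                    # per-agent features
        d = phi @ np.array([1.0, -2.0, 0.5, 0.0])        # common target...
        d += 0.1 * rng.normal(size=3)                    # ...plus noise
        err = d - np.sum(phi * w, axis=1)
        psi = w + mu * err[:, None] * phi                # adapt (local step)
        w = A @ psi                                      # combine (diffusion)

    print(w.round(2))   # all agents converge near the common parameter

The combine step is what yields the stability and variance reduction highlighted in the abstract: each agent's estimate is smoothed by its neighborhood.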

Relevance:

30.00%

Publisher:

Abstract:

The global economic structure, with its decentralized production and the consequent increase in freight traffic all over the world, creates considerable problems and challenges for the freight transport sector. This situation has led shipping to become the most suitable and cheapest way to transport goods. Thus, ports are configured as nodes of critical importance in the logistics supply chain, as a link between two transport systems, sea and land. The increase in activity at seaports is producing three undesirable effects: increasing road congestion, lack of open space in port installations, and a significant environmental impact on seaports. These adverse effects can be mitigated by moving part of the activity inland. Implementation of dry ports is a possible solution and would also provide an opportunity to strengthen intermodal solutions as part of an integrated and more sustainable transport chain, acting as a link between road and railway networks. In this sense, implementation of dry ports allows the separation of the links of the transport chain, thus facilitating the shortest possible routes for the lowest-capacity and most polluting means of transport. Thus, the decision of where to locate a dry port demands a thorough analysis of the whole logistics supply chain, with the objective of transferring the largest possible volume of goods from road to more energy-efficient means of transport, like rail or short-sea shipping, that are less harmful to the environment. However, the decision of where to locate a dry port must also ensure the sustainability of the site. Thus, the main goal of this article is to research the variables influencing the sustainability of dry port locations and how this sustainability can be evaluated. With this objective, in this paper we present a methodology for assessing the sustainability of locations through the use of Multi-Criteria Decision Analysis (MCDA) and Bayesian Networks (BNs). MCDA is used as a way to establish a scoring, whilst BNs were chosen to eliminate arbitrariness in setting the weightings, using a technique that allows us to prioritize each variable according to the relationships established in the set of variables. In order to determine the relationships between all the variables involved in the decision, giving us the importance of each factor and variable, we applied the K2 BN learning algorithm. To obtain the scores of each variable, we used complete cartography analysed with ArcGIS. Recognising that setting the most appropriate location for a dry port is a geographical multidisciplinary problem, with significant economic, social, and environmental implications, we consider 41 variables (grouped into 17 factors) that respond to this need. As a case study, the sustainability of all 10 existing dry ports in Spain has been evaluated. In this set of logistics platforms, we found that the most important variables for achieving sustainability are those related to environmental protection, so the sustainability of the locations requires great respect for the natural environment and the urban environment in which they are framed.
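
A minimal Python sketch of the MCDA scoring step: candidate locations are ranked by a weighted sum of normalized variable scores. The variable names, scores, and weights below are invented; in the paper the weights come from the K2-learned Bayesian network rather than being set by hand:

    weights = {"env_protection": 0.40, "rail_access": 0.35, "land_cost": 0.25}
    locations = {
        "site_A": {"env_protection": 0.9, "rail_access": 0.6, "land_cost": 0.4},
        "site_B": {"env_protection": 0.5, "rail_access": 0.8, "land_cost": 0.7},
    }

    def score(loc_scores):
        # Weighted sum over normalized (0-1) variable scores.
        return sum(weights[k] * v for k, v in loc_scores.items())

    ranking = {name: round(score(s), 3) for name, s in locations.items()}
    print(ranking, "->", max(ranking, key=ranking.get))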

Relevance:

30.00%

Publisher:

Abstract:

We present an extension of the logic outer-approximation algorithm for dealing with disjunctive discrete-continuous optimal control problems whose dynamic behavior is modeled in terms of differential-algebraic equations. Although the proposed algorithm can be applied to a wide variety of discrete-continuous optimal control problems, we are mainly interested in problems where disjunctions are also present. Disjunctions are included to take into account only certain parts of the underlying model, which become relevant under some processing conditions. By doing so, the numerical robustness of the optimization algorithm improves, since those parts of the model that are not active are discarded, leading to a reduced-size problem and avoiding potential model singularities. We test the proposed algorithm on three examples with different complex dynamic behavior. In all the case studies, the number of iterations and the computational effort required to obtain the optimal solutions are modest, and the solutions are relatively easy to find.
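
A tiny Python sketch of the disjunctive modelling point, not of the outer-approximation algorithm itself: only the active branch of a disjunction is evaluated, so inactive (possibly singular) model parts are discarded instead of being forced to hold everywhere. The model terms are invented for illustration:

    import math

    def stage_model(x, regime_a_active):
        if regime_a_active:
            return x ** 2 - 2 * x        # model part valid under regime A
        return 10 * math.log(x)          # regime B part; singular at x <= 0

    # With the disjunction, x = 0 is perfectly safe while regime B is inactive;
    # a formulation that evaluated both branches would hit log(0) here.
    print(stage_model(0.0, regime_a_active=True))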

Relevance:

30.00%

Publisher:

Abstract:

Outliers are objects that show abnormal behavior with respect to their context or that have unexpected values in some of their parameters. In decision-making processes, information quality is of the utmost importance. In specific applications, an outlying data element may represent an important deviation in a production process or a damaged sensor. Therefore, the ability to detect these elements could make the difference between making a correct and an incorrect decision. This task is complicated by the large sizes of typical databases. Because of their importance when searching large volumes of data, researchers pay special attention to the development of efficient outlier detection techniques. This article presents a computationally efficient algorithm for the detection of outliers in large volumes of information. The proposal is based on an extension of the mathematical framework upon which the basic theory of outlier detection, founded on Rough Set Theory, has been constructed. From this starting point, current problems are analyzed; a detection method is proposed, along with a computational algorithm that allows outlier detection tasks to be performed with almost-linear complexity. To illustrate its viability, the results of applying the outlier-detection algorithm to the concrete example of a large database are presented.
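
For orientation only, a generic distance-based outlier score in Python: points far from their nearest neighbors score high. This is a common stand-in illustration, not the paper's rough-set-based method, and its naive O(n^2) distance matrix does not reproduce the almost-linear complexity claimed above:

    import numpy as np

    def outlier_scores(X, k=3):
        X = np.asarray(X, dtype=float)
        # Pairwise distances; row i holds distances from point i to all points.
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        d.sort(axis=1)
        return d[:, 1:k + 1].mean(axis=1)   # mean distance to k nearest neighbors

    data = np.array([[0, 0], [0.1, 0.2], [0.2, 0.1], [5, 5]])
    print(outlier_scores(data).round(2))    # the last point scores highest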

Relevance:

30.00%

Publisher:

Abstract:

Power systems are large-scale nonlinear systems with high complexity. Various optimization techniques and expert systems have been used in power system planning. However, there are always some factors that cannot be quantified, modeled, or even expressed by expert systems. Moreover, such planning problems are often large-scale optimization problems. Although computational algorithms capable of handling high-dimensional problems can be used, the computational costs are still very high. To address these problems, this paper explores the efficiency and effectiveness of combining mathematical algorithms with human intelligence. It has been discovered that humans can join the decision-making process through cognitive feedback. Based on cognitive feedback and genetic algorithms, a new algorithm called the cognitive genetic algorithm is presented. This algorithm can clarify and extract human cognition. As an important application of this cognitive genetic algorithm, a practical decision method for power distribution system planning is proposed. By using this decision method, optimal results that satisfy human expertise can be obtained, and the limitations of human experts can be minimized at the same time.
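
A skeletal Python sketch of one way cognitive feedback could enter a genetic algorithm's loop: the selection fitness blends a numeric objective with a human rating callback (simulated below). The objective, rating rule, and blending weight are all invented; the paper's mechanism for extracting cognition is richer than this:

    import random

    random.seed(0)

    def objective(x):                 # invented planning surrogate to maximize
        return -(x - 3.7) ** 2

    def human_rating(x):              # stand-in for expert cognitive feedback
        return 1.0 if 3.0 <= x <= 4.0 else 0.0

    pop = [random.uniform(0, 10) for _ in range(20)]
    for _ in range(30):
        # Selection uses objective plus a weighted human-feedback bonus.
        ranked = sorted(pop, key=lambda x: objective(x) + 2.0 * human_rating(x),
                        reverse=True)
        parents = ranked[:10]
        # Next generation: mutated copies of the selected parents.
        pop = [random.choice(parents) + random.gauss(0, 0.3) for _ in range(20)]

    print(round(max(pop, key=objective), 2))   # converges near the optimum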

Relevance:

30.00%

Publisher:

Abstract:

Refraction simulators used for undergraduate training at Aston University did not realistically reflect variations in the relationship between vision and ametropia, because they used an algorithm, taken from the research literature, that strictly applied only to myopes or older hyperopes and did not factor in age and pupil diameter. The aim of this study was to generate new algorithms that overcame these limitations. Clinical data were collected from the healthy right eyes of 873 white subjects aged between 20 and 70 years. Vision and refractive error were recorded, along with age and pupil diameter. Re-examination of 34 subjects enabled the calculation of coefficients of repeatability. The study population was slightly biased towards females and included many contact lens wearers. Sex and contact lens wear were therefore recorded in order to determine whether these might influence the findings. In addition, iris colour and cylinder axis orientation were recorded, as these might also be influential. A novel Blur Sensitivity Ratio (BSR) was derived by dividing vision (expressed as minimum angle of resolution) by refractive error (expressed as a scalar vector, U). Alteration of the scalar vector to account for additional vision reduction due to oblique cylinder axes was not found to be useful. Decision tree analysis showed that sex, contact lens wear, iris colour, and cylinder axis orientation did not influence the BSR. The following algorithms arose from two stepwise multiple linear regressions:

BSR (myopes) = 1.13 + (0.24 x pupil diameter) + (0.14 x U)

BSR (hyperopes) = (0.11 x pupil diameter) + (0.03 x age) - 0.22

Together, these algorithms accounted for 84% of the observed variance. They showed that pupil diameter influenced vision in both forms of ametropia. They also showed the age-related decline in the ability to accommodate in order to overcome reduced vision in hyperopia.
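
A direct Python transcription of the two reported regressions. Since BSR is vision (minimum angle of resolution) divided by the blur vector U, predicted vision is BSR x U; the example inputs are invented for illustration:

    def bsr_myope(pupil_diameter_mm, U):
        return 1.13 + 0.24 * pupil_diameter_mm + 0.14 * U

    def bsr_hyperope(pupil_diameter_mm, age_years):
        return 0.11 * pupil_diameter_mm + 0.03 * age_years - 0.22

    # Worked example with invented inputs: 2.00 D of myopic blur (U = 2.0)
    # and a 4 mm pupil.
    U = 2.0
    bsr = bsr_myope(4.0, U)
    print(round(bsr, 2), "predicted MAR:", round(bsr * U, 2))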