899 results for "the SIMPLE algorithm"


Relevance: 90.00%

Abstract:

ABSTRACT Title of Dissertation: A BETTER PLACE TO BE: REPUBLICANISM AS AN ALTERNATIVE TO THE AUTHORITARIANISM-DEMOCRACY DICHOTOMY Christopher Ronald Binetti, Doctor of Philosophy, 2016 Dissertation directed by: Dr. Charles Frederick Alford, Department of Government and Politics In this dissertation, I argue that for modern and ancient regimes alike, the simple dichotomy between democracies and autocracies/dictatorships is both factually wrong and problematic for policy purposes. It is factually wrong because regimes between the two opposite regime types exist, and it is problematic because the either/or dichotomy leads to extreme thinking about nation-building in places like Afghanistan. In planning for Afghanistan, the argument is that either we can quickly nation-build it into a liberal democracy or else we must leave it in the hands of a despotic dictator. This is a false choice created both by a faulty categorization of regime types and, most importantly, by a failure to understand history. History shows us that the republic is a regime type that defies the authoritarianism-democracy dichotomy. A republic, by my definition, is a non-dominating regime, characterized by a (relative) lack of domination by any one interest group or actor, mostly non-violent competition for power among various interest groups/factions, the ability of factions/interest groups/individual actors to continue to legitimately play the political game even after electoral or issue-area defeat, and some measure of effectiveness. Thus, a republic is a system of government that has institutions, laws, norms, attitudes, and beliefs that minimize the violation of the rule of law and the monopolization of power by one individual or group as much as possible. These norms, laws, attitudes, and beliefs are essential to the republican system in that they make the institutions that check and balance power work. My four cases are Assyria, Persia, Venice and Florence. Assyria and Persia are ancient regimes: the former was a republic and then became the frightening opposite of a republic, while the latter was a good republic for a long time but had effectiveness issues towards the end. Venice is a classical example of a medieval or early modern republic, which was very inspirational to Madison and others in building republican America. Florence is an example of a medieval republic that fell to despotism, as immortalized in Machiavelli's writings. In all of these examples, I test certain alternative hypotheses as well as my own.

Relevance: 90.00%

Abstract:

During our earlier research, it was recognised that in order to be successful with an indirect genetic algorithm approach using a decoder, the decoder has to strike a balance between being an optimiser in its own right and finding feasible solutions. Previously this balance was achieved manually. Here we extend this by presenting an automated approach in which the genetic algorithm itself, while solving the problem, sets the weights that balance the components. Subsequently we were able to solve a complex and non-linear scheduling problem better than with a standard direct genetic algorithm implementation.
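
A minimal illustrative sketch of this idea (hypothetical names and a toy objective, not the authors' implementation): each chromosome carries both the permutation the decoder works from and the weights that balance the decoder's components, so the genetic algorithm tunes the balance while it searches.

```python
import random

# Hypothetical sketch: each individual carries a permutation of items plus decoder
# weights w that trade cost against feasibility; the GA evolves both together.
def decode(perm, w, cost, feasibility):
    """Greedy decoder: accept items whose weighted score clears a threshold."""
    schedule = []
    for item in perm:
        score = w[0] * cost[item] - w[1] * feasibility[item]
        if score < 0.5:                      # illustrative acceptance rule
            schedule.append(item)
    return schedule

def fitness(ind, cost, feasibility):
    schedule = decode(ind["perm"], ind["w"], cost, feasibility)
    return sum(feasibility[i] - cost[i] for i in schedule)

def evolve(n_items=20, pop_size=30, gens=50):
    cost = [random.random() for _ in range(n_items)]
    feas = [random.random() for _ in range(n_items)]
    pop = [{"perm": random.sample(range(n_items), n_items),
            "w": [random.random(), random.random()]} for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda ind: fitness(ind, cost, feas), reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        for parent in parents:
            child = {"perm": parent["perm"][:], "w": parent["w"][:]}
            i, j = random.sample(range(n_items), 2)          # mutate the permutation
            child["perm"][i], child["perm"][j] = child["perm"][j], child["perm"][i]
            child["w"][random.randrange(2)] += random.uniform(-0.1, 0.1)  # mutate a weight
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda ind: fitness(ind, cost, feas))

print(evolve()["w"])    # the balance weights the GA settled on
```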

Relevance: 90.00%

Abstract:

Schedules can be built in a similar way to a human scheduler by using a set of rules that involve domain knowledge. This paper presents an Estimation of Distribution Algorithm (EDA) for the nurse scheduling problem, which involves choosing a suitable scheduling rule from a set for the assignment of each nurse. Unlike previous work that used Genetic Algorithms (GAs) to implement implicit learning, the learning in the proposed algorithm is explicit, i.e. we identify and mix building blocks directly. The EDA is applied to implement such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance for each variable is generated by using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, a new rule string has been obtained. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed approach might be suitable for other scheduling problems.
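
The sampling mechanism can be illustrated with a deliberately simplified sketch, assuming a chain-structured network and a placeholder fitness function rather than the paper's full Bayesian network and nurse scheduling objective:

```python
import random
from collections import defaultdict

# Illustrative EDA sketch (not the paper's code): the rule chosen for each nurse is
# modelled as a node in a chain-structured network, re-estimated each generation.
N_NURSES, N_RULES, POP, ELITE = 10, 4, 60, 20

def estimate(elite):
    """Conditional counts for P(rule_i | rule_{i-1}) from promising rule strings."""
    counts = [defaultdict(lambda: defaultdict(int)) for _ in range(N_NURSES)]
    for s in elite:
        prev = None
        for i, r in enumerate(s):
            counts[i][prev][r] += 1
            prev = r
    return counts

def sample(counts):
    """Generate a new rule string variable by variable from the network."""
    s, prev = [], None
    for i in range(N_NURSES):
        dist = counts[i][prev]
        total = sum(dist.values())
        if total == 0:
            r = random.randrange(N_RULES)          # unseen context: fall back to uniform
        else:
            r = random.choices(list(dist), weights=list(dist.values()))[0]
        s.append(r)
        prev = r
    return s

def fitness(s):                                    # placeholder objective only
    return -sum(abs(s[i] - s[i - 1]) for i in range(1, len(s)))

pop = [[random.randrange(N_RULES) for _ in range(N_NURSES)] for _ in range(POP)]
for _ in range(30):
    pop.sort(key=fitness, reverse=True)
    counts = estimate(pop[:ELITE])                 # learn from promising strings
    pop = pop[:ELITE] + [sample(counts) for _ in range(POP - ELITE)]
print(max(pop, key=fitness))
```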

Relevance: 90.00%

Abstract:

We build a system to support search and visualization on heterogeneous information networks. We first build our system on a specialized heterogeneous information network: DBLP. The system aims to give people, especially computer science researchers, a better understanding of, and user experience with, academic information networks. We then extend our system to the Web. Our results are much more intuitive and informative than the simple top-k blue links from traditional search engines, and they bring more meaningful structural results with correlated entities. We also investigate the ranking algorithm, and we show that personalized PageRank and the proposed Hetero-personalized PageRank outperform TF-IDF ranking and a mixture of TF-IDF and authority ranking. Our work opens several directions for future research.
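
As a point of reference for the ranking discussion, the following is a minimal power-iteration sketch of standard personalized PageRank on a toy graph; the Hetero-personalized variant proposed in the work is not reproduced here.

```python
import numpy as np

# Minimal personalized PageRank by power iteration (illustrative; not the paper's
# Hetero-personalized variant). `graph` maps each node to the nodes it links to.
def personalized_pagerank(graph, seeds, alpha=0.85, iters=100):
    nodes = sorted(graph)
    idx = {n: i for i, n in enumerate(nodes)}
    n = len(nodes)
    M = np.zeros((n, n))                      # column-stochastic transition matrix
    for u, outs in graph.items():
        for v in outs:
            M[idx[v], idx[u]] = 1.0 / len(outs)
    p = np.zeros(n)                           # restart (personalization) vector
    for s in seeds:
        p[idx[s]] = 1.0 / len(seeds)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = alpha * (M @ r) + (1 - alpha) * p
        r /= r.sum()                          # renormalise (handles dangling nodes crudely)
    return dict(zip(nodes, r))

# Toy citation-style graph: the ranking is biased toward the seed node "a".
g = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["a", "c"]}
print(personalized_pagerank(g, seeds=["a"]))
```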

Relevance: 90.00%

Abstract:

Introduction: Prediction of soft tissue changes following orthognathic surgery has been frequently attempted in the past decades. It has gradually progressed from the classic "cut and paste" of photographs to computer-assisted 2D surgical prediction planning; finally, comprehensive 3D surgical planning was introduced to help surgeons and patients decide on the magnitude and direction of surgical movements, as well as the type of surgery to be considered for the correction of facial dysmorphology. A wealth of experience was gained and a large body of published literature is available, which has augmented the knowledge of facial soft tissue behaviour and helped to improve the ability to closely simulate facial changes following orthognathic surgery. This was particularly noticeable following the introduction of three-dimensional imaging into medical research and clinical applications. Several approaches have been considered to mathematically predict soft tissue changes in three dimensions following orthognathic surgery. The most common are the finite element model and the mass tensor model. These were developed into software packages which are currently used in clinical practice. In general, these methods produce an acceptable level of prediction accuracy of soft tissue changes following orthognathic surgery. Studies, however, have shown limited prediction accuracy at specific regions of the face, in particular the areas around the lips.

Aims: The aim of this project is to conduct a comprehensive assessment of hard and soft tissue changes following orthognathic surgery and to introduce a new method for the prediction of facial soft tissue changes.

Methodology: The study was carried out on the pre- and post-operative CBCT images of 100 patients who received their orthognathic surgery treatment at Glasgow Dental Hospital and School, Glasgow, UK. Three groups of patients were included in the analysis: patients who underwent Le Fort I maxillary advancement surgery, bilateral sagittal split mandibular advancement surgery, or bimaxillary advancement surgery. A generic facial mesh was used to standardise the information obtained from each individual patient's facial image, and principal component analysis (PCA) was applied to interpolate the correlations between the skeletal surgical displacement and the resultant soft tissue changes. The identified relationship between hard tissue and soft tissue was then applied to a new set of preoperative 3D facial images, and the predicted results were compared to the actual surgical changes measured from their post-operative 3D facial images. A set of validation studies was conducted, including:

• Comparison between voxel-based registration and surface registration to analyse changes following orthognathic surgery. The results showed there was no statistically significant difference between the two methods. Voxel-based registration, however, showed more reliability as it preserved the link between the soft tissue and skeletal structures of the face during the image registration process. Accordingly, voxel-based registration was the method of choice for superimposition of the pre- and post-operative images. The result of this study was published in a refereed journal.

• Direct DICOM slice landmarking: a novel technique to quantify the direction and magnitude of skeletal surgical movements. This method represents a new approach to quantifying maxillary and mandibular surgical displacement in three dimensions. The technique involves measuring the distance of corresponding landmarks, digitized directly on DICOM image slices, in relation to three-dimensional reference planes. The accuracy of the measurements was assessed against a set of "gold standard" measurements extracted from simulated model surgery. The results confirmed the accuracy of the method to within 0.34 mm. Therefore, the method was applied in this study. The results of this validation were published in a peer-refereed journal.

• The use of a generic mesh to assess soft tissue changes using stereophotogrammetry. The generic facial mesh played a major role in the soft tissue dense correspondence analysis. The conformed generic mesh represented the geometrical information of the individual facial mesh onto which it was conformed (elastically deformed). Therefore, the accuracy of the generic mesh conformation is essential to guarantee an accurate replica of the individual facial characteristics. The results showed an acceptable overall mean conformation error of 1 mm. The results of this study were accepted for publication in a peer-refereed scientific journal.

Skeletal tissue analysis was performed using the validated direct DICOM slice landmarking method, while soft tissue analysis was performed using dense correspondence analysis. The soft tissue analysis was novel and produced a comprehensive description of facial changes in response to orthognathic surgery. The results were accepted for publication in a refereed scientific journal. The main soft tissue changes associated with Le Fort I surgery were advancement at the midface region combined with widening of the paranasal, upper lip and nostril regions. Minor changes were noticed at the tip of the nose and the oral commissures. The main soft tissue changes associated with mandibular advancement surgery were advancement and downward displacement of the chin and lower lip regions; limited widening of the lower lip and slight reversion of the lower lip vermilion, combined with minimal backward displacement of the upper lip, were also recorded. Minimal changes were observed at the oral commissures. The main soft tissue changes associated with bimaxillary advancement surgery were generalized advancement of the middle and lower thirds of the face combined with widening of the paranasal, upper lip and nostril regions. In the Le Fort I cases, the correlation between the changes of the facial soft tissue and the skeletal surgical movements was assessed using PCA. A statistical method known as leave-one-out cross-validation was applied to the 30 cases which had the Le Fort I osteotomy surgical procedure, to make effective use of the data for the prediction algorithm. The prediction accuracy of soft tissue changes showed a mean error ranging from 0.0006 mm ± 0.582 at the nose region to −0.0316 mm ± 2.1996 at the other facial regions.
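
To make the prediction step concrete, here is a hedged sketch, on synthetic data with hypothetical dimensions, of mapping skeletal displacement vectors to soft tissue changes through a PCA-reduced linear model evaluated with leave-one-out cross-validation; it illustrates the workflow only, not the thesis's actual model.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

# Illustrative sketch only (synthetic data, hypothetical dimensions): skeletal
# displacements X are mapped to soft tissue changes Y through a PCA-reduced
# linear model, assessed with leave-one-out cross-validation.
rng = np.random.default_rng(0)
n_cases, n_skel, n_soft = 30, 12, 300           # e.g. 30 Le Fort I cases
X = rng.normal(size=(n_cases, n_skel))           # surgical displacement vectors
Y = X @ rng.normal(size=(n_skel, n_soft)) + 0.1 * rng.normal(size=(n_cases, n_soft))

errors = []
for train, test in LeaveOneOut().split(X):
    pca = PCA(n_components=10).fit(Y[train])     # compact soft tissue basis
    coeffs = pca.transform(Y[train])
    model = LinearRegression().fit(X[train], coeffs)
    Y_pred = pca.inverse_transform(model.predict(X[test]))
    errors.append(np.mean(np.abs(Y_pred - Y[test])))
print(f"mean absolute prediction error: {np.mean(errors):.4f} (arbitrary units)")
```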

Relevance: 90.00%

Abstract:

This paper presents a new type of genetic algorithm for the set covering problem. It differs from previous evolutionary approaches first because it is an indirect algorithm, i.e. the actual solutions are found by an external decoder function. The genetic algorithm itself provides this decoder with permutations of the solution variables and other parameters. Second, it will be shown that results can be further improved by adding another indirect optimisation layer. The decoder will not directly seek out low-cost solutions but instead aims for good exploitable solutions. These are then post-optimised by a hill-climbing algorithm. Although seemingly more complicated, this three-stage approach is shown to have advantages in terms of solution quality, speed and adaptability to new types of problems over more direct approaches. Extensive computational results are presented and compared to the latest evolutionary and other heuristic approaches on the same data instances.
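
A hedged sketch of the post-optimisation stage alone, with illustrative data structures and move operators that are not taken from the paper: a hill climber that tries to drop redundant columns from the decoder's cover, or swap one column for another, while keeping every row covered.

```python
import random

# Illustrative hill climber for post-optimising a set cover (hypothetical
# structures): drop a column if the cover survives, otherwise try a 1-for-1 swap.
def covers(solution, rows, columns):
    covered = set()
    for c in solution:
        covered |= columns[c]
    return covered >= rows

def hill_climb(solution, rows, columns, cost, iters=1000):
    best = set(solution)
    for _ in range(iters):
        trial = set(best)
        c = random.choice(list(trial))
        trial.remove(c)                          # try dropping a column outright
        if not covers(trial, rows, columns):
            swap = random.choice(list(set(columns) - best))
            trial.add(swap)                      # otherwise try a 1-for-1 swap
        if covers(trial, rows, columns) and \
           sum(cost[x] for x in trial) <= sum(cost[x] for x in best):
            best = trial
    return best

# Toy instance: 4 rows, 5 candidate columns; start from the decoder's cover.
rows = {1, 2, 3, 4}
columns = {"a": {1, 2}, "b": {2, 3}, "c": {3, 4}, "d": {1, 4}, "e": {1, 2, 3, 4}}
cost = {"a": 2, "b": 2, "c": 2, "d": 2, "e": 5}
print(hill_climb({"a", "b", "c", "d"}, rows, columns, cost))
```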

Relevance: 90.00%

Abstract:

Schedules can be built in a similar way to a human scheduler by using a set of rules that involve domain knowledge. This paper presents an Estimation of Distribution Algorithm (EDA) for the nurse scheduling problem, which involves choosing a suitable scheduling rule from a set for the assignment of each nurse. Unlike previous work that used Genetic Algorithms (GAs) to implement implicit learning, the learning in the proposed algorithm is explicit, i.e. we identify and mix building blocks directly. The EDA is applied to implement such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance for each variable is generated by using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, a new rule string has been obtained. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed approach might be suitable for other scheduling problems.

Relevance: 90.00%

Abstract:

Our research has shown that schedules can be built mimicking a human scheduler by using a set of rules that involve domain knowledge. This chapter presents a Bayesian Optimization Algorithm (BOA) for the nurse scheduling problem, which chooses suitable scheduling rules from a set for each nurse's assignment. Based on the idea of using probabilistic models, the BOA builds a Bayesian network over the set of promising solutions and samples this network to generate new candidate solutions. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed algorithm may be suitable for other scheduling problems.

Relevance: 90.00%

Abstract:

The influence of uniaxial stress upon three types of imperfections occurring in the alkali halide crystal lattice has been investigated. The imperfections are the interstitial atom, the interstitial ion, and the negative ion vacancy. The interstitial atom, or H center, is a paraelastic defect which assumes a preferential crystal orientation in the field of an external mechanical stress. From the results of the reorientation kinetics studies, it was possible to show that H centers are not stable in the KBr crystal lattice above 25 K. At temperatures higher than 25 K, the H centers are transformed into two new paraelastic defects, H(ii) and H(iii), possessing the same optical absorption band as the H center but differing both from the H' center and from each other in their reorientation kinetics. A study of the wavelength dependence of the H, H(ii), and VI(Na+) centers showed the existence of three similarly polarized transitions for each of these defects. One of these transitions, located at 230 nm for all of the defects studied, was determined to be too high in energy to be explained by the simple X2− level scheme. In addition, a comparison of various properties of the four defects indicates that the last three can be described as perturbed H centers. Dichroism measurements, performed as a function of temperature and wavelength on the 230-nm I band in KBr, showed this band to be a composite of a band at 234 nm due to the I center and a band at 230 nm attributed to the H center. The I center dichroism was isolated and was observed under various experimental conditions. The results of these observations are consistent with a body-centered model for the I center in which the I-center absorption band is attributed to the excitation of a p-like electron on the interstitial Br− ion. Similar measurements were also performed on the α band in KI. The α-band dichroism measurements were found to be consistent with an electronic transition from an s-like ground state to a p-like excited state, indicating that the α center is best described as a quasi-molecule.

Relevance: 90.00%

Abstract:

libtissue is a software system for implementing and testing AIS algorithms on real-world computer security problems. AIS algorithms are implemented as a collection of cells, antigen and signals interacting within a tissue compartment. Input data to the tissue comes in the form of real-time events generated by sensors monitoring a system under surveillance, and cells are actively able to affect the monitored system through response mechanisms. libtissue is being used by researchers on a project at the University of Nottingham to explore the application of a range of immune-inspired algorithms to problems in intrusion detection. This talk describes the architecture and design of libtissue, along with the implementation of a simple algorithm and its application to a computer security problem.
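
The sketch below is only a hypothetical Python illustration of the tissue/cell/antigen/signal arrangement described above, with invented class names and a naive detection rule; it is not the libtissue API.

```python
import random

# Hypothetical illustration of the described architecture: a tissue compartment
# holds cells that read antigen and signal values produced by sensors and may
# trigger a response on the monitored system.
class Cell:
    def __init__(self, threshold=0.7):
        self.threshold = threshold

    def step(self, antigen, signals, respond):
        # Naive detector: respond to antigen seen while the "danger" signal is high.
        if signals.get("danger", 0.0) > self.threshold:
            respond(antigen)

class Tissue:
    def __init__(self, n_cells=10):
        self.cells = [Cell() for _ in range(n_cells)]

    def update(self, event, respond):
        antigen, signals = event["antigen"], event["signals"]
        for cell in self.cells:
            cell.step(antigen, signals, respond)

# Events roughly as a sensor monitoring system calls might emit them.
tissue = Tissue()
events = [{"antigen": f"syscall-{i}", "signals": {"danger": random.random()}}
          for i in range(5)]
for e in events:
    tissue.update(e, respond=lambda a: print("alert:", a))
```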

Relevance: 90.00%

Abstract:

An indirect genetic algorithm for the non-unicost set covering problem is presented. The algorithm is a two-stage meta-heuristic, which in the past was successfully applied to similar multiple-choice optimisation problems. The two stages of the algorithm are an ‘indirect’ genetic algorithm and a decoder routine. First, the solutions to the problem are encoded as permutations of the rows to be covered, which are subsequently ordered by the genetic algorithm. Fitness assignment is handled by the decoder, which transforms the permutations into actual solutions to the set covering problem. This is done by exploiting both problem structure and problem-specific information. However, flexibility is retained by a self-adjusting element within the decoder, which allows adjustment both to the data and to stages within the search process. Computational results are presented.
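
One way to picture the decoder stage (illustrative cost heuristic, not the paper's exact decoder): rows are visited in the order supplied by the genetic algorithm, and each still-uncovered row is covered by the column with the best cost per newly covered row.

```python
# Hedged sketch of a row-permutation decoder for set covering (illustrative
# heuristic): visit rows in the GA's order and cover each uncovered row with the
# column offering the lowest cost per newly covered row.
def decode(row_order, columns, cost):
    chosen, covered = [], set()
    for r in row_order:
        if r in covered:
            continue
        candidates = [c for c, rws in columns.items() if r in rws]
        best = min(candidates,
                   key=lambda c: cost[c] / max(1, len(columns[c] - covered)))
        chosen.append(best)
        covered |= columns[best]
    return chosen, sum(cost[c] for c in chosen)

columns = {"a": {1, 2}, "b": {2, 3}, "c": {3, 4}, "d": {1, 4}, "e": {1, 2, 3, 4}}
cost = {"a": 2, "b": 2, "c": 2, "d": 2, "e": 5}
print(decode([3, 1, 4, 2], columns, cost))   # one permutation the GA might supply
```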

Relevance: 90.00%

Abstract:

During our earlier research, it was recognised that in order to be successful with an indirect genetic algorithm approach using a decoder, the decoder has to strike a balance between being an optimiser in its own right and finding feasible solutions. Previously this balance was achieved manually. Here we extend this by presenting an automated approach in which the genetic algorithm itself, while solving the problem, sets the weights that balance the components. Subsequently we were able to solve a complex and non-linear scheduling problem better than with a standard direct genetic algorithm implementation.

Relevance: 90.00%

Abstract:

Unmanned aerial vehicles (UAVs) frequently operate in partially or entirely unknown environments. As the vehicle traverses the environment and detects new obstacles, rapid path replanning is essential to avoid collisions. This thesis presents a new algorithm called Hierarchical D* Lite (HD*), which combines the incremental algorithm D* Lite with a novel hierarchical path planning approach to replan paths sufficiently fast for real-time operation. Unlike current hierarchical planning algorithms, HD* does not require map corrections before planning a new path. Directional cost scale factors, path smoothing, and Catmull-Rom splines are used to ensure the resulting paths are feasible. HD* sacrifices optimality for real-time performance. Its computation time and path quality are dependent on the map size, obstacle density, sensor range, and any restrictions on planning time. For the most complex scenarios tested, HD* found paths within 10% of optimal in under 35 milliseconds.
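
Of the ingredients named above, the spline step is the easiest to illustrate in isolation; the sketch below shows Catmull-Rom smoothing of a short waypoint list and is not the HD* planner itself.

```python
import numpy as np

# Illustrative Catmull-Rom smoothing of a waypoint path; not the HD* planner.
def catmull_rom(points, samples_per_segment=10):
    pts = np.asarray(points, dtype=float)
    pts = np.vstack([pts[0], pts, pts[-1]])   # pad ends so every segment has 4 control points
    path = []
    for i in range(1, len(pts) - 2):
        p0, p1, p2, p3 = pts[i - 1], pts[i], pts[i + 1], pts[i + 2]
        for t in np.linspace(0, 1, samples_per_segment, endpoint=False):
            path.append(0.5 * ((2 * p1) + (-p0 + p2) * t
                               + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                               + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3))
    path.append(pts[-2])                      # end exactly on the final waypoint
    return np.array(path)

# Sparse grid waypoints from a replanner, smoothed into a flyable path.
waypoints = [(0, 0), (2, 1), (4, 4), (7, 5)]
print(catmull_rom(waypoints).shape)
```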

Relevance: 90.00%

Abstract:

A new type of space debris was recently discovered by Schildknecht in near-geosynchronous orbit (GEO). These objects were later identified as exhibiting properties associated with High Area-to-Mass Ratio (HAMR) objects. According to their brightness magnitudes (light curves), high rotation rates and composition properties (albedo, amount of specular and diffuse reflection, colour, etc.), it is thought that these objects are multilayer insulation (MLI). Observations have shown that this debris type is very sensitive to environmental disturbances, particularly solar radiation pressure, because their shapes are easily deformed, leading to changes in the area-to-mass ratio (AMR) over time. This thesis proposes a simple, effective flexible model of the thin, deformable membrane using two different methods. First, the debris is modelled with Finite Element Analysis (FEA) using Bernoulli-Euler beam theory (the "Bernoulli model"). The Bernoulli model is constructed from beam elements consisting of two nodes, each with six degrees of freedom (DoF). The mass of the membrane is distributed over the beam elements. Second, based on multibody dynamics theory, the debris is modelled as a series of lumped masses connected through flexible joints (the "Multibody model"), representing the flexibility of the membrane itself. The mass of the membrane, albeit low, is taken into account through lumped masses at the joints. The dynamic equations for the masses, including the constraints defined by the connecting rigid rods, are derived using fundamental Newtonian mechanics. The physical properties required by both flexible models (membrane density, reflectivity, composition, etc.) are assumed to be those of multilayer insulation. Both flexible membrane models are then propagated, together with classical orbital and attitude equations of motion, near the GEO region to predict the orbital evolution under the perturbations of solar radiation pressure, the Earth's gravity field, luni-solar gravitational fields and the self-shadowing effect. These results are then compared to two rigid body models (cannonball and flat rigid plate). In this investigation, compared with a rigid model, the orbital element evolutions of the flexible models show differences in inclination and secular eccentricity evolution, rapid irregular attitude motion, and an unstable cross-sectional area due to deformation over time. Monte Carlo simulations varying the initial attitude dynamics and deformation angle are then investigated and compared with the rigid models over 100 days. The simulations show that different initial conditions produce distinct orbital motions, which differ significantly from the orbital motions of both rigid models. Furthermore, this thesis presents a methodology to determine the material dynamic properties of thin membranes and validates the deformation of the multibody model with real MLI materials. Experiments are performed in a high vacuum chamber (10⁻⁴ mbar) replicating the space environment. A thin membrane is hinged at one end but free at the other. The first experiment, the free motion experiment, is a free vibration test to determine the damping coefficient and natural frequency of the thin membrane. In this test, the membrane is allowed to fall freely in the chamber with the motion tracked and captured through high-speed video frames. A Kalman filter technique is implemented in the tracking algorithm to reduce noise and increase the tracking accuracy of the oscillating motion.
The second experiment, the forced motion experiment, is performed to determine the deformation characteristics of the object. A high-power spotlight (500–2000 W) is used to illuminate the MLI and the displacements are measured by means of a high-resolution laser sensor. Finite Element Analysis (FEA) and multibody dynamics models of the experimental setups are used to validate the flexible model by comparison with the experimental results for displacements and natural frequencies.
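
The tracking step lends itself to a small illustration: a constant-velocity Kalman filter smoothing a noisy, synthetically generated oscillation, assuming an arbitrary frame rate and noise levels rather than the experiment's actual values.

```python
import numpy as np

# Illustrative constant-velocity Kalman filter smoothing a noisy tracked position;
# synthetic data, not the experimental pipeline (frame rate and noise are assumed).
dt = 1 / 240.0                                    # assumed high-speed frame interval
F = np.array([[1, dt], [0, 1]])                   # state transition: [position, velocity]
H = np.array([[1.0, 0.0]])                        # only position is measured
Q = 1e-5 * np.eye(2)                              # process noise covariance
R = np.array([[4e-6]])                            # measurement noise covariance

x = np.zeros((2, 1))                              # state estimate
P = np.eye(2)                                     # estimate covariance

t = np.arange(0, 1, dt)
truth = 0.01 * np.sin(2 * np.pi * 5 * t)          # 5 Hz oscillation of the membrane tip
measurements = truth + 0.002 * np.random.default_rng(1).normal(size=t.size)

filtered = []
for z in measurements:
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    y = np.array([[z]]) - H @ x                   # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x, P = x + K @ y, (np.eye(2) - K @ H) @ P     # update
    filtered.append(float(x[0, 0]))

print(f"RMS error, raw: {np.sqrt(np.mean((measurements - truth) ** 2)):.5f}, "
      f"filtered: {np.sqrt(np.mean((np.array(filtered) - truth) ** 2)):.5f}")
```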

Relevance: 90.00%

Abstract:

The purpose of this work is to demonstrate and assess a simple algorithm for automatic estimation of the most salient region in an image, which has possible applications in computer vision. The algorithm uses the connection between color dissimilarities in the image and the image's most salient region. The algorithm also avoids using image priors. Pixel dissimilarity is an informal function of the distance of a specific pixel's color to other pixels' colors in an image. We examine the relation between pixel color dissimilarity and salient region detection on the MSRA1K image dataset. We propose a simple algorithm for salient region detection through random pixel color dissimilarity. We define dissimilarity by accumulating the distance between each pixel and a sample of n other random pixels, in the CIELAB color space. An important result is that random dissimilarity between each pixel and just one other pixel (n = 1) is enough to create adequate saliency maps when combined with a median filter, with competitive average performance compared with other related methods in the saliency detection research field. The assessment was performed by means of precision-recall curves. This idea is inspired by the human attention mechanism, which is able to choose a few specific regions to focus on, a biological system that the computer vision community aims to emulate. We also review some of the history of research on selective attention.
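
A short sketch of the described approach (n = 1 random comparison per pixel followed by a median filter), with illustrative parameter choices and a synthetic test image rather than the paper's exact setup:

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage import color

# Sketch of the described idea: accumulate CIELAB distances to n random pixels,
# then smooth with a median filter. Parameter choices here are illustrative only.
def random_dissimilarity_saliency(rgb, n=1, seed=0):
    rng = np.random.default_rng(seed)
    lab = color.rgb2lab(rgb)                                 # work in CIELAB space
    h, w, _ = lab.shape
    flat = lab.reshape(-1, 3)
    saliency = np.zeros(h * w)
    for _ in range(n):
        others = flat[rng.integers(0, h * w, size=h * w)]    # one random pixel per pixel
        saliency += np.linalg.norm(flat - others, axis=1)    # accumulate colour distance
    smoothed = median_filter(saliency.reshape(h, w) / n, size=9)
    return smoothed / smoothed.max()

# Synthetic test image: a red square on a grey background should dominate the map.
img = np.full((120, 120, 3), 0.5)
img[40:80, 40:80] = [0.9, 0.1, 0.1]
sal = random_dissimilarity_saliency(img)
print(sal[60, 60] > sal[10, 10])                             # expect True
```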