933 results for Low Autocorrelation Binary Sequence Problem


Relevance:

30.00%

Publisher:

Abstract:

Background: The aim of this study was to evaluate a fast Gradient Spin Echo technique (GraSE) for cardiac T2-mapping, combining robust estimation of T2 relaxation times with short acquisition times. The sequence was compared against two previously introduced T2-mapping techniques in a phantom and in vivo. Methods: Phantom experiments were performed at 1.5 T using a commercially available cylindrical gel phantom. Three T2-mapping techniques were compared: a Multi Echo Spin Echo (MESE; serving as a reference), a T2-prepared balanced Steady State Free Precession (T2prep) and a Gradient Spin Echo sequence. For the subsequent in vivo study, 12 healthy volunteers were examined on a clinical 1.5 T scanner. The three T2-mapping sequences were performed at three short-axis slices. Global myocardial T2 relaxation times were calculated and statistical analysis was performed. To assess pixel-by-pixel homogeneity, the number of segments showing an inhomogeneous T2 value distribution, defined by a pixel SD exceeding 20% of the corresponding observed T2 time, was counted. Results: Phantom experiments showed a greater difference of measured T2 values between T2prep and MESE than between GraSE and MESE, especially for species with low T1 values. Both GraSE and T2prep overestimated T2 times compared to MESE. In vivo, significant differences between mean T2 times were observed. In general, T2prep yielded the lowest (52.4 +/- 2.8 ms) and GraSE the highest T2 estimates (59.3 +/- 4.0 ms). Analysis of pixel-by-pixel homogeneity revealed the lowest number of segments with an inhomogeneous T2 distribution for GraSE-derived T2 maps. Conclusions: GraSE is a fast and robust sequence that combines advantages of both the MESE and T2prep techniques and promises improved clinical applicability of T2-mapping in the future. Our study revealed significant differences in derived mean T2 values across sequence designs; therefore, a systematic comparison of different cardiac T2-mapping sequences and the establishment of dedicated reference values should be the goal of future studies.
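The segment-homogeneity criterion described in the Methods (a pixel SD exceeding 20% of the segment's observed T2 time) can be sketched as follows; the function name and the sample T2 values are illustrative, not taken from the study:

```python
from statistics import mean, pstdev

def is_inhomogeneous(pixel_t2_values, threshold=0.20):
    """Flag a myocardial segment as inhomogeneous when the pixel-wise
    SD exceeds `threshold` (20%) of the segment's mean T2 time."""
    segment_t2 = mean(pixel_t2_values)
    return pstdev(pixel_t2_values) > threshold * segment_t2

# A tight cluster of T2 values around 55 ms is homogeneous ...
homogeneous = [54.0, 55.0, 56.0, 55.5, 54.5]
# ... while a widely scattered one is flagged.
scattered = [30.0, 80.0, 55.0, 20.0, 90.0]
```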


Abstract: This paper addresses a lot-sizing and scheduling problem that minimizes inventory and backlog costs on m parallel machines with sequence-dependent set-up times over t periods. Solutions are represented as product subsets, ordered and/or unordered, for each machine m at each period t. The optimal lot sizes are determined by a linear program. A genetic algorithm searches either over ordered or over unordered subsets (the latter implicitly ordered by a fast ATSP-type heuristic) to identify an overall optimal solution. Initial computational results compare the speed and solution quality of the ordered and unordered genetic algorithm approaches.
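A minimal sketch of the kind of fast ATSP-type ordering heuristic the abstract mentions, assuming a greedy nearest-neighbour rule over the sequence-dependent set-up times (the actual heuristic used in the paper is not specified, and the set-up matrix below is invented):

```python
def order_subset(products, setup, start=None):
    """Greedy nearest-neighbour (ATSP-type) heuristic: order an unordered
    product subset so that consecutive sequence-dependent set-up times
    stay small. `setup[a][b]` is the set-up time when product b follows
    product a."""
    remaining = set(products)
    current = start if start is not None else min(remaining)
    remaining.discard(current)
    order = [current]
    while remaining:
        # Pick the cheapest product to set up after the last one scheduled.
        current = min(remaining, key=lambda p: setup[order[-1]][p])
        remaining.remove(current)
        order.append(current)
    return order

# Hypothetical asymmetric set-up times for three products on one machine.
setup = {
    "A": {"B": 5, "C": 1},
    "B": {"A": 2, "C": 9},
    "C": {"A": 4, "B": 2},
}
sequence = order_subset(["A", "B", "C"], setup)
```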


A Bayesian optimisation algorithm for a nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. When a human scheduler works, he normally builds a schedule systematically following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not yet completed, thus having the ability to finish a schedule by using flexible, rather than fixed, rules. In this paper, we design a more human-like scheduling algorithm, by using a Bayesian optimisation algorithm to implement explicit learning from past solutions. A nurse scheduling problem from a UK hospital is used for testing. Unlike our previous work that used Genetic Algorithms to implement implicit learning [1], the learning in the proposed algorithm is explicit, i.e. we identify and mix building blocks directly. The Bayesian optimisation algorithm is applied to implement such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance for each variable is generated by using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, new rule strings have been obtained. Sets of rule strings are generated in this way, some of which will replace previous strings based on fitness. If stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. For clarity, consider the following toy example of scheduling five nurses with two rules (1: random allocation, 2: allocate nurse to low-cost shifts). 
At the beginning of the search, the probabilities of choosing rule 1 or 2 for each nurse are equal, i.e. 50%. After a few iterations, due to selection pressure and reinforcement learning, two solution pathways emerge: because purely low-cost or purely random allocation produces low-quality solutions, either rule 1 is used for the first 2-3 nurses and rule 2 for the remainder, or vice versa. In essence, the Bayesian network learns 'use rule 2 after using rule 1 two or three times', or vice versa. It should be noted that for our and most other scheduling problems, the structure of the network model is known and all variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, so learning amounts to 'counting' in the case of multinomial distributions. For our problem, we use four rules: Random, Cheapest Cost, Best Cover and Balance of Cost and Cover. In more detail, the steps of our Bayesian optimisation algorithm for nurse scheduling are: 1. Set t = 0, and generate an initial population P(0) at random; 2. Use roulette-wheel selection to choose a set of promising rule strings S(t) from P(t); 3. Compute the conditional probabilities of each node according to this set of promising solutions; 4. Assign each nurse using roulette-wheel selection based on the rules' conditional probabilities, generating a set of new rule strings O(t); 5. Create a new population P(t+1) by replacing some rule strings in P(t) with O(t), and set t = t+1; 6. If the termination conditions are not met (we use 2000 generations), go to step 2. Computational results from 52 real data instances demonstrate the success of this approach. They also suggest that the learning mechanism in the proposed approach might be suitable for other scheduling problems. Another direction for further research is to see whether there is a good constructing sequence for individual data instances, given a fixed nurse scheduling order.
If so, good patterns could be recognized and extracted as new domain knowledge. Using this extracted knowledge, we could assign specific rules to the corresponding nurses beforehand and schedule only the remaining nurses with all available rules, reducing the solution space. Acknowledgements: The work was funded by the UK Government's major funding agency, the Engineering and Physical Sciences Research Council (EPSRC), under grant GR/R92899/01. References [1] Aickelin U, "An Indirect Genetic Algorithm for Set Covering Problems", Journal of the Operational Research Society, 53(10): 1118-1126.
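Steps 3 and 4 of the algorithm above (counting-based learning of the conditional probabilities, then roulette-wheel generation of new rule strings) can be sketched as follows; the two-nurse promising set is a toy illustration, not data from the paper:

```python
import random
from collections import Counter

RULES = ["Random", "Cheapest Cost", "Best Cover", "Balance of Cost and Cover"]

def learn_probabilities(promising, n_nurses):
    """Step 3: with a known network structure and fully observed variables,
    learning reduces to counting rule frequencies per nurse position."""
    probs = []
    for i in range(n_nurses):
        counts = Counter(sol[i] for sol in promising)
        total = sum(counts.values())
        probs.append({r: counts[r] / total for r in RULES})
    return probs

def sample_rule_string(probs, rng):
    """Step 4: roulette-wheel selection of a rule for each nurse."""
    return [rng.choices(RULES, weights=[p[r] for r in RULES])[0]
            for p in probs]

rng = random.Random(0)
# Toy promising set for two nurses: both strings start with "Random".
promising = [["Random", "Best Cover"], ["Random", "Cheapest Cost"]]
probs = learn_probabilities(promising, n_nurses=2)
new_string = sample_rule_string(probs, rng)
```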


Poverty is a multi-dimensional socio-economic problem in most sub-Saharan African countries. The purpose of this study is to analyse the relationship between household size and poverty in low-income communities. The Northern Free State region in South Africa was selected as the study region. A sample of approximately 2 900 households was randomly selected within 12 poor communities in the region. A poverty line was calculated, and 74% of all households were found to live below it. Pearson's chi-square test indicated a positive relationship between household size and poverty in eleven of the twelve low-income communities. Households below the poverty line had larger household sizes than those above it. This finding contradicts findings in some other African countries, because South Africa has higher levels of modernisation and less access to land for subsistence farming. Effective provision of basic needs, community facilities and access to assets such as land could give poor households a better quality of life. Poor households also need access to economic opportunities, as well as adult education on financial management and reproductive health.
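The Pearson's chi-square test of independence used in the study can be illustrated on a hypothetical 2x2 table of household size versus poverty status; the counts below are invented for illustration, not the study's data:

```python
def chi_square_2x2(table):
    """Pearson's chi-square statistic for a 2x2 contingency table,
    e.g. rows = small/large household, cols = above/below poverty line."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    n = sum(row)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / n  # expected count under independence
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Hypothetical counts: large households appear more often below the line.
observed = [[120, 180],   # small households: above, below
            [60, 240]]    # large households: above, below
stat = chi_square_2x2(observed)
# stat well above 3.84 (5% critical value, 1 df) suggests an association.
```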


The aim of this study is to investigate the effectiveness of problem-based learning (PBL) on students' mathematical performance, comprising mathematics achievement and attitudes towards mathematics, for third and eighth grade students in Saudi Arabia. Mathematics achievement covers the knowing, applying, and reasoning domains, while attitudes towards mathematics cover 'liking learning mathematics', 'valuing mathematics', and 'confidence in learning mathematics'. The study goes deeper to examine the interaction of a PBL teaching strategy, with teachers trained either face-to-face or through self-directed learning, on students' performance (mathematics achievement and attitudes towards mathematics), and the interaction between students of different ability levels (high and low) and the PBL teaching strategy. It draws upon findings and techniques of the TIMSS international benchmarking studies. Mixed methods are used to analyse the quasi-experimental study data: one-way ANOVA, mixed ANOVA, and paired t-test models for the quantitative data, with semi-structured teacher interviews and the author's observations used to enrich understanding of PBL and mathematical performance. The findings show that the PBL teaching strategy significantly improves students' knowledge application, outperforming traditional teaching methods among third grade students; this improvement, however, occurred only in the group with face-to-face trained teachers. Furthermore, there is robust evidence that a PBL teaching strategy can raise students' liking of, and confidence in, learning mathematics significantly more than traditional teaching methods among third grade students.
However, there was no evidence that PBL could improve students' performance (mathematics achievement and attitudes towards mathematics) more than traditional teaching methods among eighth grade students. In eighth grade, the findings for low achieving students show significant improvement compared to high achieving students, whether or not PBL is applied. For third grade students, however, no significant difference in mathematical achievement between high and low achieving students was found. The results were not expected for high achieving students, and this is also discussed. The implications of these findings for mathematics education in Saudi Arabia are considered.


The size of online image datasets is constantly increasing. For an image dataset with millions of images, image retrieval becomes a seemingly intractable problem for exhaustive similarity search algorithms. Hashing methods, which encode high-dimensional descriptors into compact binary strings, have become very popular because of their efficiency in both search and storage. In the first part, we propose a multimodal retrieval method based on latent feature models. The procedure consists of a nonparametric Bayesian framework for learning semantically meaningful abstract features underlying a multimodal dataset, a probabilistic retrieval model that allows cross-modal queries, and an extension for relevance feedback. In the second part, we focus on supervised hashing with kernels. We describe a flexible hashing procedure that treats binary codes and pairwise semantic similarity as latent and observed variables, respectively, in a probabilistic model based on Gaussian processes for binary classification, and we present a scalable inference algorithm using the sparse pseudo-input Gaussian process (SPGP) model and distributed computing. In the last part, we define an incremental hashing strategy for dynamic databases to which new images are added frequently. The method is based on a two-stage classification framework using binary and multi-class SVMs, and it enforces balance in the binary codes through an imbalance penalty to obtain higher-quality codes. We learn hash functions with an efficient algorithm in which the NP-hard problem of finding optimal binary codes is solved via cyclic coordinate descent, and the SVMs are trained in a parallelized, incremental manner. For modifications such as adding images from an unseen class, we propose an incremental procedure for effective and efficient updates to the previous hash functions.
Experiments on three large-scale image datasets demonstrate that the incremental strategy is capable of efficiently updating hash functions to the same retrieval performance as hashing from scratch.
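The efficiency argument for hashing-based retrieval (one XOR plus a popcount per comparison in Hamming space) can be illustrated with a minimal sketch; the 8-bit codes and the database below are hypothetical:

```python
def hamming(a, b):
    """Hamming distance between two binary codes stored as ints."""
    return bin(a ^ b).count("1")

def search(query_code, database_codes, k=2):
    """Exhaustive similarity search becomes cheap when descriptors are
    hashed to short binary strings: ranking by Hamming distance costs
    one XOR + popcount per database entry."""
    ranked = sorted(range(len(database_codes)),
                    key=lambda i: hamming(query_code, database_codes[i]))
    return ranked[:k]

# Hypothetical 8-bit codes for four database images.
db = [0b10110010, 0b10110011, 0b01001100, 0b11111111]
top = search(0b10110010, db, k=2)
```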


Objectives: To identify reasons for neonatal admission and death, with the aim of determining areas needing improvement. Method: A retrospective chart review was conducted on records of neonates admitted to the Mulago National Referral Hospital Special Care Baby Unit (SCBU) from 1st November 2013 to 31st January 2014. The final diagnosis was generated after two paediatricians analysed the sequence of the clinical course. Results: A total of 1192 neonates were admitted; the majority (83.3%) were in-born. The main reasons for admission were prematurity (37.7%) and low APGAR scores (27.9%). Overall mortality was 22.1% (out-born 33.6%; in-born 19.8%). Half (52%) of these deaths occurred in the first 24 hours of admission. The major contributors to mortality were prematurity with hypothermia and respiratory distress (33.7%), followed by birth asphyxia with HIE grade III (24.6%) and presumed sepsis (8.7%). The majority of stable at-risk neonates (318/330; i.e. low APGAR or prematurity without comorbidity) survived. Factors independently associated with death included gestational age <30 weeks (p = 0.002), birth weight <1500 g (p = 0.007) and a 5-minute APGAR score <7 (p = 0.001). Neither place of birth nor delayed and after-hours admissions were independently associated with mortality. Conclusion and recommendations: The mortality rate in SCBU is high. Prematurity and its complications were major contributors to mortality. The management of hypothermia and respiratory distress needs scaling up. A step-down unit for monitoring stable at-risk neonates is needed in order to decongest SCBU.


Children who have experienced a traumatic brain injury (TBI) are at risk for a variety of maladaptive cognitive, behavioral and social outcomes (Yeates et al., 2007). Research on the social problem solving (SPS) abilities of children with TBI indicates a preference for lower-level strategies when compared to children who have experienced an orthopedic injury (OI; Hanten et al., 2008, 2011). Research on SPS in non-injured populations has highlighted the significance of the identity of the social partner (Rubin et al., 2006). Within the pediatric TBI literature, few studies have utilized friends as the social partner in SPS contexts, and fewer have used in-vivo SPS assessments. The current study aimed to build on existing research on SPS in children with TBI by utilizing an observational coding scheme to capture in-vivo problem solving behaviors between children with TBI and a best friend. The study included children with TBI (n = 41), children with OI (n = 43), and a non-injured, typically developing group (n = 41). All participants were observed completing a task with a friend and completed a measure of friendship quality. SPS was assessed using an observational coding scheme that captured SPS goals, strategies, and outcomes. It was expected that children with TBI would produce fewer successes, fewer direct strategies, and more avoidant strategies. ANOVAs tested for group differences in SPS successes, direct strategies and avoidant strategies, and further analyses tested whether positive or negative friendship quality moderated the relation between group type and SPS behaviors. Group differences were found between the TBI and non-injured groups in the SPS direct strategy of commands. No group differences were found for the other SPS outcome variables of interest. Moderation analyses partially supported the study hypotheses regarding friendship quality as a moderator variable.
Additional analyses examined SPS goal-strategy sequencing and grouped SPS goals into high-cost and low-cost categories. Results showed a trend supporting the hypothesis that children with TBI have fewer SPS successes, especially with high-cost goals, compared to the other two groups. The findings are discussed with emphasis on the moderation results involving children with severe TBI.


Les antimoniures sont des semi-conducteurs III-V prometteurs pour le développement de dispositifs optoélectroniques puisqu'ils ont une grande mobilité d'électrons, une large gamme spectrale d'émission ou de détection et offrent la possibilité de former des hétérostructures confinées dont la recombinaison est de type I, II ou III. Bien qu'il existe plusieurs publications sur la fabrication de dispositifs utilisant un alliage d'In(x)Ga(1-x)As(y)Sb(1-y) qui émet ou détecte à une certaine longueur d'onde, les détails, à savoir comment sont déterminés les compositions et surtout les alignements de bande, sont rarement explicites. Très peu d'études fondamentales sur l'incorporation d'indium et d'arsenic sous forme de tétramères lors de l'épitaxie par jets moléculaires existent, et les méthodes afin de déterminer l'alignement des bandes des binaires qui composent ces alliages donnent des résultats variables. Un modèle a été construit et a permis de prédire l'alignement des bandes énergétiques des alliages d'In(x)Ga(1-x)As(y)Sb(1-y) avec celles du GaSb pour l'ensemble des compositions possibles. Ce modèle tient compte des effets thermiques, des contraintes élastiques et peut aussi inclure le confinement pour des puits quantiques. De cette manière, il est possible de prédire la transition de type de recombinaison en fonction de la composition. Il est aussi montré que l'indium ségrègue en surface lors de la croissance par épitaxie par jets moléculaires d'In(x)Ga(1-x)Sb sur GaSb, ce qui avait déjà été observé pour ce type de matériau. Il est possible d'éliminer le gradient de composition à cette interface en mouillant la surface d'indium avant la croissance de l'alliage. L'épaisseur d'indium en surface dépend de la température et peut être évaluée par un modèle simple simulant la ségrégation. Dans le cas d'un puits quantique, il y aura une seconde interface GaSb sur In(x)Ga(1-x)Sb où l'indium de surface ira s'incorporer. 
La croissance de quelques monocouches de GaSb à basse température immédiatement après la croissance de l'alliage permet d'incorporer rapidement ces atomes d'indium et de garder la seconde interface abrupte. Lorsque la composition d'indium ne change plus dans la couche, cette composition correspond au rapport de flux d'atomes d'indium sur celui des éléments III. L'arsenic, dont la source fournit principalement des tétramères, ne s'incorpore pas de la même manière. Les tétramères occupent deux sites en surface et doivent interagir par paire afin de créer des dimères d'arsenic. Ces derniers pourront alors être incorporés dans l'alliage. Un modèle de cinétique de surface a été élaboré afin de rendre compte de la diminution d'incorporation d'arsenic en augmentant le rapport V/III pour une composition nominale d'arsenic fixe dans l'In(x)Ga(1-x)As(y)Sb(1-y). Ce résultat s'explique par le fait que les réactions de deuxième ordre dans la décomposition des tétramères d'arsenic ralentissent considérablement la réaction d'incorporation et permettent à l'antimoine d'occuper majoritairement la surface. Cette observation montre qu'il est préférable d'utiliser une source de dimères d'arsenic, plutôt que de tétramères, afin de mieux contrôler la composition d'arsenic dans la couche. Des puits quantiques d'In(x)Ga(1-x)As(y)Sb(1-y) sur GaSb ont été fabriqués et caractérisés optiquement afin d'observer le passage de recombinaison de type I à type II. Cependant, celui-ci n'a pas pu être observé puisque les spectres étaient dominés par un niveau énergétique dans le GaSb dont la source n'a pu être identifiée. Un problème dans la source de gallium pourrait être à l'origine de ce défaut et la résolution de ce problème est essentielle à la continuité de ces travaux.
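The abstract does not name its "simple model simulating segregation"; a standard Muraki-type segregation model, assumed here purely for illustration, reproduces the described behaviour: a fraction R of the surface indium rides the growth front into the next monolayer, producing a graded interface unless the surface is pre-wetted with indium:

```python
def muraki_profile(x0, R, n_layers):
    """Muraki-type segregation model (an assumption; not necessarily the
    model used in this work): at each monolayer, a fraction R of the
    available indium segregates to the surface and the rest incorporates.
    Returns the incorporated In fraction per monolayer and the final
    floating surface coverage."""
    surface = 0.0
    profile = []
    for _ in range(n_layers):
        available = x0 + surface           # deposited flux + floating In
        profile.append((1.0 - R) * available)  # incorporated this layer
        surface = R * available                # carried to the next layer
    return profile, surface

# Nominal In fraction 0.3, segregation coefficient 0.8 (illustrative values).
profile, surface = muraki_profile(x0=0.3, R=0.8, n_layers=50)
# The incorporated composition grades up toward x0 = 0.3 over many layers.
```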


Dissertation (Master's degree)—Universidade de Brasília, Faculdade de Agronomia e Medicina Veterinária, Programa de Pós-Graduação em Agronegócios, 2016.


Motion planning, or trajectory planning, commonly refers to the process of converting high-level task specifications into low-level control commands that can be executed on the system of interest. The system differs across applications: it can be an autonomous vehicle, an Unmanned Aerial Vehicle (UAV), a humanoid robot, or an industrial robotic arm. As human-machine interaction is essential in many of these systems, safety is fundamental and crucial. Many applications also involve performing a task in an optimal manner within a given time constraint. Therefore, this thesis focuses on two aspects of the motion planning problem. The first is the verification and synthesis of safe controls for autonomous ground and air vehicles in collision avoidance scenarios. The second is high-level planning for autonomous vehicles under timed temporal constraints. In the first part, we propose a verification method to prove the safety and robustness of a path planner and the path-following controls based on reachable sets, and demonstrate the method on quadrotor and automobile applications. We then propose a reachable-set-based collision avoidance algorithm for UAVs: instead of the traditional approach of collision avoidance between trajectories, we propose a scheme based on reachable sets and tubes, formulating the problem as a convex optimization that seeks a control set design for the aircraft to avoid collision. We apply this approach to collision avoidance scenarios of quadrotors and fixed-wing aircraft. In the second part, we address high-level planning problems with timed temporal logic constraints. First, we present an optimization-based method for path planning of a mobile robot subject to timed temporal constraints in a dynamic environment.
Temporal logic (TL) can express very complex task specifications such as safety, coverage, motion sequencing, etc. We use metric temporal logic (MTL) to encode task specifications with timing constraints, translate the MTL formulae into mixed-integer linear constraints, and solve the associated optimization problem with a mixed-integer linear program solver. We have applied this approach to several case studies in complex dynamical environments subject to timed temporal specifications. Second, we present a timed-automaton-based method for planning under given timed temporal logic specifications. We use metric interval temporal logic (MITL), a member of the MTL family, to represent the task specification, and provide a constructive way to generate a timed automaton, together with methods to search for accepting runs on the automaton that yield an optimal motion (or path) sequence for the robot to complete the task.
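As a toy illustration of the semantics of a timed specification, the sketch below checks a formula of the form F_[a,b] (x in region), "eventually, within the time window [a, b], be inside the region", on a sampled trajectory. The thesis encodes such formulae as mixed-integer linear constraints for a solver; this direct evaluation, on a hypothetical 1-D trajectory, only illustrates what the constraints must capture:

```python
def eventually_in(trajectory, region, interval):
    """Check an MITL-style specification F_[a,b] (x in region) on a
    time-stamped trajectory: the robot must be inside `region` at some
    sample whose time stamp lies in [a, b]. Toy 1-D illustration."""
    a, b = interval
    lo, hi = region
    return any(a <= t <= b and lo <= x <= hi for t, x in trajectory)

# (time, position) samples of a hypothetical 1-D robot.
traj = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.5), (3.0, 6.0)]
# "Between t = 1.5 and t = 2.5, be inside the region [4, 5]."
ok = eventually_in(traj, region=(4.0, 5.0), interval=(1.5, 2.5))
```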


The generation of identical droplets of controllable size in the micrometer range is a problem of much interest owing to the numerous technological applications of such droplets. This work reports an investigation of the regime of periodic emission of droplets from an electrified oscillating meniscus of a liquid of low viscosity and high electrical conductivity attached to the end of a capillary tube, which may be used to produce droplets more than ten times smaller than the diameter of the tube. To attain this periodic microdripping regime, termed axial spray mode II by Juraschek and Röllgen [R. Juraschek and F. W. Röllgen, Int. J. Mass Spectrom. 177, 1 (1998)], liquid is continuously supplied through the tube at a given constant flow rate, while a dc voltage is applied between the tube and a nearby counter electrode. The resulting electric field induces a stress at the surface of the liquid that stretches the meniscus until, in certain ranges of voltage and flow rate, it develops a ligament that eventually detaches, forming a single droplet, in a process that repeats itself periodically. While it is being stretched, the ligament develops a conical tip that emits ultrafine droplets, but the total mass emitted is practically contained in the main droplet. In the parametrical domain studied, we find that the process depends on two main dimensionless parameters, the flow rate nondimensionalized with the diameter of the tube and the capillary time, q, and the electric Bond number BE, which is a nondimensional measure of the square of the applied voltage. The meniscus oscillation frequency made nondimensional with the capillary time, f, is of order unity for very small flow rates and tends to decrease as the inverse of the square root of q for larger values of this parameter. The product of the meniscus mean volume times the oscillation frequency is nearly constant. 
The characteristic length and width of the liquid ligament immediately before its detachment approximately scale as powers of the flow rate and depend only weakly on the applied voltage. The diameter of the main droplets, nondimensionalized with the diameter of the tube, satisfies d_d ≈ (6/π)^{1/3} (q/f)^{1/3}, from mass conservation, while the electric charge of these droplets is about 1/4 of the Rayleigh charge. At the minimum flow rate compatible with the periodic regime, the dimensionless diameter of the droplets is smaller than one-tenth, which presents a way to use electrohydrodynamic atomization to generate droplets of highly conducting liquids in the micron-size range, in marked contrast with the cone-jet electrospray, whose typical droplet size is in the nanometric range for these liquids. In contrast with other microdripping regimes, where the mass is emitted through the periodic formation of a narrow capillary jet, the present regime gives a single droplet per oscillation, except for the almost massless fine aerosol emitted in the form of an electrospray.
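The mass-conservation relation can be evaluated directly: one droplet carries the dimensionless volume q/f emitted per oscillation, so its diameter is that of a sphere of that volume. The numerical values below are illustrative, not from the paper:

```python
from math import pi

def droplet_diameter(q, f):
    """Dimensionless main-droplet diameter from mass conservation:
    the volume emitted per oscillation is q/f, and a sphere of volume V
    has diameter (6 V / pi)**(1/3), hence
    d_d ≈ (6/pi)**(1/3) * (q/f)**(1/3)."""
    return (6.0 / pi * q / f) ** (1.0 / 3.0)

# Illustrative small dimensionless flow rate at order-unity frequency:
# the droplet comes out roughly ten times smaller than the tube.
d = droplet_diameter(q=0.001, f=1.0)
```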


This paper presents a high-accuracy, fully analytical formulation to compute the miss distance and collision probability of two approaching objects following an impulsive collision avoidance maneuver. The formulation hinges on a linear relation between the applied impulse and the objects' relative motion in the b-plane, which allows one to formulate the maneuver optimization problem as an eigenvalue problem coupled to a simple nonlinear algebraic equation. The optimization criterion consists of minimizing the maneuver cost in terms of delta-V magnitude to either maximize the collision miss distance or minimize the Gaussian collision probability. The algorithm, whose accuracy is verified in representative mission scenarios, can be employed for collision avoidance maneuver planning at reduced computational cost compared with fully numerical algorithms.
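A sketch of why a linear b-plane map leads to an eigenvalue problem, under the assumption delta_b = M @ delta_v (the sensitivity matrix M below is hypothetical, and this is not the paper's algorithm): for a fixed delta-V magnitude, the miss distance |M delta_v| is maximized along the dominant eigenvector of M^T M, found here by power iteration:

```python
def best_impulse_direction(M, iters=200):
    """Dominant eigenvector of M^T M via power iteration: the delta-V
    direction that maximizes miss distance per unit delta-V magnitude,
    assuming the linear b-plane map delta_b = M @ delta_v."""
    n = len(M[0])
    MtM = [[sum(M[k][i] * M[k][j] for k in range(len(M)))
            for j in range(n)] for i in range(n)]
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(MtM[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Hypothetical 2x3 sensitivity matrix (b-plane is 2-D, delta-V is 3-D):
# the first delta-V component moves the b-plane miss vector most.
M = [[3.0, 0.0, 0.0],
     [0.0, 1.0, 0.0]]
v = best_impulse_direction(M)
```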


Coprime and nested sampling are well-known deterministic sampling techniques that operate at rates significantly lower than the Nyquist rate, yet allow perfect reconstruction of the spectra of wide sense stationary signals. However, theoretical guarantees for these samplers assume ideal conditions such as synchronous sampling and the ability to perfectly compute statistical expectations. This thesis studies the performance of coprime and nested samplers in the spatial and temporal domains when these assumptions are violated. In the spatial domain, the robustness of these samplers is studied by considering arrays with perturbed sensor locations (with unknown perturbations). Simplified expressions for the Fisher Information matrix for perturbed coprime and nested arrays are derived, which explicitly highlight the role of the co-array. It is shown that even in the presence of perturbations, it is possible to resolve $O(M^2)$ sources under appropriate conditions on the size of the grid. The assumption of small perturbations leads to a novel "bi-affine" model in terms of source powers and perturbations. The redundancies in the co-array are then exploited to eliminate the nuisance perturbation variable and reduce the bi-affine problem to a linear underdetermined (sparse) problem in source powers. This thesis also studies the robustness of coprime sampling to a finite number of samples and to sampling jitter, by analyzing their effects on the quality of the estimated autocorrelation sequence. A variety of bounds on the error introduced by such non-ideal sampling schemes are computed by considering a statistical model for the perturbation. They indicate that coprime sampling leads to stable estimation of the autocorrelation sequence in the presence of small perturbations. Under appropriate assumptions on the distribution of WSS signals, sharp bounds on the estimation error are established, which indicate that the error decays exponentially with the number of samples.
The theoretical claims are supported by extensive numerical experiments.
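The co-array property underlying these guarantees can be sketched for a standard coprime geometry; the (M, N) = (3, 5) example below is illustrative:

```python
def difference_coarray(positions):
    """Set of pairwise differences (lags) achievable by a sensor array."""
    return {a - b for a in positions for b in positions}

def coprime_array(M, N):
    """Standard coprime geometry: one subarray at multiples of M (N sensors),
    the other at multiples of N (2M sensors), with M and N coprime."""
    return sorted({M * n for n in range(N)} | {N * m for m in range(2 * M)})

# With M = 3, N = 5 the physical array has O(M + N) sensors, yet its
# difference co-array covers a contiguous run of O(MN) lags -- the
# property that lets coprime samplers resolve O(M^2) sources.
pos = coprime_array(3, 5)
lags = difference_coarray(pos)
```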