960 results for Binary cuckoo searches
Abstract:
Introduction: A variety of subjective experiences have been reported to be associated with the symptom expression of obsessive-compulsive disorder (OCD) and Tourette syndrome (TS). First described in TS patients, these subjective experiences have been defined in different ways. There is no consensus in the literature on how to best define subjective experiences. This lack of consensus may hinder the understanding of study results and prevent their inclusion in the search for etiological factors associated with OCD and TS. Methods: The objective of this article was to review descriptions of subjective experiences in the English-language literature from 1980 to 2007. This meta-analytic review was carried out using the MEDLINE, PsycINFO, and Cochrane Library databases with the following search terms: premonitory urges, sensory tics, "just-right" perceptions, sensory phenomena, sensory experiences, incompleteness, "not just-right" phenomena, obsessive-compulsive disorder and TS, including OCD and/or TS, in all combinations. We also searched the references cited in each article previously found that referred to the aforementioned terms. Thirty-one articles were included in the study. Results: Subjective experiences, in particular sensory phenomena, were important phenotypic variables in the characterization of the tic-related OCD subtype and were more frequent in the early-onset OCD subtype. There is a paucity of studies using structured interviews to assess sensory phenomena, their epidemiology, and the etiological mechanisms associated with them. Conclusion: The current review provides some evidence that sensory phenomena can be useful for identifying more homogeneous subgroups of OCD and TS patients and should be included as important phenotypic variables in future clinical, genetic, neuroimaging, and treatment-response studies.
Abstract:
Background: Mucosal leishmaniasis is caused mainly by Leishmania braziliensis and occurs months or years after the cutaneous lesions. This progressive disease destroys cartilage and osseous structures of the face, pharynx and larynx. Objective and methods: The aim of this study was to analyse the association of clinical and epidemiological findings, diagnosis and treatment with the outcome and recurrence of mucosal leishmaniasis, using a binary logistic regression model applied to 140 patients with mucosal leishmaniasis from a Brazilian centre. Results: The median age of the patients was 57.5 years, and systemic arterial hypertension was the most prevalent secondary disease found in patients with mucosal leishmaniasis (43%). Diabetes, chronic nephropathy, viral hepatitis, allergy and coagulopathy were each found in less than 10% of patients. Human immunodeficiency virus (HIV) infection was found in 7 of 140 patients (5%). Rhinorrhea (47%) and epistaxis (75%) were the most common symptoms. N-methyl-glucamine showed a cure rate of 91% and a recurrence rate of 22%. Pentamidine showed a similar cure rate (91%) and recurrence rate (25%). Fifteen patients received itraconazole, with a cure rate of 73% and a recurrence rate of 18%. Amphotericin B was used in 30 patients, with a response rate of 82% and a recurrence rate of 7%. The binary logistic regression analysis demonstrated that systemic arterial hypertension and HIV infection were associated with treatment failure (P < 0.05). Conclusion: Current first-line mucosal leishmaniasis therapy achieves an adequate cure rate but later recurrence. HIV infection and systemic arterial hypertension should be investigated before starting treatment of mucosal leishmaniasis. Conflicts of interest: The authors are not part of any associations or commercial relationships that might represent conflicts of interest in the writing of this study (e.g. pharmaceutical stock ownership, consultancy, advisory board membership, relevant patents, or research funding).
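To make the statistical approach concrete, the following Python sketch fits the kind of binary logistic regression described above, with treatment failure as the outcome and hypertension and HIV status as covariates. The data, column names and coefficient values are hypothetical stand-ins, since the patient records are not reproduced here.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 140  # cohort size reported in the abstract

# Simulated stand-in data; the real patient records are not available here.
df = pd.DataFrame({
    "hypertension": rng.binomial(1, 0.43, n),  # ~43% prevalence, as reported
    "hiv": rng.binomial(1, 0.05, n),           # ~5% prevalence, as reported
})
# Hypothetical coefficients, used only to generate an outcome to fit.
logit_p = -2.0 + 1.2 * df["hypertension"] + 1.5 * df["hiv"]
df["failure"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

X = sm.add_constant(df[["hypertension", "hiv"]])
model = sm.Logit(df["failure"], X).fit(disp=0)
print(model.summary())       # coefficients, standard errors, p-values
print(np.exp(model.params))  # odds ratios for each comorbidity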
Abstract:
Background: Brazilian Quilombos are Afro-derived communities founded mainly by fugitive slaves between the 16th and 19th centuries; they can be recognized today by ancestral and cultural characteristics. Each of these remnant communities, however, has its own particular history, which includes the migration of people of non-African descent. Methods: The present work presents a proposal for the origin of the male founders of Brazilian quilombos based on Y-haplogroup distribution. Y haplogroups, based on 16 binary markers (92R7, SRY2627, SRY4064, SRY10831.1 and .2, M2, M3, M09, M34, M60, M89, M213, M216, P2, P3 and YAP), were analysed in 98 DNA samples from genetically unrelated men from three rural Brazilian Afro-derived communities (Mocambo, Rio das Ras and Kalunga) in order to estimate male geographic origin. Results: The data indicated significant differences among these communities. A high frequency of non-African haplogroups was observed in all communities. Conclusions: This observation suggests an admixture process that occurred over generations, with directional mating between European males and African female slaves that must have taken place on farms; that is, the admixture occurred before the slaves escaped and before the foundation of the quilombos.
Abstract:
In this article five women explore (female) embodiment in academic work in current workplaces. In a week-long collective biography workshop they produced written memories of themselves in their various workplaces and memories of themselves as children and as students. These memories then became the texts out of which the analysis was generated. The authors examine the constitutive and seductive effects of neoliberal discourses and practices, and in particular, the assembling of academic bodies as particular kinds of working bodies. They use the concept of chiasma, or crossing over, to trouble some aspects of binary thinking about bodies and about the relations between bodies and discourses. They examine the way that we simultaneously resist and appropriate, and are seduced by and appropriated within, neoliberal discourses and practices.
Abstract:
The probit model is a popular device for explaining binary choice decisions in econometrics. It has been used to describe choices such as labor force participation, travel mode, home ownership, and type of education. These and many more examples can be found in papers by Amemiya (1981) and Maddala (1983). Given the contribution of economics towards explaining such choices, and given the nature of the data that are collected, prior information on the relationship between a choice probability and several explanatory variables frequently exists. Bayesian inference is a convenient vehicle for including such prior information. Given the increasing popularity of Bayesian inference, it is useful to ask whether inferences from a probit model are sensitive to the choice between Bayesian and sampling-theory techniques. Of interest is the sensitivity of inference on coefficients, probabilities, and elasticities. We consider these issues in a model designed to explain the choice between fixed and variable interest rate mortgages. Two Bayesian priors are employed: a uniform prior on the coefficients, designed to be noninformative for the coefficients, and an inequality-restricted prior on the signs of the coefficients. We often know, a priori, whether increasing the value of a particular explanatory variable will have a positive or negative effect on a choice probability. This knowledge can be captured by using a prior probability density function (pdf) that is truncated to be positive or negative. Thus, three sets of results are compared: those from maximum likelihood (ML) estimation, those from Bayesian estimation with an unrestricted uniform prior on the coefficients, and those from Bayesian estimation with a uniform prior truncated to accommodate inequality restrictions on the coefficients.
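As a rough illustration of the comparison described above, the Python sketch below fits a probit model by maximum likelihood and then samples the posterior under a flat prior truncated to impose a sign restriction on the slope, using a simple random-walk Metropolis sampler. The simulated data, the single covariate and the assumed non-negativity of its coefficient are placeholders, not the mortgage-choice specification of the paper.

import numpy as np
from scipy.stats import norm
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
X = sm.add_constant(x)                      # intercept + one covariate
beta_true = np.array([-0.3, 0.8])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

# Maximum likelihood benchmark.
ml = sm.Probit(y, X).fit(disp=0)
beta_ml = np.asarray(ml.params, dtype=float)
print("ML estimates:", beta_ml.round(3))

def loglik(beta):
    p = np.clip(norm.cdf(X @ beta), 1e-12, 1 - 1e-12)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def in_support(beta):
    # Assumed sign restriction for illustration: the slope must be >= 0.
    return beta[1] >= 0.0

# Random-walk Metropolis under a flat prior truncated to the sign constraint:
# proposals outside the constraint have zero prior and are always rejected.
draws, beta, ll = [], beta_ml.copy(), loglik(beta_ml)
for _ in range(5000):
    prop = beta + rng.normal(scale=0.1, size=beta.shape)
    if in_support(prop):
        ll_prop = loglik(prop)
        if np.log(rng.uniform()) < ll_prop - ll:
            beta, ll = prop, ll_prop
    draws.append(beta.copy())

post = np.array(draws[1000:])               # discard burn-in
print("Posterior means:", post.mean(axis=0).round(3))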
Abstract:
Codes C_1, ..., C_M of length n over F_q and an M x N matrix A over F_q define a matrix-product code C = [C_1 ... C_M]·A consisting of all matrix products [c_1 ... c_M]·A. This generalizes the (u | u+v)-, (u+v+w | 2u+v | u)-, (a+x | b+x | a+b+x)-, (u+v | u-v)-, etc. constructions. We study matrix-product codes using linear algebra. This provides a basis for a unified analysis of |C|, of d(C), the minimum Hamming distance of C, and of the dual code C^⊥. It also reveals an interesting connection with MDS codes. We determine |C| when A is non-singular. To obtain a lower bound on d(C), we need A to be 'non-singular by columns' (NSC). We investigate NSC matrices. We show that Generalized Reed-Muller codes are iterative NSC matrix-product codes, generalizing the construction of Reed-Muller codes, as are the ternary 'Main Sequence codes'. We obtain a simpler proof of the minimum Hamming distance of such families of codes. If A is square and NSC, C^⊥ can be described using C_1^⊥, ..., C_M^⊥ and a transformation of A. This yields d(C^⊥). Finally, we show that an NSC matrix-product code is a generalized concatenated code.
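A small brute-force check can make the construction concrete. The Python sketch below (an assumed example, not taken from the paper) realizes the classical (u | u+v) construction as a matrix-product code with A = [[1, 1], [0, 1]] over F_2, verifies |C| = |C_1||C_2| for this non-singular A, and confirms the familiar distance relation d(C) = min(2 d(C_1), d(C_2)) for the chosen component codes.

import itertools
import numpy as np

def all_codewords(generator):
    """Enumerate a binary linear code from its generator matrix."""
    k = generator.shape[0]
    return {tuple((np.array(m) @ generator) % 2)
            for m in itertools.product([0, 1], repeat=k)}

def min_distance(code):
    return min(sum(c) for c in code if any(c))

# C1: [4,1] repetition code, C2: [4,3] even-weight code over F_2.
G1 = np.array([[1, 1, 1, 1]])
G2 = np.array([[1, 1, 0, 0],
               [0, 1, 1, 0],
               [0, 0, 1, 1]])
C1, C2 = all_codewords(G1), all_codewords(G2)

A = np.array([[1, 1],
              [0, 1]])   # non-singular, so |C| = |C1| * |C2|

# Matrix-product codewords: the n x 2 matrix [c1 c2] times A,
# with the resulting columns concatenated, i.e. (c1 | c1 + c2).
C = set()
for c1 in C1:
    for c2 in C2:
        M = (np.vstack([c1, c2]).T @ A) % 2   # columns: c1 and c1 + c2
        C.add(tuple(M.T.flatten()))

print(len(C), len(C1) * len(C2))                                      # 16 16
print(min_distance(C), min(2 * min_distance(C1), min_distance(C2)))   # 2 2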
Abstract:
Purpose: Hemiplegic shoulder pain can affect up to 70% of stroke patients and can have an adverse impact on rehabilitation outcomes. This article aims to review the literature on the suggested causes of hemiplegic shoulder pain and the therapeutic techniques that can be used to prevent or treat it. On the basis of this review, the components of an optimal management programme for hemiplegic shoulder pain are explored. Method: English language articles in the CINAHL and MEDLINE databases between 1990 and 2000 were reviewed. These were supplemented by citation tracking and manual searches. Results: A management programme for hemiplegic shoulder pain could comprise the following components: provision of an external support for the affected upper limb when the patient is seated, careful positioning in bed, daily static positional stretches, motor retraining and strapping of the scapula to maintain postural tone and symmetry. Conclusions: Research is required to evaluate the effectiveness of the components of the proposed management programme for the prevention and treatment of hemiplegic shoulder pain and to determine in what combination they achieve the best outcomes.
Abstract:
There is overwhelming evidence for the existence of substantial genetic influences on individual differences in general and specific cognitive abilities, especially in adults. The actual localization and identification of genes underlying variation in cognitive abilities and intelligence has only just started, however. Successes are currently limited to neurological mutations with rather severe cognitive effects. The current approaches to tracing genes responsible for variation in the normal range of cognitive ability consist of large-scale linkage and association studies. These are hampered by the usual problems of low statistical power to detect quantitative trait loci (QTLs) of small effect. One strategy to boost the power of genomic searches is to employ endophenotypes of cognition derived from the booming field of cognitive neuroscience. This special issue of Behavior Genetics reports on one of the first genome-wide association studies for general IQ. A second paper summarizes candidate genes for cognition, based on animal studies. A series of papers then introduces two additional levels of analysis in the "black box" between genes and cognitive ability: (1) behavioral measures of information-processing speed (inspection time, reaction time, rapid naming) and working memory capacity (performance on single or dual tasks of verbal and spatio-visual working memory), and (2) electrophysiologically derived measures of brain function (e.g., event-related potentials). The obvious way to assess the reliability and validity of these endophenotypes and their usefulness in the search for cognitive ability genes is through the examination of their genetic architecture in twin family studies. Papers in this special issue show that much of the association between intelligence and speed of information processing/brain function is due to a common gene or set of genes, and thereby demonstrate the usefulness of considering these measures in gene-hunting studies for IQ.
Abstract:
The vacancy solution theory of adsorption is re-formulated here through the mass-action law, and placed in a convenient framework permitting the development of thermodynamically consistent isotherms. It is shown that both the multisite Langmuir model and the classical vacancy solution theory expression are special cases of the more general approach when the Flory-Huggins activity coefficient model is used, with the former being the thermodynamically consistent result. The improved vacancy solution theory approach is further extended here to heterogeneous adsorbents by considering the pore-width-dependent potential along with a pore size distribution. However, application of the model to numerous hydrocarbons as well as other adsorptives on microporous activated carbons shows that the multisite model has difficulty in the presence of a pore size distribution, because pores of different sizes can have different numbers of adsorbed layers and therefore different site occupancies. On the other hand, use of the classical vacancy solution theory expression for the local isotherm leads to a good simultaneous fit of the data, while yielding a site diameter of about 0.257 nm, consistent with that expected for the potential well in aromatic rings on carbon pore surfaces. It is argued that the classical approach is successful because the Flory-Huggins term effectively represents adsorbate interactions in disguise. When used together with the ideal adsorbed solution theory, the heterogeneous vacancy solution theory successfully predicts binary adsorption equilibria, and is found to perform better than the multisite Langmuir as well as the heterogeneous Langmuir model.
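For orientation, one common form of the multisite (Nitta-type) Langmuir isotherm is b·P = θ/(1 − θ)^n, where n is the number of sites occupied per molecule; the Python sketch below simply inverts this relation numerically and compares it with the single-site (n = 1) Langmuir result. The functional form and parameter values are assumptions for illustration, not taken from the paper.

import numpy as np
from scipy.optimize import brentq

def multisite_coverage(P, b=0.5, n=3):
    """Solve b*P = theta / (1 - theta)**n for theta in (0, 1)."""
    f = lambda theta: theta / (1.0 - theta) ** n - b * P
    return brentq(f, 0.0, 1.0 - 1e-12)

for P in (0.1, 1.0, 10.0, 100.0):
    theta = multisite_coverage(P)
    langmuir = 0.5 * P / (1 + 0.5 * P)   # single-site (n = 1) reference
    print(f"P = {P:7.1f}   multisite theta = {theta:.3f}   Langmuir = {langmuir:.3f}")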
Abstract:
Five kinetic models for the adsorption of hydrocarbons on activated carbon are compared and investigated in this study. These models assume different mass transfer mechanisms within the porous carbon particle. They are: (a) dual pore and surface diffusion (MSD), (b) macropore, surface, and micropore diffusion (MSMD), (c) macropore, surface diffusion and finite mass exchange (FK), (d) finite mass exchange (LK), and (e) macropore and micropore diffusion (BM). These models are discriminated using single-component kinetic data for ethane and propane, as well as multicomponent kinetic data for their binary mixtures, measured on two commercial activated carbon samples (Ajax and Norit) under various conditions. Adsorption energetic heterogeneity is considered in all models to account for the heterogeneity of the system. It is found that, in general, the models that assume a diffusion flux of the adsorbed phase along the particle scale give a better description of the kinetic data.
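As a minimal sketch of the simplest model class mentioned above, a finite-mass-exchange (linear driving force) rate law can be written as dq/dt = k(q_eq − q); the code below integrates it numerically and checks the result against the analytical exponential approach to equilibrium. The rate constant and equilibrium loading are arbitrary placeholders, and this is not a reconstruction of the authors' MSD/MSMD/FK/LK/BM formulations.

import numpy as np
from scipy.integrate import solve_ivp

k = 0.05          # mass-exchange coefficient, 1/s (assumed)
q_eq = 2.0        # equilibrium loading, mmol/g (assumed)

def ldf(t, q):
    # Linear driving force: uptake rate proportional to distance from equilibrium.
    return k * (q_eq - q)

sol = solve_ivp(ldf, (0, 120), [0.0], t_eval=np.linspace(0, 120, 7))
for t, q in zip(sol.t, sol.y[0]):
    analytic = q_eq * (1 - np.exp(-k * t))
    print(f"t = {t:5.1f} s   q = {q:.4f}   analytic = {analytic:.4f}")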
Abstract:
The material in genebanks includes valuable traditional varieties and landraces, non-domesticated species, advanced and obsolete cultivars, breeding lines and genetic stocks. It is this wide variety of potentially useful genetic diversity that makes collections valuable. While most of the yield increases to date have resulted from the manipulation of a few major traits (such as height, photoperiodism, and vernalization), meeting future demand for increased yields will require exploitation of novel genetic resources. Many traits have been reported to have the potential to enhance yield, and high expression of these can be found in germplasm collections. To boost yield in irrigated situations, spike fertility must be improved simultaneously with photosynthetic capacity. CIMMYT's Wheat Genetic Resources program has identified a source of multi-ovary florets, with up to 6 kernels per floret. Lines from landrace collections have been identified that have very high chlorophyll concentration, which may increase leaf photosynthetic rate. High chlorophyll concentration and high stomatal conductance are associated with heat tolerance. Recent studies, through augmented use of seed multiplication nurseries, identified high expression of these traits in bank accessions, and both traits were heritable. Searches are underway for drought tolerance traits related to remobilization of stem fructans, awn photosynthesis, osmotic adjustment, and pubescence. Exploiting wild relatives through the production of synthetic wheats has also generated novel genetic diversity.
Abstract:
Motivation: A consensus sequence for a family of related sequences is, as the name suggests, a sequence that captures the features common to most members of the family. Consensus sequences are important in various DNA sequencing applications and are a convenient way to characterize a family of molecules. Results: This paper describes a new algorithm for finding a consensus sequence, using the popular optimization method known as simulated annealing. Unlike the conventional approach of finding a consensus sequence by first forming a multiple sequence alignment, this algorithm searches for a sequence that minimises the sum of pairwise distances to each of the input sequences. The resulting consensus sequence can then be used to induce a multiple sequence alignment. The time required by the algorithm scales linearly with the number of input sequences and quadratically with the length of the consensus sequence. We present results demonstrating the high quality of the consensus sequences and alignments produced by the new algorithm. For comparison, we also present similar results obtained using ClustalW. The new algorithm outperforms ClustalW in many cases.
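The following toy Python sketch illustrates the general idea of annealing over candidate consensus strings, though not the published algorithm itself: for brevity it assumes equal-length input sequences and Hamming distance, whereas the paper's method handles arbitrary sequences and alignment-based pairwise distances.

import math
import random

random.seed(0)
ALPHABET = "ACGT"
seqs = ["ACGTACGT", "ACGAACGT", "ACGTACTT", "TCGTACGT"]

def cost(candidate):
    """Sum of Hamming distances from the candidate to every input sequence."""
    return sum(sum(a != b for a, b in zip(candidate, s)) for s in seqs)

current = list(random.choice(seqs))
best, best_cost = current[:], cost(current)
T = 2.0
for step in range(5000):
    prop = current[:]
    i = random.randrange(len(prop))
    prop[i] = random.choice(ALPHABET)        # random point mutation
    delta = cost(prop) - cost(current)
    if delta <= 0 or random.random() < math.exp(-delta / T):
        current = prop
        if cost(current) < best_cost:
            best, best_cost = current[:], cost(current)
    T *= 0.999                               # geometric cooling schedule

print("".join(best), best_cost)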
Abstract:
Genetic research on the risk of alcohol, tobacco or drug dependence must make allowance for the partial overlap of risk factors for initiation of use and risk factors for dependence or other outcomes in users. Except in the extreme cases where genetic and environmental risk factors for initiation and dependence overlap completely or are uncorrelated, there is no consensus about how best to estimate the magnitude of genetic or environmental correlations between Initiation and Dependence in twin and family data. We explore by computer simulation the biases to estimates of genetic and environmental parameters caused by model misspecification when Initiation can only be defined as a binary variable. For plausible simulated parameter values, the two-stage genetic models that we consider yield estimates of genetic and environmental variances for Dependence that, although biased, are not very discrepant from the true values. However, estimates of genetic (or environmental) correlations between Initiation and Dependence may be seriously biased, and may differ markedly under different two-stage models. Such estimates may have little credibility unless external data favor the selection of one particular model. These problems can be avoided if Initiation can be assessed as a multiple-category variable (e.g. never versus early-onset versus later-onset user), with at least two categories measurable in users at risk for dependence. Under these conditions, and under certain distributional assumptions, recovery of the simulated genetic and environmental correlations becomes possible. An illustrative application of the model to Australian twin data on smoking confirmed substantial heritability of smoking persistence (42%) with minimal overlap with genetic influences on initiation.
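A generic illustration of the underlying data structure (not the authors' two-stage models) is sketched below in Python: two correlated liabilities are simulated, thresholded into binary Initiation and Dependence variables, and Dependence is treated as observable only among initiators. All correlations and thresholds are arbitrary placeholders.

import numpy as np

rng = np.random.default_rng(2)
n = 100_000
r = 0.5                      # assumed correlation between the two liabilities
cov = np.array([[1.0, r],
                [r, 1.0]])
liab = rng.multivariate_normal([0.0, 0.0], cov, size=n)

init = liab[:, 0] > 0.0                        # roughly half ever initiate
dep = np.where(init, liab[:, 1] > 1.0, np.nan) # dependence observed only in users

print("P(initiation)            :", init.mean().round(3))
print("P(dependence | initiated):", np.nanmean(dep).round(3))
# Among never-users, dependence liability exists but is never observed,
# which is why estimates of the initiation-dependence correlation can be
# badly biased under a misspecified two-stage model.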
Abstract:
Koala (Phascolarctos cinereus) populations in eastern Australia are threatened by land clearing for agricultural and urban development. At the same time, conservation efforts are hindered by a dearth of information about inland populations. Faecal deposits offer a source of information that is readily available and easily collected non-invasively. We detail a faecal pellet sampling protocol that was developed for use in a large rangeland biogeographic region. The method samples trees in belt transects, uses a thorough search at the tree base to quickly identify trees with koala pellets under them, then estimates the abundance of faecal pellets under those trees using 1-m² quadrats. There was a strong linear relationship between these estimates and a complete enumeration of pellet abundance under the same trees. We evaluated the accuracy of our method in detecting trees where pellets were present by means of a misclassification index that was weighted more heavily for missed trees that had high numbers of pellets under them. This showed acceptable accuracy in all landforms except riverine, where some trees with large numbers of pellets were missed; here, accuracy in detecting pellet presence was improved by sampling with quadrats rather than basal searches. Finally, we developed a method to reliably age pellets and demonstrate how this protocol could be used with the faecal-standing-crop method to derive a regional estimate of absolute koala abundance.
Abstract:
Read-only-memory-based (ROM-based) quantum computation (QC) is an alternative to oracle-based QC. It has the advantages of being less magical, and being more suited to implementing space-efficient computation (i.e., computation using the minimum number of writable qubits). Here we consider a number of small (one- and two-qubit) quantum algorithms illustrating different aspects of ROM-based QC. They are: (a) a one-qubit algorithm to solve the Deutsch problem; (b) a one-qubit binary multiplication algorithm; (c) a two-qubit controlled binary multiplication algorithm; and (d) a two-qubit ROM-based version of the Deutsch-Jozsa algorithm. For each algorithm we present experimental verification using nuclear magnetic resonance ensemble QC. The average fidelities for the implementation were in the ranges 0.9-0.97 for the one-qubit algorithms, and 0.84-0.94 for the two-qubit algorithms. We conclude with a discussion of future prospects for ROM-based quantum computation. We propose a four-qubit algorithm, using Grover's iterate, for solving a miniature real-world problem relating to the lengths of paths in a network.
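For readers unfamiliar with the one-qubit problem mentioned in (a), the Python sketch below simulates the standard, oracle-based Deutsch algorithm with a plain statevector: it decides whether f: {0,1} -> {0,1} is constant or balanced using a single oracle call. This is the textbook formulation, not the ROM-based version implemented in the paper.

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)

def oracle(f):
    """U_f |x, y> = |x, y XOR f(x)> as a 4x4 permutation unitary."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1.0
    return U

def deutsch(f):
    state = np.kron(np.array([1.0, 0.0]), np.array([0.0, 1.0]))  # |0>|1>
    state = np.kron(H, H) @ state        # Hadamard on both qubits
    state = oracle(f) @ state            # single oracle call
    state = np.kron(H, I) @ state        # Hadamard on the query qubit
    p1 = np.sum(np.abs(state[2:]) ** 2)  # probability the first qubit is |1>
    return "balanced" if p1 > 0.5 else "constant"

print(deutsch(lambda x: 0))      # constant
print(deutsch(lambda x: x))      # balanced
print(deutsch(lambda x: 1 - x))  # balanced
print(deutsch(lambda x: 1))      # constant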