933 results for Low Autocorrelation Binary Sequence Problem
Abstract:
BACKGROUND AND PURPOSE: Previous studies in the United States and the United Kingdom have shown that stroke research is underfunded compared with coronary heart disease (CHD) and cancer research despite the high clinical and financial burden of stroke. We aimed to determine whether underfunding of stroke research is a Europe-wide problem. METHODS: Data for the financial year 2000 to 2001 were collected from 9 different European countries. Information on stroke, CHD, and cancer research funding awarded by disease-specific charities and nondisease-specific charity or government-funded organizations was obtained from annual reports, web sites, and by direct communication with organizations. RESULTS: There was marked and consistent underfunding of stroke research in all the countries studied. Stroke funding as a percentage of the total funding for stroke, CHD, and cancer was uniformly low, ranging from 2% to 11%. Funding for stroke was less than funding for cancer, usually by a factor of ≥10. In every country except Turkey, funding for stroke research was less than that for CHD. CONCLUSIONS: This study confirms that stroke research is grossly underfunded, compared with CHD and cancer, throughout Europe. Similar data have been obtained from the United States, suggesting that relative underfunding of stroke research is likely to be a worldwide phenomenon.
Abstract:
A patent processus vaginalis peritonei (PPV) typically presents as an indirect hernia with an intact inguinal canal floor during childhood. Little is known, however, about PPV in adults and its optimal treatment. A cohort study included all consecutive patients admitted for ambulatory open hernia repair. In patients with a PPV, demographics, hernia characteristics, and outcome were prospectively assessed. Annulorrhaphy was the treatment of choice in patients with an internal inguinal ring diameter of < 30 mm. Between 1998 and 2006, 92 PPVs (two bilateral) were diagnosed in 676 open hernia repairs (incidence of 14%). Eighty-nine of the 90 patients were male; the median age was 34 years (range: 17-85). A PPV was right-sided in 67% and partially obliterated in 66%. Forty-one patients had an annulorrhaphy and 51 patients had a tension-free mesh repair. The median operation time was significantly shorter in the annulorrhaphy group (38 vs. 48 min, P < .0001). Over a median follow-up period of 56 months (27-128), the two groups did not differ with respect to recurrence (1/41 vs. 2/51), chronic pain (3/41 vs. 4/51), or hypoesthesia (5/41 vs. 9/51). There was, however, a clear trend toward fewer neuropathic symptoms in favor of annulorrhaphy (0/41 vs. 5/51, P < 0.066). PPV occurs in 14% of adults undergoing hernia repair. In selected patients, annulorrhaphy takes less time and is associated with equally low recurrence but less potential for neuropathic symptoms.
Abstract:
Lymph node cells derived from A.TH or A.TL mice primed with beef cytochrome c show striking patterns of reactivity when assayed in vitro for antigen-induced T cell proliferation. Whereas cells from A.TH mice respond specifically to beef cytochrome c or peptides composed of amino acids 1-65 and 81-104, cells from A.TL mice respond neither to beef cytochrome c nor to peptide 1-65, but proliferate following exposure to either peptide 81-104 or to a cytochrome c hybrid molecule in which the N-terminal peptide of beef (1-65) was substituted by a similar peptide obtained from rabbit cytochrome c. Thus, T cells from mice phenotypically unresponsive to beef cytochrome c may, in fact, contain populations of lymphocytes capable of responding to a unique peptide, the response to which is totally inhibited when the same fragment is presented in the sequence of the intact protein.
Abstract:
In a weighted spatial network, as specified by an exchange matrix, the variances of the spatial values are inversely proportional to the size of the regions. Spatial values are no longer exchangeable under independence, thus weakening the rationale for ordinary permutation and bootstrap tests of spatial autocorrelation. We propose an alternative permutation test for spatial autocorrelation, based upon exchangeable spatial modes, constructed as linear orthogonal combinations of spatial values. The coefficients are obtained as eigenvectors of the standardised exchange matrix appearing in spectral clustering, and generalise to the weighted case the concept of spatial filtering for connectivity matrices. Also, two proposals aimed at transforming an accessibility matrix into an exchange matrix with a priori fixed margins are presented. Two examples (inter-regional migratory flows and binary adjacency networks) illustrate the formalism, rooted in the theory of spectral decomposition for reversible Markov chains.
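As a rough illustration of how such a modes-based permutation test can be organised, the sketch below builds a standardised exchange matrix, takes its eigenvectors as spatial modes, and permutes the resulting mode scores inside a Moran-like statistic. The particular standardisation (E - ff')/sqrt(f_i f_j), the chosen statistic, and the function name `modes_permutation_test` are illustrative assumptions, not necessarily the authors' exact formulation.

```python
import numpy as np

def modes_permutation_test(E, x, n_perm=999, seed=0):
    """Sketch of a modes-based permutation test for spatial autocorrelation.

    E : (n, n) exchange matrix (symmetric, non-negative, entries summing to 1).
    x : (n,) field of spatial values observed on the n regions.
    """
    rng = np.random.default_rng(seed)
    f = E.sum(axis=1)                              # regional weights (margins of E)
    # Standardised exchange matrix, as used in spectral clustering (assumed form).
    Es = (E - np.outer(f, f)) / np.sqrt(np.outer(f, f))
    eigval, U = np.linalg.eigh(Es)                 # orthonormal eigenvectors = spatial modes
    a = U.T @ (np.sqrt(f) * x)                     # mode scores: orthogonal combinations of the field

    def stat(scores):
        # Moran-like autocorrelation index written in the mode basis.
        return np.sum(eigval * scores**2) / np.sum(scores**2)

    obs = stat(a)
    # Under the null the mode scores are treated as exchangeable, so permuting them
    # (reallocating scores to eigenvalues) yields the reference distribution.
    perms = np.array([stat(rng.permutation(a)) for _ in range(n_perm)])
    p_value = (1 + np.sum(perms >= obs)) / (n_perm + 1)
    return obs, p_value
```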
Abstract:
1. Identifying the boundary of a species' niche from observational and environmental data is a common problem in ecology and conservation biology, and a variety of techniques have been developed or applied to model niches and predict distributions. Here, we examine the performance of some pattern-recognition methods as ecological niche models (ENMs). In particular, one-class pattern recognition is a flexible and seldom-used methodology for modelling ecological niches and distributions from presence-only data. The development of one-class methods that perform comparably to two-class methods (for presence/absence data) would remove modelling decisions about sampling pseudo-absences or background data points when absence points are unavailable. 2. We studied nine methods for one-class classification and seven methods for two-class classification (five common to both), all primarily used in pattern recognition and therefore not common in species distribution and ecological niche modelling, across a set of 106 mountain plant species for which presence-absence data were available. We assessed accuracy using standard metrics and compared trade-offs in omission and commission errors between classification groups, as well as effects of prevalence and spatial autocorrelation on accuracy. 3. One-class models fit to presence-only data were comparable to two-class models fit to presence-absence data when performance was evaluated with a measure weighting omission and commission errors equally. One-class models were superior for reducing omission errors (i.e. yielding higher sensitivity), and two-class models were superior for reducing commission errors (i.e. yielding higher specificity). For these methods, spatial autocorrelation was only influential when prevalence was low. 4. These results differ from previous efforts to evaluate alternative modelling approaches to build ENMs and are particularly noteworthy because the data are from exhaustively sampled populations, minimizing false-absence records. Accurate, transferable models of species' ecological niches and distributions are needed to advance ecological research and are crucial for effective environmental planning and conservation; the pattern-recognition approaches studied here show good potential for future modelling studies. This study also provides an introduction to promising methods for ecological modelling inherited from the pattern-recognition discipline.
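To make the one-class versus two-class comparison concrete, here is a minimal sketch that fits a one-class model to presence records only and a two-class model to presences and absences, then scores both by sensitivity (omission) and specificity (commission). The one-class SVM, the logistic regression, and the simulated environmental data are illustrative assumptions; they are not the nine and seven methods evaluated in the study.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)

# Toy environmental data: two predictors, presence where both are "suitable".
X = rng.normal(size=(1000, 2))
y = ((X[:, 0] > 0) & (X[:, 1] > -0.5)).astype(int)   # 1 = presence, 0 = absence
train = rng.random(len(y)) < 0.7
X_tr, y_tr, X_te, y_te = X[train], y[train], X[~train], y[~train]

# One-class model: trained on presence records only.
occ = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(X_tr[y_tr == 1])
pred_one = (occ.predict(X_te) == 1).astype(int)

# Two-class model: trained on presences and absences.
two = LogisticRegression().fit(X_tr, y_tr)
pred_two = two.predict(X_te)

for name, pred in [("one-class", pred_one), ("two-class", pred_two)]:
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    sens, spec = tp / (tp + fn), tn / (tn + fp)       # omission vs. commission errors
    print(f"{name}: sensitivity={sens:.2f} specificity={spec:.2f}")
```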
Abstract:
Ordering in a binary alloy is studied by means of a molecular-dynamics (MD) algorithm which makes it possible to reach the domain growth regime. Results are compared with Monte Carlo simulations using a realistic vacancy-atom (MC-VA) mechanism. At low temperatures, fast growth with a dynamical exponent x>1/2 is found for both MD and MC-VA. The study of a nonequilibrium ordering process with the two methods shows the importance of the nonhomogeneity of the excitations in the system for determining its macroscopic kinetics.
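For readers unfamiliar with the vacancy-atom mechanism, the sketch below implements a bare-bones Metropolis Monte Carlo version of it for an ordering AB alloy on a square lattice: the single vacancy exchanges places with a randomly chosen nearest-neighbour atom, with acceptance driven by a nearest-neighbour ordering energy. The Hamiltonian, lattice size, temperature, and step count are placeholder assumptions; this is neither the authors' MD algorithm nor their realistic MC-VA implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
L, J, T = 32, 1.0, 0.5                     # lattice size, ordering coupling, temperature (arbitrary units)
moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]

# Equiatomic AB alloy on a square lattice: +1 = A, -1 = B, 0 = the single vacancy.
sites = np.r_[np.ones(L * L // 2), -np.ones(L * L // 2 - 1), [0.0]]
lattice = rng.permutation(sites).reshape(L, L)
vac = tuple(np.argwhere(lattice == 0)[0])

for step in range(200_000):
    # Vacancy-atom mechanism: propose swapping the vacancy with a random nearest neighbour.
    di, dj = moves[rng.integers(4)]
    nbr = ((vac[0] + di) % L, (vac[1] + dj) % L)
    s = lattice[nbr]
    # Neighbour sums around both sites, excluding the exchanged pair itself.
    nb_vac = sum(lattice[(vac[0] + a) % L, (vac[1] + b) % L] for a, b in moves) - s
    nb_nbr = sum(lattice[(nbr[0] + a) % L, (nbr[1] + b) % L] for a, b in moves)  # the vacancy contributes 0
    dE = J * s * (nb_vac - nb_nbr)         # J > 0 favours unlike nearest neighbours (ordering)
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        lattice[vac], lattice[nbr] = s, 0.0
        vac = nbr

# Staggered order parameter: magnitude 1 for a perfectly ordered (checkerboard) state.
checker = (-1.0) ** np.add.outer(np.arange(L), np.arange(L))
print("long-range order:", abs(np.sum(checker * lattice)) / (L * L - 1))
```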
Abstract:
In an attempt to solve the bridge problem faced by many county engineers, this investigation focused on a low-cost bridge alternative that consists of using railroad flatcars (RRFC) as the bridge superstructure. The intent of this study was to determine whether these types of bridges are structurally adequate and potentially feasible for use on low-volume roads. A questionnaire was sent to the Bridge Committee members of the American Association of State Highway and Transportation Officials (AASHTO) to determine their use of RRFC bridges and to assess the pros and cons of these bridges based on others’ experiences. It was found that these types of bridges are widely used in many states with large rural populations, and they are reported to be a viable bridge alternative due to their low cost, quick and easy installation, and low maintenance. A main focus of this investigation was to study an existing RRFC bridge that is located in Tama County, IA. This bridge was analyzed using computer modeling and field load testing. The dimensions of the major structural members of the flatcars in this bridge were measured and their properties calculated and used in an analytical grillage model. The analytical results were compared with those obtained in the field tests, which involved instrumenting the bridge and loading it with a fully loaded rear tandem-axle truck. Both sets of data (experimental and theoretical) show that the Tama County Bridge (TCB) experienced very low strains and deflections when loaded, and the RRFCs appeared to be structurally adequate to serve as a bridge superstructure. A calculated load rating of the TCB agrees with this conclusion. Because many different types of flatcars exist, other flatcars were modeled and analyzed. It was very difficult to obtain the structural plans of RRFCs; thus, only two additional flatcars were analyzed. The results of these analyses also yielded very low strains and displacements. Taking into account the experiences of other states, the inspection of several RRFC bridges in Oklahoma, the field test and computer analysis of the TCB, and the computer analysis of two additional flatcars, RRFC bridges appear to provide a safe and feasible bridge alternative for low-volume roads.
Abstract:
We have performed a detailed study of the zenith angle dependence of the regeneration factor and distributions of events at SNO and SK for different solutions of the solar neutrino problem. In particular, we discuss the oscillatory behavior and the synchronization effect in the distribution for the LMA solution, the parametric peak for the LOW solution, etc. A physical interpretation of the effects is given. We suggest a new binning of events which emphasizes the distinctive features of the zenith angle distributions for the different solutions. We also find the correlations between the integrated day-night asymmetry and the rates of events in different zenith angle bins. The study of these correlations strengthens the identification power of the analysis.
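As a point of reference, the integrated day-night asymmetry discussed here is commonly defined as A_DN = 2(N - D)/(N + D), with N and D the night and day rates; the snippet below computes it together with event rates in a few night-side zenith-angle bins. The sign convention for cos(theta_z) and the bin edges are assumptions for illustration, not the binning proposed in the paper.

```python
import numpy as np

def day_night_asymmetry(cos_zenith, weights=None, night_edges=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0)):
    """Integrated day-night asymmetry and per-bin night rates from solar zenith angles.

    Convention assumed here: cos(theta_z) > 0 means the Sun is below the horizon
    (night, neutrinos cross the Earth). The bin edges are illustrative.
    """
    cz = np.asarray(cos_zenith, dtype=float)
    w = np.ones_like(cz) if weights is None else np.asarray(weights, dtype=float)
    night, day = w[cz > 0].sum(), w[cz <= 0].sum()
    a_dn = 2.0 * (night - day) / (night + day)               # integrated asymmetry
    night_bins, _ = np.histogram(cz[cz > 0], bins=night_edges, weights=w[cz > 0])
    return a_dn, night_bins
```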
Abstract:
Executive Summary The unifying theme of this thesis is the pursuit of satisfactory ways to quantify the risk-reward trade-off in financial economics. First in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broad scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we apply an idea from the field of fuzzy set theory to the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results in asset pricing in incomplete markets to the important topic of measurement of financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third one can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model, based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative agent model to address some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings. The empirical investigation to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one. Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex-post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that realized returns feature better distributional characteristics relative to those of realized returns from portfolio strategies optimal with respect to single performance measures. When comparing the distributions of realized returns, we used two partial risk-reward orderings: first- and second-order stochastic dominance.
We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the way we propose to aggregate performance measures leads to portfolio realized returns that first-order stochastically dominate the ones that result from optimization only with respect to, for example, the Treynor ratio and Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, or the sequence of expected shortfalls for a range of quantiles. Given that the plot of the absolute Lorenz curve for the aggregated performance measures was above the one corresponding to each individual measure, we were tempted to conclude that the algorithm we propose leads to a portfolio return distribution that second-order stochastically dominates those obtained from virtually all individual performance measures considered. Chapter 3 proposes a measure of financial integration, based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index, relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests the presence of an inherent weakness in the attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results about the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
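A compact sketch of the two distribution checks described for Chapter 2 (a two-sample Kolmogorov-Smirnov test, followed by a pointwise comparison of empirical expected shortfalls across quantiles, i.e. the absolute Lorenz curve up to scaling) might look as follows. The function name and quantile grid are illustrative, and the pointwise comparison is an empirical check, not a formal test of second-order stochastic dominance.

```python
import numpy as np
from scipy import stats

def dominance_checks(r_agg, r_single, quantiles=np.linspace(0.01, 0.99, 99)):
    """Compare two realized-return samples.

    Returns the two-sample Kolmogorov-Smirnov p-value and a boolean indicating
    whether the empirical expected-shortfall profile of r_agg lies pointwise
    above that of r_single for the chosen quantiles.
    """
    ks = stats.ks_2samp(r_agg, r_single)

    def shortfall_curve(r):
        r = np.sort(np.asarray(r, dtype=float))
        # Expected shortfall at level q: mean of the worst q-fraction of returns.
        return np.array([r[: max(1, int(np.ceil(q * len(r))))].mean() for q in quantiles])

    sosd = np.all(shortfall_curve(r_agg) >= shortfall_curve(r_single))
    return ks.pvalue, sosd
```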
Abstract:
We study the problem of the partition of a system of initial size V into a sequence of fragments s1, s2, s3, .... By assuming a scaling hypothesis for the probability p(s;V) of obtaining a fragment of a given size, we deduce that the final distribution of fragment sizes exhibits power-law behavior. This minimal model is useful for understanding the distribution of avalanche sizes in first-order phase transitions at low temperatures.
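One concrete realisation of such a sequential partition, with a simple assumed scaling choice (p(s;V) uniform on [0, V]), is sketched below: fragments are broken off one by one, and pooling them over many realisations and fitting the log-log histogram gives a rough estimate of the power-law exponent. The uniform splitting rule and all numerical parameters are illustrative, not the paper's general scaling form.

```python
import numpy as np

rng = np.random.default_rng(0)

def fragment(V=1.0, s_min=1e-6):
    """Sequentially break off fragments; each fragment is a uniform random
    fraction of what remains (one simple realisation of the scaling hypothesis)."""
    remaining, sizes = V, []
    while remaining > s_min:
        s = rng.uniform(0.0, remaining)
        sizes.append(s)
        remaining -= s
    return sizes

# Pool fragments from many independent partitions and estimate the power-law tail.
sizes = np.concatenate([fragment() for _ in range(2000)])
hist, edges = np.histogram(sizes, bins=np.logspace(-6, 0, 40))
centers = np.sqrt(edges[:-1] * edges[1:])
density = hist / (np.diff(edges) * len(sizes))
mask = density > 0
slope, _ = np.polyfit(np.log(centers[mask]), np.log(density[mask]), 1)
print(f"estimated power-law exponent of the fragment-size distribution: {slope:.2f}")
```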
Abstract:
At high magnetic field strengths (≥ 3T), the radiofrequency wavelength used in MRI is of the same order of magnitude as (or smaller than) the typical sample size, making transmit magnetic field (B1+) inhomogeneities more prominent. Methods such as radiofrequency-shimming and transmit SENSE have been proposed to mitigate these undesirable effects. A prerequisite for such approaches is an accurate and rapid characterization of the B1+ field in the organ of interest. In this work, a new phase-sensitive three-dimensional B1+-mapping technique is introduced that allows the acquisition of a 64 × 64 × 8 B1+-map in ≈ 20 s, yielding an accurate mapping of the relative B1+ with a 10-fold dynamic range (0.2-2 times the nominal B1+). Moreover, the predominant use of low flip angle excitations in the presented sequence minimizes the specific absorption rate, which is an important asset for in vivo B1+-shimming procedures at high magnetic fields. The proposed methodology was validated in phantom experiments and demonstrated good results in phantom and human B1+-shimming using an 8-channel transmit-receive array.
Abstract:
A common way to model multiclass classification problems is by means of Error-Correcting Output Codes (ECOCs). Given a multiclass problem, the ECOC technique designs a code word for each class, where each position of the code identifies the membership of the class for a given binary problem. A classification decision is obtained by assigning the label of the class with the closest code. One of the main requirements of the ECOC design is that the base classifier is capable of splitting each subgroup of classes for each binary problem. However, we cannot guarantee that a linear classifier can model convex regions. Furthermore, nonlinear classifiers also fail to manage some types of surfaces. In this paper, we present a novel strategy to model multiclass classification problems using subclass information in the ECOC framework. Complex problems are solved by splitting the original set of classes into subclasses and embedding the binary problems in a problem-dependent ECOC design. Experimental results show that the proposed splitting procedure yields a better performance when the class overlap or the distribution of the training objects conceals the decision boundaries for the base classifier. The results are even more significant when one has a sufficiently large training size.
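The basic ECOC machinery described above (a code word per class, one binary dichotomizer per code position, decoding by the closest code word) can be sketched in a few lines. The one-vs-all code matrix, the logistic-regression base classifier, and the Iris data are illustrative assumptions; the subclass-splitting strategy proposed in the paper is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

classes = np.unique(y_tr)
# Code matrix: one row (code word) per class, one column per binary problem.
# Here a simple one-vs-all design in {-1, +1}; denser or random designs also fit this scheme.
code = -np.ones((len(classes), len(classes)))
np.fill_diagonal(code, 1)

# Train one base classifier (dichotomizer) per column, i.e. per binary problem.
dichotomizers = []
for col in range(code.shape[1]):
    y_bin = np.array([code[np.searchsorted(classes, c), col] for c in y_tr])
    dichotomizers.append(LogisticRegression(max_iter=1000).fit(X_tr, y_bin))

# Decode: predict each binary problem, then assign the class whose code word is closest (Hamming distance).
pred_cols = np.column_stack([clf.predict(X_te) for clf in dichotomizers])
hamming = (pred_cols[:, None, :] != code[None, :, :]).sum(axis=2)
y_pred = classes[np.argmin(hamming, axis=1)]
print("test accuracy:", np.mean(y_pred == y_te))
```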
Abstract:
The ability to determine the location and relative strength of all transcription-factor binding sites in a genome is important both for a comprehensive understanding of gene regulation and for effective promoter engineering in biotechnological applications. Here we present a bioinformatically driven experimental method to accurately define the DNA-binding sequence specificity of transcription factors. A generalized profile was used as a predictive quantitative model for binding sites, and its parameters were estimated from in vitro-selected ligands using standard hidden Markov model training algorithms. Computer simulations showed that several thousand low- to medium-affinity sequences are required to generate a profile of desired accuracy. To produce data on this scale, we applied high-throughput genomics methods to the biochemical problem addressed here. A method combining systematic evolution of ligands by exponential enrichment (SELEX) and serial analysis of gene expression (SAGE) protocols was coupled to an automated quality-controlled sequence extraction procedure based on Phred quality scores. This allowed the sequencing of a database of more than 10,000 potential DNA ligands for the CTF/NFI transcription factor. The resulting binding-site model defines the sequence specificity of this protein with a high degree of accuracy not achieved earlier and thereby makes it possible to identify previously unknown regulatory sequences in genomic DNA. A covariance analysis of the selected sites revealed non-independent base preferences at different nucleotide positions, providing insight into the binding mechanism.
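As a simplified stand-in for the generalized-profile approach, the sketch below estimates a position weight matrix from a handful of aligned, made-up selected ligands and uses it to score windows of a genomic sequence. The HMM-based parameter estimation, the SELEX-SAGE pipeline, and the Phred-based quality control are not reproduced here, and all sequences and function names are illustrative.

```python
import numpy as np

BASES = "ACGT"

def pwm_from_sites(sites, pseudocount=0.5):
    """Position weight matrix (log-odds vs. a uniform background) from aligned
    binding sites; a simple stand-in for a generalized profile."""
    L = len(sites[0])
    counts = np.full((L, 4), pseudocount)
    for site in sites:
        for pos, base in enumerate(site.upper()):
            counts[pos, BASES.index(base)] += 1
    freqs = counts / counts.sum(axis=1, keepdims=True)
    return np.log2(freqs / 0.25)

def best_hits(pwm, sequence, top=3):
    """Score every window of a sequence and return the best-scoring positions."""
    L = pwm.shape[0]
    scores = [(sum(pwm[i, BASES.index(b)] for i, b in enumerate(sequence[p:p + L])), p)
              for p in range(len(sequence) - L + 1)]
    return sorted(scores, reverse=True)[:top]

# Toy usage with made-up selected ligands (real input would be the ~10,000 SELEX-SAGE sequences).
sites = ["TTGGCTTTTGCCAA", "TTGGCAAATGCCAA", "TTGGCTAATGCCAA"]
pwm = pwm_from_sites(sites)
print(best_hits(pwm, "ACGTTTGGCTTATGCCAATCGATCG"))
```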
Abstract:
One major methodological problem in analysis of sequence data is the determination of costs from which distances between sequences are derived. Although this problem is currently not optimally dealt with in the social sciences, it has some similarity with problems that have been solved in bioinformatics for three decades. In this article, the authors propose an optimization of substitution and deletion/insertion costs based on computational methods. The authors provide an empirical way of determining costs for cases, frequent in the social sciences, in which theory does not clearly promote one cost scheme over another. Using three distinct data sets, the authors tested the distances and cluster solutions produced by the new cost scheme in comparison with solutions based on cost schemes associated with other research strategies. The proposed method performs well compared with other cost-setting strategies, while it alleviates the justification problem of cost schemes.
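One common data-driven scheme of this kind derives substitution costs from observed transition rates (cost(a, b) = 2 - p(b|a) - p(a|b)) and feeds them into an optimal-matching (edit) distance with a fixed indel cost. The sketch below is in that spirit but is not necessarily the authors' optimized cost scheme; the toy state space and sequences are made up.

```python
import numpy as np
from itertools import product

def substitution_costs(sequences, states):
    """Data-driven substitution costs from observed transition rates:
    cost(a, b) = 2 - p(b|a) - p(a|b)."""
    idx = {s: i for i, s in enumerate(states)}
    trans = np.zeros((len(states), len(states)))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            trans[idx[a], idx[b]] += 1
    rates = trans / np.maximum(trans.sum(axis=1, keepdims=True), 1)
    costs = 2 - rates - rates.T
    np.fill_diagonal(costs, 0)
    return costs, idx

def om_distance(s1, s2, costs, idx, indel=1.0):
    """Optimal-matching (edit) distance with the given substitution and indel costs."""
    d = np.zeros((len(s1) + 1, len(s2) + 1))
    d[:, 0] = indel * np.arange(len(s1) + 1)
    d[0, :] = indel * np.arange(len(s2) + 1)
    for i, j in product(range(1, len(s1) + 1), range(1, len(s2) + 1)):
        d[i, j] = min(d[i - 1, j] + indel,
                      d[i, j - 1] + indel,
                      d[i - 1, j - 1] + costs[idx[s1[i - 1]], idx[s2[j - 1]]])
    return d[-1, -1]

# Toy employment-history sequences over states E(mployed), U(nemployed), S(chool).
seqs = ["SSSEEEEUUEE", "SSEEEEEEEEE", "SSSUUUEEEEE"]
costs, idx = substitution_costs(seqs, states="EUS")
print(om_distance(seqs[0], seqs[1], costs, idx))
```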
Abstract:
The method of instrumental variables (referred to as Mendelian randomization when the instrument is a genetic variant) was initially developed to infer a causal effect of a risk factor on some outcome of interest in a linear model. Adapting this method to nonlinear models, however, is known to be problematic. In this paper, we consider the simple case when the genetic instrument, the risk factor, and the outcome are all binary. We compare via simulations the usual two-stage estimate of a causal odds-ratio and its adjusted version with a recently proposed estimate in the context of a clinical trial with noncompliance. In contrast to the former two, we confirm that the latter is (under some conditions) a valid estimate of a causal odds-ratio defined in the subpopulation of compliers, and we propose its use in the context of Mendelian randomization. By analogy with a clinical trial with noncompliance, compliers are those individuals for whom the presence/absence of the risk factor X is determined by the presence/absence of the genetic variant Z (i.e., for whom we would observe X = Z whatever the alleles randomly received at conception). We also recall and illustrate the huge variability of instrumental variable estimates when the instrument is weak (i.e., with a low percentage of compliers, as is typically the case with genetic instruments, for which this proportion is frequently smaller than 10%): the inter-quartile range of our simulated estimates was up to 18 times higher compared with a conventional (e.g., intention-to-treat) approach. We thus conclude that the need to find stronger instruments is probably as important as the need to develop a methodology that allows consistent estimation of a causal odds-ratio.
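To illustrate the weak-instrument point numerically, the sketch below simulates an all-binary setting with a complier structure (X = Z for compliers, X driven by a confounder otherwise) and reports the inter-quartile range of a Wald-type instrumental-variable estimate for two complier proportions. For simplicity it targets a causal risk difference rather than the causal odds-ratio estimators compared in the paper, and all parameter values are made up.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_iv(n=5000, p_complier=0.1):
    """All-binary Mendelian-randomization-style simulation with a complier structure."""
    u = rng.random(n) < 0.5                         # unmeasured confounder
    z = rng.random(n) < 0.3                         # genetic instrument
    complier = rng.random(n) < p_complier           # compliers follow Z; the rest follow the confounder
    x = np.where(complier, z, u)
    y = rng.random(n) < (0.1 + 0.2 * x + 0.3 * u)   # outcome depends on X and U
    # Wald-type IV estimate of the causal risk difference in compliers:
    # effect of Z on Y divided by effect of Z on X.
    num = y[z].mean() - y[~z].mean()
    den = x[z].mean() - x[~z].mean()
    return num / den

for p in (0.05, 0.5):
    est = np.array([simulate_iv(p_complier=p) for _ in range(500)])
    q1, q3 = np.percentile(est, [25, 75])
    print(f"complier proportion {p}: IQR of IV estimates = {q3 - q1:.2f}")
```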