960 results for Maximum-entropy selection criterion
Abstract:
The objective of this paper is to present a benefit-cost ranking of 127 civil transport aircraft. This ranking was determined using a new data envelopment analysis (DEA) approach, called the triple index, which combines three assessment methods: 1) the standard frontier; 2) the inverted index; and 3) the cross-multiplicative index. The analysis used as inputs: a) market price and b) direct operating costs; and as outputs: a) payload, b) cruise speed, and c) maximum rate of climb with a single engine. To ensure the homogeneity of the units, the aircraft were divided according to propulsion system (jet and turboprop) and size (regional, narrow-body, and wide-body); they were also evaluated over different ranges in order to identify the aircraft with the best cost-benefit relationship for each option.
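As a rough illustration of how the triple index might merge the three assessments, the sketch below combines precomputed standard-frontier efficiencies, inverted-frontier efficiencies, and mean cross-efficiencies into a single composite score. The multiplicative combination and the example values are assumptions for illustration only, not the authors' exact formulation.

```python
import numpy as np

# Hypothetical efficiency scores for five aircraft (values assumed):
# standard frontier (higher is better), inverted frontier (higher means
# closer to the worst-practice frontier), and mean cross-efficiency.
e_std   = np.array([1.00, 0.85, 0.92, 0.70, 0.95])
e_inv   = np.array([0.60, 0.90, 0.75, 1.00, 0.65])
e_cross = np.array([0.80, 0.70, 0.85, 0.55, 0.90])

# One plausible multiplicative "triple index": reward high standard and
# cross-efficiency, penalize proximity to the inverted frontier.
triple = e_std * (1.0 - e_inv) * e_cross
ranking = np.argsort(-triple)          # best aircraft first
print("rank order:", ranking, "scores:", triple.round(3))
```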
Abstract:
The starting point of this article is the question "How can we retrieve fingerprints of rhythm in written texts?" We address this problem in the case of Brazilian and European Portuguese. These two dialects of Modern Portuguese share the same lexicon, and most of the sentences they produce are superficially identical. Yet they are conjectured, on linguistic grounds, to implement different rhythms. We show that this linguistic question can be formulated as a problem of model selection in the class of variable length Markov chains. To carry out this approach, we compare texts from European and Brazilian Portuguese. These texts are first encoded according to some basic rhythmic features of the sentences, which can be automatically retrieved. This is an entirely new approach from the linguistic point of view. Our statistical contribution is the introduction of the smallest maximizer criterion, which is a constant-free procedure for model selection. As a by-product, this provides a solution to the problem of the optimal choice of the penalty constant when using the BIC to select a variable length Markov chain. Besides proving the consistency of the smallest maximizer criterion as the sample size diverges, we also present a simulation study comparing our approach with both standard BIC selection and the Peres-Shields order estimation. Applied to the linguistic sample constituted for our case study, the smallest maximizer criterion assigns different context-tree models to the two dialects of Portuguese. The features of the selected models are compatible with current conjectures discussed in the linguistic literature.
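The core idea of scanning the BIC penalty constant can be sketched on ordinary fixed-order Markov chains (a simplification of variable-length context trees). For each constant c, the penalized likelihood selects a model; the distinct "champion" models that appear as c grows are the candidates among which the smallest maximizer criterion chooses. This is a simplified sketch of that scan, not the authors' full procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulate a binary order-1 Markov chain
P = {0: 0.8, 1: 0.3}                      # P(next = 1 | current state)
x = [0]
for _ in range(5000):
    x.append(int(rng.random() < P[x[-1]]))

def loglik(seq, k):
    """Maximized log-likelihood of an order-k chain (k = 0: iid)."""
    counts = {}
    for i in range(k, len(seq)):
        ctx = tuple(seq[i - k:i])
        counts.setdefault(ctx, [0, 0])[seq[i]] += 1
    ll = 0.0
    for n0, n1 in counts.values():
        n = n0 + n1
        for c in (n0, n1):
            if c:
                ll += c * np.log(c / n)
    return ll

n = len(x)
orders = range(4)
ll = [loglik(x, k) for k in orders]
df = [2 ** k for k in orders]             # free parameters, binary alphabet
for c in [0.1, 0.5, 1.0, 2.0, 5.0]:       # scan the penalty constant
    scores = [ll[k] - c * df[k] * np.log(n) for k in orders]
    print(f"c = {c:4.1f}  selected order = {int(np.argmax(scores))}")
```

An order that stays selected over a wide interval of c (here, the true order 1) plays the role of a champion model; the criterion formalizes how to pick the smallest one whose likelihood is asymptotically maximal.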
Abstract:
Complexity in time series is an intriguing feature of living dynamical systems, with potential use for identification of system state. Although various methods have been proposed for measuring physiologic complexity, uncorrelated time series are often assigned high values of complexity, erroneously classifying them as complex physiological signals. Here, we propose and discuss a method for complex system analysis based on a generalized statistical formalism and surrogate time series. Sample entropy (SampEn) was rewritten, inspired by Tsallis generalized entropy, as a function of the q parameter (qSampEn). qSDiff curves were calculated, which consist of the differences between the qSampEn of the original and surrogate series. We evaluated qSDiff for 125 real heart rate variability (HRV) recordings, divided into groups of 70 healthy, 44 congestive heart failure (CHF), and 11 atrial fibrillation (AF) subjects, and for simulated series from stochastic and chaotic processes. The evaluations showed that, for nonperiodic signals, qSDiff curves have a maximum point (qSDiff_max) at q ≠ 1. The values of q at which the maximum occurs and at which qSDiff is zero were also evaluated. Only qSDiff_max values were capable of distinguishing the HRV groups (p-values 5.10 × 10^-3, 1.11 × 10^-7, and 5.50 × 10^-7 for healthy vs. CHF, healthy vs. AF, and CHF vs. AF, respectively), consistent with the concept of physiologic complexity, which suggests a potential use for chaotic system analysis. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4758815]
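A minimal sketch of the ingredients follows, assuming a plain shuffled surrogate and taking the q-deformation of SampEn to be the Tsallis q-logarithm applied to the template-match ratio; the paper's exact qSampEn definition and the sign convention for qSDiff may differ.

```python
import numpy as np

def sampen_counts(x, m=2, r_frac=0.2):
    """Return (A, B): counts of template-pair matches of length m+1 and m."""
    x = np.asarray(x, float)
    r = r_frac * x.std()
    def count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        c = 0
        for i in range(len(templ) - 1):
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            c += np.sum(d <= r)          # Chebyshev distance within r
        return c
    return count(m + 1), count(m)

def q_log(z, q):
    """Tsallis q-logarithm; reduces to ln(z) as q -> 1."""
    return np.log(z) if abs(q - 1) < 1e-12 else (z ** (1 - q) - 1) / (1 - q)

def q_sampen(x, q, m=2, r_frac=0.2):
    A, B = sampen_counts(x, m, r_frac)
    return -q_log(A / B, q)              # classic SampEn at q = 1

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(1000))   # correlated toy series
surr = rng.permutation(x)                  # single shuffled surrogate
qs = np.linspace(0.2, 2.0, 10)
# qSDiff: surrogate minus original (sign convention assumed here);
# averaging over many surrogates would be more robust.
qsdiff = [q_sampen(surr, q) - q_sampen(x, q) for q in qs]
print("qSDiff_max =", round(max(qsdiff), 3))
```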
Abstract:
Most biological systems are formed by component parts that are to some degree interrelated. Groups of parts that are more associated among themselves and relatively autonomous from others are called modules. One consequence of modularity is that biological systems usually present an unequal distribution of genetic variation among traits. Estimating the covariance matrix that describes these systems is a difficult problem due to factors such as small sample sizes and measurement error. We show that this problem is exacerbated whenever matrix inversion is required, as in directional selection reconstruction analysis. We explore the consequences of varying degrees of modularity and signal-to-noise ratio on selection reconstruction. We then present and test the efficiency of available methods for controlling noise in matrix estimates. In our simulations, controlling matrices for noise vastly improves the reconstruction of selection gradients. We also perform an analysis of selection gradient reconstruction on a New World monkey skull database to illustrate the impact of noise on such analyses. Noise-controlled estimates yield far more plausible interpretations, in full agreement with previous results.
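To see why noise plus inversion is dangerous and how noise control helps, the toy simulation below reconstructs a selection gradient β from the Lande equation s = Pβ using a noisy sample covariance, with and without a crude shrinkage step. The shrinkage target and intensity are assumptions, simpler than the methods actually tested in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
k = 20                                   # number of traits
# Modular "true" covariance: two blocks of correlated traits
P = np.zeros((k, k))
P[:10, :10] = 0.6
P[10:, 10:] = 0.6
np.fill_diagonal(P, 1.0)

beta_true = rng.standard_normal(k)       # true selection gradient
s = P @ beta_true                        # selection differential: s = P beta

# Noisy sample covariance from few individuals
X = rng.multivariate_normal(np.zeros(k), P, size=40)
P_hat = np.cov(X, rowvar=False)

def shrink(C, delta=0.3):
    """Shrink off-diagonals toward zero (a crude noise control)."""
    return (1 - delta) * C + delta * np.diag(np.diag(C))

for name, M in [("raw", P_hat), ("shrunk", shrink(P_hat))]:
    b = np.linalg.solve(M, s)            # reconstructed gradient
    r = np.corrcoef(b, beta_true)[0, 1]
    print(f"{name:6s} corr(beta_hat, beta_true) = {r:.3f}")
```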
Abstract:
A combination of an extension of the topological instability "λ criterion" and the "average electronegativity" has recently been reported in the literature to predict compositions with high glass-forming ability (GFA). In the present work, both criteria were applied to select the Ni61.0Nb36.0B3 alloy as a composition with high glass-forming ability. Ingots were prepared by arc-melting and used to produce ribbons by the melt-spinning technique, which were further characterized by differential scanning calorimetry (DSC), X-ray diffraction (XRD), and scanning electron microscopy (SEM). The Ni61.0Nb36.0B3 alloy showed complete amorphization and a supercooled liquid region of ΔTx = 68 K. In addition, wedge-shaped samples were prepared by copper mold casting in order to determine the critical thickness for amorphous formation. SEM revealed that fully amorphous samples could be obtained up to a thickness of ~800 µm.
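For the electronegativity half of the selection procedure, a composition-weighted average can be computed directly, as sketched below with Pauling electronegativity values; whether the reported criterion uses Pauling values or a different weighting is an assumption here.

```python
# Composition-weighted average electronegativity for Ni61Nb36B3.
# Pauling electronegativities; the paper's "average electronegativity"
# criterion may weight or normalize differently.
chi = {"Ni": 1.91, "Nb": 1.60, "B": 2.04}
at_frac = {"Ni": 0.61, "Nb": 0.36, "B": 0.03}
avg = sum(at_frac[el] * chi[el] for el in chi)
print(f"average electronegativity = {avg:.3f}")
```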
Abstract:
OBJECTIVE: To define and compare the numbers and types of occlusal contacts in maximum intercuspation. METHODS: The study consisted of clinical and photographic analysis of occlusal contacts in maximum intercuspation. Twenty-six Caucasian Brazilian subjects (20 males and 6 females, aged 12 to 18 years) were selected before orthodontic treatment. The subjects were diagnosed and grouped as follows: 13 with Angle Class I malocclusion and 13 with Angle Class II Division 1 malocclusion. After analysis, the occlusal contacts were classified according to established criteria as: tripodism, bipodism, or monopodism (three, two, or one contact point with the slope of the fossa, respectively); cusp to one marginal ridge; cusp to two marginal ridges; cusp tip to opposing inclined plane; surface to surface; and edge to edge. RESULTS: The mean number of occlusal contacts per subject was 43.38 in Class I malocclusion and 44.38 in Class II Division 1 malocclusion; this difference was not statistically significant (p>0.05). CONCLUSIONS: A variety of factors influence the number of occlusal contacts in Class I and Class II Division 1 malocclusions. There was no standardization of occlusal contact type across the studied malocclusions. The proper selection of occlusal contact types, such as cusp to fossa or cusp to marginal ridge, and their location on the teeth should be defined individually according to the demands of each case. Adequate occlusal contacts lead to a correct distribution of forces, promoting periodontal health.
Abstract:
Most studies on selection in plants estimate female fitness components and neglect male mating success, although the latter may also be fundamental to understanding adaptive evolution. Information from molecular genetic markers can be used to assess determinants of male mating success through parentage analyses. We estimated paternal selection gradients on floral traits in a large natural population of the herb Mimulus guttatus using a paternity probability model and maximum likelihood methods. This analysis revealed more significant selection gradients than a previous analysis based on regression of estimated male fertilities on floral traits. There were differences between the results of univariate and multivariate analyses, most likely due to the underlying covariance structure of the traits. Multivariate analysis, which corrects for the covariance structure of the traits, indicated that male mating success declined with distance from, and depended on the direction to, the mother plants. Moreover, there was directional selection for plants with fewer open flowers, which have smaller corollas, a smaller anther-stigma separation, more red dots on the corolla, and larger fluctuating asymmetry therein. For most of these traits, however, there was also stabilizing selection, indicating intermediate optima. The large number of significant selection gradients in this study shows that even in relatively large natural populations, where not all males can be sampled, it is possible to detect significant paternal selection gradients, and that such studies can provide valuable information for better understanding adaptive plant evolution.
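The regression-based alternative mentioned above is essentially a Lande-Arnold analysis: regress relative male fitness on standardized traits to obtain directional gradients, and add quadratic terms for stabilizing selection. The sketch below uses simulated data and is a generic illustration, not the study's paternity probability model.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 500, 3                                  # plants, floral traits
Z = rng.standard_normal((n, p))                # standardized trait values
# Toy fitness: directional selection on trait 0, stabilizing on trait 1
w = np.exp(0.4 * Z[:, 0] - 0.3 * Z[:, 1] ** 2 + 0.2 * rng.standard_normal(n))
w_rel = w / w.mean()                           # relative fitness

# Directional gradients beta: linear regression of w_rel on traits
Xlin = np.column_stack([np.ones(n), Z])
beta = np.linalg.lstsq(Xlin, w_rel, rcond=None)[0][1:]

# Quadratic (stabilizing) gradients gamma: add squared terms;
# by convention gamma is twice the fitted quadratic coefficient
Xquad = np.column_stack([Xlin, Z ** 2])
coef = np.linalg.lstsq(Xquad, w_rel, rcond=None)[0]
gamma = 2 * coef[1 + p:]
print("beta =", beta.round(2), " gamma =", gamma.round(2))
```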
Abstract:
The objective of this retrospective study was to assess image quality with pulmonary CT angiography (CTA) at 80 kVp and to identify anthropomorphic parameters other than body weight (BW) to serve as selection criteria for low-dose CTA. Attenuation in the pulmonary arteries, anteroposterior and lateral diameters, cross-sectional area, and soft-tissue thickness of the chest were measured in 100 consecutive patients weighing less than 100 kg who underwent 80 kVp pulmonary CTA. Body surface area (BSA) and contrast-to-noise ratios (CNR) were calculated. Three radiologists rated arterial enhancement, noise, and image quality. Image parameters were compared between patients grouped by BW (group 1: 0-50 kg; groups 2-6: 51-100 kg, in 10 kg increments). CNR was higher in patients weighing less than 60 kg than in the BW groups spanning 71-99 kg (P between 0.025 and <0.001). Subjective rankings of enhancement (P = 0.165-0.605), noise (P = 0.063), and image quality (P = 0.079) did not differ significantly across the patient groups. CNR correlated moderately with weight (R = -0.585), BSA (R = -0.582), cross-sectional area (R = -0.544), and anteroposterior diameter of the chest (R = -0.457; P < 0.001 for all parameters). We conclude that 80 kVp pulmonary CTA provides diagnostic image quality in patients weighing up to 100 kg, and that body weight is a suitable criterion for selecting patients for low-dose pulmonary CTA.
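The two derived quantities in the study can be sketched directly. The ROI values in the usage example are invented, and the Du Bois formula is one common choice for BSA; the study's exact formula and ROI definitions are not stated in the abstract.

```python
def bsa_dubois(weight_kg, height_cm):
    """Body surface area (m^2), Du Bois formula."""
    return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725

def cnr(hu_vessel, hu_background, noise_sd):
    """Contrast-to-noise ratio from ROI measurements (HU)."""
    return (hu_vessel - hu_background) / noise_sd

# Invented example values: a 70 kg, 175 cm patient; pulmonary artery at
# 350 HU, background tissue at 50 HU, image noise SD of 25 HU.
print(f"BSA = {bsa_dubois(70, 175):.2f} m^2")
print(f"CNR = {cnr(350, 50, 25):.1f}")
```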
Abstract:
The motion of lung tumors during respiration makes the accurate delivery of radiation therapy to the thorax difficult because it increases the uncertainty of target position. The adoption of four-dimensional computed tomography (4D-CT) has allowed us to determine how a tumor moves with respiration for each individual patient. Using information acquired during a 4D-CT scan, we can define the target, visualize motion, and calculate dose during the planning phase of the radiotherapy process. One image data set that can be created from the 4D-CT acquisition is the maximum-intensity projection (MIP). The MIP can be used as a starting point to define the volume that encompasses the motion envelope of the moving gross target volume (GTV). Because of the close relationship between the MIP and the final target volume, we investigated four MIP data sets created with different methodologies (three using various 4D-CT sorting implementations, and one using all available cine CT images) to compare target delineation. It has been observed that changing the 4D-CT sorting method leads to the selection of a different collection of images; however, the clinical implications of changing the constituent images of the resultant MIP data set are not clear. There has not been a comprehensive study comparing target delineation based on different 4D-CT sorting methodologies in a patient population. We selected a collection of patients who had previously undergone thoracic 4D-CT scans at our institution and whose lung tumors moved at least 1 cm. We then generated the four MIP data sets and automatically contoured the target volumes. In doing so, we identified cases in which the MIP generated from a 4D-CT sorting process under-represented the motion envelope of the target volume by more than 10% relative to the MIP generated from all of the cine CT images. The 4D-CT methods suffered from duplicate image selection and might not choose maximum-extent images. Based on our results, we suggest using a MIP generated from the full cine CT data set to ensure a representative, inclusive tumor extent and to avoid geometric misses.
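The MIP itself is simply a per-voxel maximum over the respiratory phases. Below is a toy sketch with a synthetic moving target, assuming the phase images are already sorted into a 4D array; array sizes and HU values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy 4D-CT: 10 respiratory phases of a 32 x 64 x 64 volume (z, y, x)
phases = rng.normal(-700, 30, size=(10, 32, 64, 64))   # lung-like HU
# A "tumor" (+600 HU) drifting along y across phases simulates motion
for t in range(10):
    phases[t, 14:18, 20 + t:24 + t, 30:34] += 600

# Maximum-intensity projection across time: each voxel keeps its
# maximum HU over all phases, capturing the tumor's motion envelope
mip = phases.max(axis=0)
print(mip.shape, round(mip.max()))
```

If the sorting step drops the phases at the extremes of motion, the corresponding voxels never receive the tumor's high HU values, which is exactly how the envelope gets under-represented.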
Abstract:
Treatment for cancer often involves combination therapies, used both in medical practice and in clinical trials. Korn and Simon listed three reasons for the utility of combinations: 1) biochemical synergism; 2) differential susceptibility of tumor cells to different agents; and 3) higher achievable dose intensity by exploiting non-overlapping toxicities to the host. Even if the toxicity profile of each agent of a given combination is known, the toxicity profile of the agents used in combination must still be established. Thus, caution is required when designing and evaluating trials with combination therapies. Traditional clinical trial design is based on the consideration of a single drug; a trial of drugs in combination requires a vastly different dose-selection procedure. When two drugs are combined in a phase I trial, an important trial objective is to determine the maximum tolerated dose (MTD). The MTD is defined as the dose level below the dose at which two of six patients experience drug-related dose-limiting toxicity (DLT). In phase I trials that combine two agents, more than one MTD generally exists, although all are rarely determined. For example, there may be an MTD that combines high doses of drug A with lower doses of drug B, another for high doses of drug B with lower doses of drug A, and yet another for intermediate doses of both drugs administered together. With classic phase I trial designs, only one MTD is identified. Our new trial design allows efficient identification of more than one MTD within the context of a single protocol. The two drugs combined in our phase I trial are temsirolimus and bevacizumab. Bevacizumab is a monoclonal antibody targeting the vascular endothelial growth factor (VEGF) pathway, which is fundamental for tumor growth and metastasis. One mechanism of tumor resistance to antiangiogenic therapy is upregulation of hypoxia-inducible factor 1α (HIF-1α), which mediates responses to hypoxic conditions. Temsirolimus has been shown to reduce levels of HIF-1α, making this an ideal combination therapy. Dr. Donald Berry developed a trial design schema for evaluating low, intermediate, and high dose levels of two drugs given in combination, as illustrated in a recently published paper in Biometrics entitled "A Parallel Phase I/II Clinical Trial Design for Combination Therapies." His trial design utilized cytotoxic chemotherapy. We adapted this design schema by incorporating greater numbers of dose levels for each drug. Additional dose levels are being examined because experience from phase I trials shows that targeted agents, when given in combination, are often effective at dose levels lower than their FDA-approved single-agent doses. A total of thirteen dose levels, spanning representative high, intermediate, and low dose levels of temsirolimus combined with representative high, intermediate, and low dose levels of bevacizumab, will be evaluated. We hypothesize that our new trial design will facilitate the efficient identification of more than one MTD, if several exist, within the context of a single protocol. Doses gleaned from this approach could allow a more personalized dose selection from among the MTDs obtained, based on a patient's specific co-morbid conditions or anticipated toxicities.
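The MTD definition quoted above corresponds to the classic 3+3 rule. A minimal simulation of that single-agent rule is sketched below for reference; the design described in the abstract is a more elaborate parallel schema for two agents, not this simple rule, and the DLT probabilities here are invented.

```python
import random

def run_3plus3(true_dlt_probs, seed=0):
    """Classic 3+3 escalation consistent with the MTD definition above:
    escalate after 0/3 DLTs, expand to 6 after 1/3, stop once >= 2 of 6
    patients have a DLT; the MTD is the dose level below the stopping
    dose (-1 means no tolerated dose)."""
    rng = random.Random(seed)
    for level, p in enumerate(true_dlt_probs):
        dlts = sum(rng.random() < p for _ in range(3))
        if dlts == 1:                       # expand the cohort
            dlts += sum(rng.random() < p for _ in range(3))
        if dlts >= 2:
            return level - 1                # previous level is the MTD
    return len(true_dlt_probs) - 1          # never stopped; highest tested

print("MTD index:", run_3plus3([0.05, 0.10, 0.25, 0.45]))
```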
Abstract:
The maintenance of genetic variation in a spatially heterogeneous environment has been one of the main research themes in theoretical population genetics. Despite considerable progress in understanding the consequences of spatially structured environments for genetic variation, many problems remain unsolved. One of them concerns the relationship between the number of demes, the degree of dominance, and the maximum number of alleles that can be maintained by selection in a subdivided population. In this work, we study the potential for maintaining genetic variation in a two-deme model with a deme-independent degree of intermediate dominance, which includes the absence of G x E interaction as a special case. We present a thorough numerical analysis of a two-deme three-allele model, which allows us to identify dominance and selection patterns that harbor the potential for stable triallelic equilibria. The information gained by this approach is then used to construct an example in which the existence and asymptotic stability of a fully polymorphic equilibrium can be proved analytically. Notably, in this example the parameter range in which three alleles can coexist is maximized for intermediate migration rates. Our results can be interpreted in a specialist-generalist context and show, among other things, when two specialists can coexist with a generalist in two demes if the degree of dominance is deme-independent and intermediate. The dominance relation between the generalist allele and the specialist alleles plays a decisive role. We also discuss linear selection on a quantitative trait and show that G x E interaction is not necessary for the maintenance of more than two alleles in two demes.
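The underlying deterministic dynamics can be sketched as a selection-migration recursion: within each deme, allele frequencies are updated by marginal fitnesses, and then symmetric migration mixes the demes. The fitness values and migration rate below are arbitrary illustrations, not parameters from the paper, and the matrices are not constrained to deme-independent dominance.

```python
import numpy as np

# Deterministic recursion: 3 alleles, 2 demes, soft selection within
# each deme followed by symmetric migration at rate m.
w1 = np.array([[1.00, 0.90, 0.95],
               [0.90, 0.80, 0.85],
               [0.95, 0.85, 0.90]])   # genotype fitnesses w_ij, deme 1
w2 = w1[::-1, ::-1]                   # deme 2 favors the other specialist
m = 0.05

p = np.array([[0.4, 0.3, 0.3],        # allele frequencies, deme 1
              [0.3, 0.3, 0.4]])       # allele frequencies, deme 2
for _ in range(2000):
    new = []
    for W, freq in zip((w1, w2), p):
        marg = W @ freq                    # marginal fitness of each allele
        new.append(freq * marg / (freq @ marg))
    p = np.array(new)
    p = (1 - m) * p + m * p[::-1]          # symmetric migration
print(p.round(3))
```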
Abstract:
This work deals with the parallel optimization of expensive objective functions that are modelled as sample realizations of Gaussian processes. The study is formalized as a Bayesian optimization problem, or continuous multi-armed bandit problem, where a batch of q > 0 arms is pulled in parallel at each iteration. Several algorithms have been developed for choosing batches by trading off exploitation and exploration. To date, the maximum Expected Improvement (EI) and Upper Confidence Bound (UCB) selection rules are the most prominent approaches to batch selection. Here, we build upon recent work on the multipoint Expected Improvement criterion, for which an analytic expansion relying on Tallis' formula was recently established. Since the computational burden of this selection rule remains an issue in applications, we derive a closed-form expression for the gradient of the multipoint Expected Improvement, which facilitates its maximization using gradient-based ascent algorithms. Substantial computational savings are demonstrated in applications. In addition, our algorithms are tested numerically and compared to state-of-the-art UCB-based batch-sequential algorithms. Combining starting designs based on UCB with gradient-based local optimization of EI emerges as a sound option for batch design in distributed Gaussian process optimization.
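For reference, the multipoint EI and its gradient with respect to the batch mean can also be obtained by plain Monte Carlo with the reparameterization trick. This is the brute-force baseline, not the closed-form Tallis-based expression derived in the paper; all inputs below are illustrative.

```python
import numpy as np

def qei_and_grad(mu, L, best, n_mc=20000, seed=4):
    """Monte Carlo multipoint EI (maximization) and its gradient wrt mu,
    using the pathwise/reparameterization trick: y = mu + L z."""
    rng = np.random.default_rng(seed)
    q = len(mu)
    z = rng.standard_normal((n_mc, q))
    y = mu + z @ L.T                      # samples of the GP at the batch
    imp = y.max(axis=1) - best            # batch improvement over incumbent
    pos = imp > 0
    qei = np.maximum(imp, 0).mean()
    # d/dmu E[(max_i y_i - best)^+] = E[ 1{imp > 0} e_{argmax_i y_i} ]
    grad = np.zeros(q)
    idx = y[pos].argmax(axis=1)
    np.add.at(grad, idx, 1.0)
    grad /= n_mc
    return qei, grad

mu = np.array([0.2, 0.0, -0.1])           # posterior mean at the q points
Sigma = np.array([[1.0, 0.5, 0.2],        # posterior covariance
                  [0.5, 1.0, 0.4],
                  [0.2, 0.4, 1.0]])
val, g = qei_and_grad(mu, np.linalg.cholesky(Sigma), best=0.5)
print(round(val, 3), g.round(3))
```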
Abstract:
With hundreds of single nucleotide polymorphisms (SNPs) in a candidate gene and millions of SNPs across the genome, selecting an informative subset of SNPs to maximize the ability to detect genotype-phenotype association is of great interest and importance. In addition, with a large number of SNPs, analytic methods are needed that allow investigators to control the false positive rate resulting from large numbers of SNP genotype-phenotype analyses. This dissertation uses simulated data to explore methods for selecting SNPs for genotype-phenotype association studies. I examined the pattern of linkage disequilibrium (LD) across a candidate gene region and used this pattern to aid in localizing a disease-influencing mutation. The results indicate that the r² measure of linkage disequilibrium is preferred over the common D′ measure for use in genotype-phenotype association studies. Using step-wise linear regression, the best predictor of the quantitative trait was usually not the single functional mutation; rather, it was a SNP in high linkage disequilibrium with the functional mutation. Next, I compared three strategies for selecting SNPs for phenotype association studies: selection based on measures of linkage disequilibrium, selection based on a measure of haplotype diversity, and random selection. The results demonstrate that SNPs selected based on maximum haplotype diversity are more informative and yield higher power than randomly selected SNPs or SNPs selected based on low pairwise LD. The data also indicate that for genes with a small contribution to the phenotype, it is more prudent for investigators to increase their sample size than to continually increase the number of SNPs in order to improve statistical power. When typing large numbers of SNPs, researchers are faced with the challenge of using an appropriate statistical method that controls the type I error rate while maintaining adequate power. We show that an empirical genotype-based multi-locus global test, which uses permutation testing to obtain the null distribution of the maximum test statistic, maintains the desired overall type I error rate without overly sacrificing statistical power. The results also show that when the penetrance model is simple, the multi-locus global test does as well as or better than the haplotype analysis; for more complex models, however, haplotype analyses offer advantages. The results of this dissertation will be of utility to human geneticists designing large-scale multi-locus genotype-phenotype association studies.
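The permutation-based max-statistic global test described above can be sketched generically: compute the maximum per-SNP association statistic, then rebuild its null distribution by permuting the phenotype. The SNP count, the choice of statistic (absolute correlation), and the effect size below are illustrative assumptions, not the dissertation's exact setup.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n, n_snps = 300, 50
geno = rng.integers(0, 3, size=(n, n_snps))     # 0/1/2 genotype codes
pheno = rng.standard_normal(n).astype(float)
pheno += 0.3 * geno[:, 7]                        # one truly associated SNP

def max_stat(y):
    """Maximum per-SNP association statistic (absolute correlation)."""
    r = [abs(stats.pearsonr(geno[:, j], y)[0]) for j in range(n_snps)]
    return max(r)

obs = max_stat(pheno)
# Null distribution of the max statistic via phenotype permutation;
# permuting breaks genotype-phenotype links but preserves LD structure.
null = np.array([max_stat(rng.permutation(pheno)) for _ in range(500)])
p_global = (1 + (null >= obs).sum()) / (1 + len(null))
print(f"global p = {p_global:.3f}")
```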