947 results for stochastic search variable selection
Abstract:
Background: The evolutionary advantages of selective attention are unclear. Since the study of selective attention began, it has been suggested that the nervous system only processes the most relevant stimuli because of its limited capacity [1]. An alternative proposal is that action planning requires the inhibition of irrelevant stimuli, which forces the nervous system to limit its processing [2]. An evolutionary approach might provide additional clues to clarify the role of selective attention.
Methods: We developed Artificial Life simulations in which animals were repeatedly presented with two objects, "left" and "right", each of which could be "food" or "non-food." The animals' neural networks (multilayer perceptrons) had two input nodes, one for each object, and two output nodes that determined whether the animal ate each of the objects. The neural networks also had a variable number of hidden nodes, which determined whether they had enough capacity to process both stimuli (Table 1). The evolutionary relevance of the left and right food objects could also vary, depending on how much the animal's fitness increased when ingesting them (Table 1). We compared sensory processing in animals with or without limited capacity, which evolved in simulations in which the objects had the same or different relevances.
Table 1. Nine sets of simulations were performed, varying the values of food objects and the number of hidden nodes in the neural networks. The values of left and right food were swapped during the second half of the simulations. Non-food objects were always worth -3.
The evolution of the neural networks was simulated by a simple genetic algorithm. Fitness was a function of the number of food and non-food objects each animal ate, and the chromosomes determined the node biases and synaptic weights. During each simulation, 10 populations of 20 individuals each evolved in parallel for 20,000 generations; the relevance of the food objects was then swapped and the simulation was run for another 20,000 generations. The neural networks were evaluated by their ability to identify the two objects correctly. The detectability (d') of the left and right objects was calculated using Signal Detection Theory [3].
Results and conclusion: When both stimuli were equally relevant, networks with two hidden nodes processed only one stimulus and ignored the other. With four or eight hidden nodes, they could correctly identify both stimuli. When the stimuli had different relevances, the d' for the most relevant stimulus was higher than the d' for the least relevant one, even when the networks had four or eight hidden nodes. We conclude that selection mechanisms arose in our simulations depending not only on the size of the neural networks but also on the stimuli's relevance for action.
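The d' statistic mentioned above has a standard closed form; a minimal Python sketch follows, with hit and false-alarm rates that are purely hypothetical rather than taken from the study:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Detectability d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical animal: eats 90% of food objects (hits) but also
# 20% of non-food objects (false alarms).
print(d_prime(0.90, 0.20))  # ~2.12
```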
Abstract:
Variable rate sprinklers (VRS) have been developed to promote localized water application in irrigated areas. In Precision Irrigation, VRS that permit better control of flow adjustment and, at the same time, provide satisfactory radial distribution profiles over a range of pressures and flow rates are really necessary. The objective of this work was to evaluate the performance and radial distribution profiles of a newly developed VRS, which varies the nozzle cross-sectional area by moving a pin in or out with a stepper motor. Field tests were performed under different conditions of service pressure, rotation angles imposed on the pin, and flow rate, which resulted in maximum water throw radii ranging from 7.30 to 10.38 m. In the experiments in which the service pressure remained constant, the maximum throw radius varied from 7.96 to 8.91 m. Averages of repetitions performed under conditions without wind, or with winds below 1.3 m s-1, were used. The VRS with the four-stream deflector produced a greater water throw radius than the six-stream deflector. However, the six-stream deflector yielded greater precipitation intensities, as well as better distribution. Thus, the choice of deflector should be based on project requirements, respecting the difference in the obtained results. With a small nozzle opening, the VRS produced small water droplets that visually demonstrated applicability for foliar chemigation. Regarding the comparison between estimated and observed flow rates, the stepper motor produced excellent results.
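The abstract does not state the flow model used to estimate flow from the nozzle opening; a common first approximation is the orifice equation, sketched below with a hypothetical discharge coefficient and geometry:

```python
import math

def nozzle_flow_rate(area_m2: float, pressure_kpa: float,
                     discharge_coeff: float = 0.95,
                     water_density: float = 1000.0) -> float:
    """Estimate flow (m^3/s) through a nozzle opening with the classical
    orifice equation Q = Cd * A * sqrt(2 * dP / rho)."""
    dp_pa = pressure_kpa * 1000.0  # service pressure above atmospheric, in Pa
    return discharge_coeff * area_m2 * math.sqrt(2.0 * dp_pa / water_density)

# Hypothetical example: an 8 mm^2 opening at 200 kPa service pressure.
q = nozzle_flow_rate(8e-6, 200.0)
print(f"{q * 3600 * 1000:.0f} L/h")  # ~547 L/h
```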
Abstract:
PURPOSE: To investigate the occurrence of hearing loss in individuals with HIV/AIDS and to characterize it by type and degree. RESEARCH STRATEGY: A systematic review was conducted of the literature found in the electronic databases PubMed, EMBASE, ADOLEC, IBECS, Web of Science, Scopus, Lilacs and SciELO. SELECTION CRITERIA: The search strategy was directed by a specific question: "Is hearing loss part of the framework of HIV/AIDS manifestations?", and the selection criteria involved coherence with the proposed theme, evidence levels 1, 2 or 3, and language (Portuguese, English and Spanish). DATA ANALYSIS: We found 698 studies. After analysis of titles and abstracts, 91 were selected for full reading. Of these, 38 met the proposed criteria and were included in the review. RESULTS: The studies reported the presence of conductive, sensorineural, and mixed hearing loss, of variable degrees and audiometric configurations, in addition to tinnitus and vestibular disorders. The etiology can be attributed to opportunistic infections, ototoxic drugs, or the action of the virus itself. Auditory evoked potentials have been used as markers of neurological alterations, even in patients with normal hearing. CONCLUSION: HIV/AIDS patients may present hearing loss. Thus, programs for the prevention and treatment of AIDS must include actions aimed at auditory health.
Abstract:
We developed a stochastic lattice model to describe vector-borne diseases (such as yellow fever or dengue). The model is spatially structured and its dynamical rules take into account the diffusion of vectors. We consider a bipartite lattice, with one sub-lattice occupied by humans and another by mosquitoes. With each lattice site we associate a stochastic variable that describes the occupation and the health state of a single individual (mosquito or human). Disease transmission in the human population follows dynamics similar to the Susceptible-Infected-Recovered (SIR) model, while transmission in the mosquito population follows dynamics analogous to the Susceptible-Infected-Susceptible (SIS) model, with mosquito diffusion. The occurrence of an epidemic is directly related to the conditional probability of finding infected mosquitoes (humans) in the neighborhood of susceptible humans (mosquitoes). The diffusion of mosquitoes can facilitate the formation of Susceptible-Infected pairs, enabling an increase in the size of the epidemic. Using asynchronous dynamic updates, we study disease transmission in a population initially formed by susceptible individuals after the introduction of a single infected mosquito (human). We find that this model exhibits a continuous phase transition related to the existence or non-existence of an epidemic. By means of mean-field approximations and Monte Carlo simulations, we investigate the epidemic threshold and the phase diagram in terms of the diffusion probability and the infection probability.
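A minimal Monte Carlo sketch of such a bipartite SIR/SIS lattice with mosquito diffusion and asynchronous updates (lattice size, probabilities, and update rules are illustrative simplifications, not the paper's exact dynamics):

```python
import random

# Humans follow SIR (0=S, 1=I, 2=R); mosquitoes follow SIS (0=S, 1=I).
L = 50          # linear lattice size (assumption)
P_INF = 0.3     # cross-species infection probability (assumption)
P_REC = 0.1     # human recovery probability (assumption)
P_DIFF = 0.5    # mosquito diffusion probability (assumption)

humans = [[0] * L for _ in range(L)]
mosquitoes = [[0] * L for _ in range(L)]
mosquitoes[L // 2][L // 2] = 1  # one infected mosquito starts the epidemic

def neighbors(i, j):
    return [((i + 1) % L, j), ((i - 1) % L, j),
            (i, (j + 1) % L), (i, (j - 1) % L)]

def sweep():
    for _ in range(L * L):  # asynchronous update: one random site at a time
        i, j = random.randrange(L), random.randrange(L)
        if random.random() < P_DIFF:  # mosquito diffuses to a neighbor site
            ni, nj = random.choice(neighbors(i, j))
            mosquitoes[i][j], mosquitoes[ni][nj] = mosquitoes[ni][nj], mosquitoes[i][j]
        if mosquitoes[i][j] == 1 and humans[i][j] == 0 and random.random() < P_INF:
            humans[i][j] = 1      # infected mosquito infects susceptible human
        elif mosquitoes[i][j] == 0 and humans[i][j] == 1 and random.random() < P_INF:
            mosquitoes[i][j] = 1  # susceptible mosquito bites infected human
        if humans[i][j] == 1 and random.random() < P_REC:
            humans[i][j] = 2      # human recovers and becomes immune (SIR)

for _ in range(200):
    sweep()
print("epidemic size:", sum(row.count(2) for row in humans))
```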
Abstract:
We consider a general class of mathematical models for stochastic gene expression in which the transcription rate is allowed to depend on a promoter state variable that can take an arbitrary (finite) number of values. We provide the solution of the master equations in the stationary limit, based on a factorization of the stochastic transition matrix that separates timescales and relative interaction strengths, and we express its entries in terms of parameters that have a natural physical and/or biological interpretation. The solution illustrates the capacity of multiple-state promoters to generate multimodal distributions of gene products, without the need for feedback. Furthermore, using the example of a three-state promoter operating at low, intermediate, and high expression levels, we show that using multiple-state operons will typically lead to a significant reduction of noise in the system. The underlying mechanism is that a three-state promoter can change its level of expression from low to high by passing through an intermediate state, with a much smaller increase of fluctuations than by means of a direct transition.
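The behavior of such models is easy to explore by simulation; below is a minimal Gillespie-style sketch of a three-state promoter with state-dependent transcription and first-order mRNA degradation (all rate constants are illustrative assumptions):

```python
import math
import random

# Promoter states 0 (low), 1 (intermediate), 2 (high) with nearest-neighbor
# switching; the transcription rate depends on the current promoter state.
K_SWITCH = 0.05              # promoter switching rate (assumption)
K_TX = [0.5, 5.0, 20.0]      # transcription rate per state (assumption)
K_DEG = 1.0                  # mRNA degradation rate (assumption)

def simulate(t_end=200.0):
    t, state, mrna = 0.0, 0, 0
    while t < t_end:
        rates = {
            "up": K_SWITCH if state < 2 else 0.0,
            "down": K_SWITCH if state > 0 else 0.0,
            "tx": K_TX[state],
            "deg": K_DEG * mrna,
        }
        total = sum(rates.values())
        t += -math.log(random.random()) / total  # exponential waiting time
        r, acc = random.random() * total, 0.0
        for event, rate in rates.items():        # pick the next reaction
            acc += rate
            if r < acc:
                break
        if event == "up":
            state += 1
        elif event == "down":
            state -= 1
        elif event == "tx":
            mrna += 1
        else:
            mrna -= 1
    return mrna

# Sampling many endpoints approximates the stationary mRNA distribution,
# which becomes multimodal when promoter switching is slow.
samples = [simulate() for _ in range(100)]
print("mean mRNA:", sum(samples) / len(samples))
```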
Abstract:
This thesis presents a creative and practical approach to dealing with the problem of selection bias. Selection bias may be the most vexing problem in program evaluation, or in any line of research that attempts to assert causality. Some of the greatest minds in economics and statistics have scrutinized the problem of selection bias, with the resulting approaches – Rubin's Potential Outcome Approach (Rosenbaum and Rubin, 1983; Rubin, 1991, 2001, 2004) or Heckman's Selection Model (Heckman, 1979) – being widely accepted and used as the best fixes. These solutions to the bias that arises in particular from self-selection are imperfect, and many researchers, when feasible, reserve their strongest causal inferences for data from experimental rather than observational studies. The innovative aspect of this thesis is to propose a data transformation that allows measuring and testing the presence of selection bias in an automatic and multivariate way. The approach involves the construction of a multi-dimensional conditional space of the X matrix in which the bias associated with the treatment assignment has been eliminated. Specifically, we propose the use of a partial dependence analysis of the X-space as a tool for investigating the dependence relationship between a set of observable pre-treatment categorical covariates X and a treatment indicator variable T, in order to obtain a measure of bias according to their dependence structure. The measure of selection bias is then expressed in terms of the inertia due to the dependence between X and T that has been eliminated. Given the measure of selection bias, we propose a multivariate test of imbalance to check whether the detected bias is significant, using the asymptotic distribution of the inertia due to T (Estadella et al., 2005) and preserving the multivariate nature of the data. Further, we propose the use of a clustering procedure as a tool to find groups of comparable units on which to estimate local causal effects, and the use of the multivariate test of imbalance as a stopping rule in choosing the best cluster solution set. The method is non-parametric: it does not call for modeling the data based on some underlying theory or assumption about the selection process, but instead uses the existing variability within the data and lets the data speak. The idea of proposing this multivariate approach to measuring selection bias and testing balance comes from the consideration that, in applied research, all aspects of multivariate balance not represented in the univariate variable-by-variable summaries are ignored. The first part contains an introduction to evaluation methods as part of public and private decision processes and a review of the literature on evaluation methods. Attention is focused on Rubin's Potential Outcome Approach, matching methods, and, briefly, on Heckman's Selection Model. The second part focuses on some resulting limitations of conventional methods, with particular attention to the problem of how to correctly test balance. The third part contains the original contribution proposed, a simulation study that checks the performance of the method for a given dependence setting, and an application to a real data set. Finally, we discuss, conclude and explain our future perspectives.
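The abstract does not spell out the inertia computation; as a rough sketch of the general idea, the dependence between categorical covariates X and a treatment indicator T can be summarized by chi-square inertia and its significance checked by permutation (every detail below is an illustrative assumption, not the thesis's exact procedure):

```python
import numpy as np
from scipy.stats import chi2_contingency

def inertia(x, t):
    """Inertia (chi-square statistic / n) of the contingency table of one
    categorical covariate x against the treatment indicator t."""
    table = np.array([[np.sum((x == xv) & (t == tv))
                       for tv in np.unique(t)] for xv in np.unique(x)])
    chi2, _, _, _ = chi2_contingency(table, correction=False)
    return chi2 / len(x)

def imbalance_test(X, t, n_perm=999, seed=0):
    """Permutation test of multivariate imbalance: total inertia of the
    covariate columns against t, compared to its permutation distribution."""
    rng = np.random.default_rng(seed)
    observed = sum(inertia(X[:, j], t) for j in range(X.shape[1]))
    null = []
    for _ in range(n_perm):
        t_perm = rng.permutation(t)
        null.append(sum(inertia(X[:, j], t_perm) for j in range(X.shape[1])))
    p_value = (1 + sum(v >= observed for v in null)) / (n_perm + 1)
    return observed, p_value
```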
Abstract:
The presented study carried out an analysis of rural landscape changes. In particular, it focuses on understanding the driving forces acting on the rural built environment, using a statistical spatial model implemented through GIS techniques. It is well known that the study of landscape changes is essential for conscious decision making in land planning. A bibliographic review reveals a general lack of studies dealing with the modeling of the rural built environment; hence a theoretical modelling approach for this purpose is needed. Advances in technology and modernity in building construction and agriculture have gradually changed the rural built environment. In addition, urbanization has driven the construction of new volumes beside abandoned or derelict rural buildings. Consequently, two main types of transformation dynamics affecting the rural built environment can be observed: the conversion of rural buildings and the increase in the number of buildings. The specific aim of the presented study is to propose a methodology for the development of a spatial model that allows the identification of the driving forces acting on building allocation; indeed, one of the most concerning dynamics nowadays is the irrational expansion of building sprawl across the landscape. The proposed methodology is composed of several conceptual steps covering the different aspects of developing a spatial model: the selection of a response variable that best describes the phenomenon under study, the identification of possible driving forces, the sampling methodology for data collection, the choice of the algorithm most suitable to the statistical theory and method used, and the calibration and evaluation of the model. A different combination of factors in various parts of the territory generated more or less favourable conditions for building allocation, and the existence of buildings represents the evidence of such an optimum. Conversely, the absence of buildings expresses a combination of agents unsuitable for building allocation. The presence or absence of buildings can therefore be adopted as an indicator of these driving conditions, since it expresses the action of driving forces in the land suitability sorting process. The existence of a correlation between site selection and hypothetical driving forces, evaluated by means of modeling techniques, provides evidence of which driving forces are involved in the allocation dynamic and an insight into their level of influence on the process. GIS software, by means of spatial analysis tools, allows the concepts of presence and absence to be associated with point features, generating a point process. In the case of presences, points represent the locations of real existing buildings; conversely, absences represent locations where buildings do not exist, and are therefore generated by a stochastic mechanism. Possible driving forces are selected, and the existence of a causal relationship with building allocation is assessed through a spatial model. The adoption of empirical statistical models provides a mechanism for explanatory variable analysis and for the identification of the key driving variables behind the site selection process for new building allocation.
The model developed by following the methodology is applied to a case study to test its validity. In particular, the study area is represented by the New District of Imola, characterized by a prevailing agricultural production vocation and where transformation dynamics intensively occurred. The development of the model involved the identification of predictive variables (related to the geomorphologic, socio-economic, structural and infrastructural systems of the landscape) capable of representing the driving forces responsible for landscape changes. The calibration of the model is carried out on spatial data covering the periurban and rural parts of the study area within the 1975-2005 time period, by means of a generalized linear model. The resulting output of the model fit is a continuous grid surface whose cells assume probability values of building occurrence, ranging from 0 to 1, across the rural and periurban area. Hence the response variable assesses the changes in the rural built environment that occurred in this time interval and is correlated to the selected explanatory variables by means of a generalized linear model using logistic regression. Comparing the probability map obtained from the model to the actual rural building distribution in 2005, the interpretation capability of the model can be evaluated. The proposed model can also be applied to the interpretation of trends in other study areas, and over different time intervals, depending on the availability of data. The use of suitable data in terms of time, information, and spatial resolution, and the costs related to data acquisition, pre-processing, and survey, are among the most critical aspects of model implementation. Future in-depth studies can focus on using the proposed model to predict short/medium-range future scenarios for the rural built environment distribution in the study area. In order to predict future scenarios it is necessary to assume that the driving forces do not change and that their levels of influence within the model are not far from those assessed for the calibration time interval.
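As a rough illustration of the fitting step, a logistic-regression presence/absence model on hypothetical covariates (names such as slope and dist_road are placeholders, not the study's actual variables) could look like this:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical sampled points: 1 = building present, 0 = absence generated
# by a stochastic mechanism, with covariates extracted at each location.
rng = np.random.default_rng(42)
n = 500
slope = rng.uniform(0, 30, n)         # terrain slope, degrees (placeholder)
dist_road = rng.uniform(0, 5000, n)   # distance to nearest road, m (placeholder)
logit = 1.0 - 0.08 * slope - 0.0008 * dist_road
presence = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Fit a binomial GLM with a logit link (logistic regression).
X = sm.add_constant(np.column_stack([slope, dist_road]))
model = sm.GLM(presence, X, family=sm.families.Binomial()).fit()
print(model.summary())

# Evaluating the fitted model over a grid of covariate rasters would yield
# the 0-1 probability surface described in the abstract.
```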
Abstract:
This work presents exact, hybrid algorithms for mixed resource Allocation and Scheduling problems; in general terms, these consist of assigning finite-capacity resources over time to a set of precedence-connected activities. The proposed methods have broad applicability, but are mainly motivated by applications in the field of Embedded System Design. In particular, high-performance embedded computing has recently witnessed the shift from single-CPU platforms with application-specific accelerators to programmable Multi-Processor Systems-on-Chip (MPSoCs). These allow higher flexibility, real-time performance and low energy consumption, but the programmer must be able to effectively exploit the platform parallelism. This raises interest in the development of algorithmic techniques to be embedded in CAD tools; in particular, given a specific application and platform, the objective is to perform an optimal allocation of hardware resources and to compute an execution schedule. In this regard, since embedded systems tend to run the same set of applications for their entire lifetime, off-line, exact optimization approaches are particularly appealing. Quite surprisingly, the use of exact algorithms has not been well investigated so far; this is in part explained by the complexity of integrated allocation and scheduling, which sets tough challenges for "pure" combinatorial methods. The use of hybrid CP/OR approaches presents the opportunity to exploit the mutual advantages of different methods, while compensating for their weaknesses. In this work, we first consider an Allocation and Scheduling problem over the Cell BE processor by Sony, IBM and Toshiba; we propose three different solution methods, leveraging decomposition, cut generation and heuristic-guided search. Next, we face Allocation and Scheduling of so-called Conditional Task Graphs, explicitly accounting for branches whose outcome is not known at design time; we extend the CP scheduling framework to effectively deal with the introduced stochastic elements. Finally, we address Allocation and Scheduling with uncertain, bounded execution times via conflict-based tree search; we introduce a simple and flexible time model to take duration variability into account and provide an efficient conflict detection method. The proposed approaches achieve good results on practical-size problems, thus demonstrating that the use of exact approaches for system design is feasible. Furthermore, the developed techniques bring significant contributions to combinatorial optimization methods.
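This is not the thesis's hybrid CP/OR method, but a minimal sketch of the underlying problem class (precedence-constrained tasks sharing a finite-capacity resource, makespan objective) using Google OR-Tools CP-SAT; all task data are made up:

```python
from ortools.sat.python import cp_model

# Hypothetical task graph: name -> (duration, resource demand), plus precedences.
tasks = {"a": (3, 2), "b": (2, 1), "c": (4, 2), "d": (2, 1)}
precedences = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]
CAPACITY = 3  # finite resource capacity (assumption)
HORIZON = sum(d for d, _ in tasks.values())

model = cp_model.CpModel()
starts, ends, intervals = {}, {}, {}
for name, (dur, _) in tasks.items():
    starts[name] = model.NewIntVar(0, HORIZON, f"s_{name}")
    ends[name] = model.NewIntVar(0, HORIZON, f"e_{name}")
    intervals[name] = model.NewIntervalVar(starts[name], dur, ends[name], f"i_{name}")

for before, after in precedences:  # precedence constraints
    model.Add(ends[before] <= starts[after])

# Cumulative constraint: concurrent resource demand never exceeds capacity.
model.AddCumulative([intervals[n] for n in tasks],
                    [tasks[n][1] for n in tasks], CAPACITY)

makespan = model.NewIntVar(0, HORIZON, "makespan")
model.AddMaxEquality(makespan, [ends[n] for n in tasks])
model.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(model) == cp_model.OPTIMAL:
    print({n: solver.Value(starts[n]) for n in tasks}, solver.Value(makespan))
```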
Abstract:
One of the main targets of the CMS experiment is the search for the Standard Model Higgs boson. The 4-lepton channel (from the Higgs decay h->ZZ->4l, l = e,mu) is one of the most promising. The analysis is based on the identification of two opposite-sign, same-flavor lepton pairs: the leptons are required to be isolated and to come from the same primary vertex. The Higgs would be statistically revealed by the presence of a resonance peak in the 4-lepton invariant mass distribution. The 4-lepton analysis at CMS is presented, covering its most important aspects: lepton identification, isolation variables, impact parameter, kinematics, event selection, background control, and the statistical analysis of the results. The search leads to evidence for the presence of a signal with a statistical significance of more than four standard deviations. The excess of data with respect to the background-only predictions indicates the presence of a new boson, with a mass of about 126 GeV/c², decaying to two Z bosons, whose characteristics are compatible with those of the SM Higgs boson.
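For intuition on the "standard deviations" statement, the asymptotic significance of a counting excess is often approximated with the Asimov formula; a sketch with made-up signal and background yields (not the CMS numbers):

```python
import math

def asimov_significance(s: float, b: float) -> float:
    """Median discovery significance Z = sqrt(2*((s+b)*ln(1+s/b) - s))
    for s expected signal events over b expected background events."""
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

# Hypothetical yields in a 4-lepton mass window (invented numbers):
print(asimov_significance(s=12.0, b=4.0))  # ~4.5 sigma
```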
Abstract:
The Large Hadron Collider, located at the CERN laboratories in Geneva, is the largest particle accelerator in the world. One of the main research fields at the LHC is the study of the Higgs boson, the particle most recently discovered by the ATLAS and CMS experiments. Due to the small production cross section of the Higgs boson, only a substantial amount of data can offer the chance to study this particle's properties. In order to perform these searches, it is desirable to avoid the contamination of the signal signature by the numerous and varied background processes produced in pp collisions at the LHC. Particular importance is given to the study of multivariate methods which, compared to the standard cut-based analysis, can enhance the selection of the signal from a Higgs boson produced in association with a top quark pair through a dileptonic final state (ttH channel). The data collected up to 2012 are not sufficient to supply a significant number of ttH events; however, the methods applied in this thesis will provide a powerful tool for the larger datasets to be collected during future LHC data taking.
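A minimal sketch of the kind of multivariate signal/background discrimination meant here, using a gradient-boosted classifier on made-up kinematic features (not the thesis's actual variables or data):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Toy events: rows are (lepton pT, missing ET, jet multiplicity)-like features;
# label 1 = signal, 0 = background. Real analyses use many more variables.
rng = np.random.default_rng(0)
signal = rng.normal(loc=[60, 80, 6], scale=[15, 25, 1.5], size=(2000, 3))
background = rng.normal(loc=[40, 50, 4], scale=[15, 25, 1.5], size=(2000, 3))
X = np.vstack([signal, background])
y = np.concatenate([np.ones(2000), np.zeros(2000)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3).fit(X_tr, y_tr)
scores = bdt.predict_proba(X_te)[:, 1]  # per-event signal probability
print("ROC AUC:", roc_auc_score(y_te, scores))
```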
Recurrent antitopographic inhibition mediates competitive stimulus selection in an attention network
Abstract:
Topographically organized neurons represent multiple stimuli within complex visual scenes and compete for subsequent processing in higher visual centers. The underlying neural mechanisms of this process have long been elusive. We investigate an experimentally constrained model of a midbrain circuit: the optic tectum and the reciprocally connected nucleus isthmi. We show that recurrent antitopographic inhibition mediates the competitive stimulus selection between distant sensory inputs in this visual pathway. This recurrent antitopographic inhibition is fundamentally different from surround inhibition in that it projects to all locations of its input layer except the locus from which it receives input. At a larger scale, the model shows how a focal top-down input from a forebrain region, the arcopallial gaze field, biases the competitive stimulus selection via the combined activation of a local excitation and the recurrent antitopographic inhibition. Our findings reveal circuit mechanisms of competitive stimulus selection and should motivate a search for anatomical implementations of these mechanisms in a range of vertebrate attentional systems.
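The wiring rule is easy to state concretely; below is a toy rate-model sketch (parameters and dynamics are illustrative, not the paper's constrained model) showing how antitopographic inhibition, projecting everywhere except its own locus, yields winner-take-all selection between two distant stimuli:

```python
import numpy as np

N = 40              # number of topographic locations (assumption)
W_INH = 0.9         # antitopographic inhibition strength (assumption)
DT, TAU = 0.1, 1.0  # integration step and time constant (assumption)

# Antitopographic weights: inhibit every location except the source locus.
W = -W_INH * (np.ones((N, N)) - np.eye(N))

inputs = np.zeros(N)
inputs[10], inputs[30] = 1.0, 1.2  # two distant stimuli, unequal strength
rates = np.zeros(N)

for _ in range(500):
    drive = inputs + W @ rates  # feedforward input plus recurrent inhibition
    rates += DT / TAU * (-rates + np.maximum(drive, 0.0))  # rectified dynamics

print(np.argmax(rates))  # 30: the stronger stimulus wins the competition
```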
Abstract:
Major histocompatibility complex (MHC) antigen-presenting genes are the most variable loci in vertebrate genomes. Host-parasite co-evolution is assumed to maintain the excessive polymorphism in the MHC loci. However, the molecular mechanisms underlying the striking diversity in the MHC remain contentious. The extent to which recombination contributes to the diversity at MHC loci in natural populations is still controversial, and only a few comparative studies have made quantitative estimates of recombination rates. In this study, we performed a comparative analysis of 15 different ungulate species to estimate the population recombination rate and to quantify levels of selection. As expected, for all species we observed signatures of strong positive selection, and identified individual residues experiencing selection that were congruent with those constituting the peptide-binding region of the human DRB gene. However, for each species we also observed recombination rates that were significantly different from zero on the basis of likelihood-permutation tests, and in other non-quantitative analyses. Patterns of synonymous and non-synonymous sequence diversity were consistent with differing demographic histories between species, but recent simulation studies by other authors suggest that inference of selection and recombination is likely to be robust to such deviations from standard models. If high rates of recombination are common in the MHC genes of other taxa, re-evaluation of many inference-based phylogenetic analyses of MHC loci, such as estimates of the divergence time of alleles and trans-specific polymorphism, may be required.
Abstract:
The majority of mutations that cause isolated GH deficiency type II (IGHD II) affect splicing of GH-1 transcripts and produce a dominant-negative GH isoform lacking exon 3, resulting in a 17.5-kDa isoform, which in turn disrupts the GH secretory pathway. Clinical variability in the severity of the IGHD II phenotype, depending on the GH-1 gene alteration, has been reported, and in vitro and transgenic animal data suggest that the onset and severity of the phenotype relate to the proportion of the 17.5-kDa isoform produced. The removal of GH in IGHD creates a positive feedback loop driving more GH expression, which may itself increase 17.5-kDa isoform production from alternate splice sites in the mutated GH-1 allele. In this study, we aimed to test this idea by comparing the impact of glucocorticoid-stimulated expression on the production of different GH isoforms from wild-type (wt) and mutant GH-1 genes, relying on the glucocorticoid regulatory element within intron 1 of the GH-1 gene. AtT-20 cells were transfected with wt-GH or mutated GH-1 variants (5'IVS-3 + 2-bp T->C; 5'IVS-3 + 6 bp T->C; ISEm1: IVS-3 + 28 G->A) known to cause clinical IGHD II of varying severity. Cells were stimulated with 1 and 10 μM dexamethasone (DEX) for 24 h, after which the relative amounts of GH-1 splice variants were determined by semiquantitative and quantitative (TaqMan) RT-PCR. In the absence of DEX, only around 1% of wt-GH-1 transcripts were the 17.5-kDa isoform, whereas the three mutant GH-1 variants produced 29, 39, and 78% of the 17.5-kDa isoform. DEX stimulated total GH-1 gene transcription from all constructs. Notably, however, DEX increased the amount of the 17.5-kDa GH isoform relative to the 22- and 20-kDa isoforms produced from the mutated GH-1 variants, but not from wt-GH-1. This DEX-induced enhancement of 17.5-kDa GH isoform production, up to 100% in the most severe case, was completely blocked by the addition of RU486. In further experiments, we measured cell proliferation rates, annexin V staining, and DNA fragmentation in cells transfected with the same GH-1 constructs. The results showed that the 5'IVS-3 + 2-bp GH-1 gene mutation had a more severe impact on those measures than the splice-site mutations within 5'IVS-3 + 6 bp or ISE +28, in line with the clinical severity observed with these mutations. Our finding that the proportion of the 17.5-kDa isoform produced from mutant GH-1 alleles increases with increased drive for gene expression may help to explain the variable onset, progression, and severity observed in IGHD II.
Abstract:
In a matched experimental design, the effectiveness of matching in reducing bias and increasing power depends on the strength of the association between the matching variable and the outcome of interest. In particular, in the design of a community health intervention trial, the effectiveness of a matched design, where communities are matched according to some community characteristic, depends on the strength of the correlation between the matching characteristic and the change in the health behavior being measured. We attempt to estimate the correlation between community characteristics and changes in health behaviors in four datasets from community intervention trials and observational studies. Community characteristics that are highly correlated with changes in health behaviors would potentially be effective matching variables in studies of health intervention programs designed to change those behaviors. Among the community characteristics considered, the urban-rural character of the community was the most highly correlated with changes in health behaviors. The correlations between Per Capita Income, Percent Low Income, and Percent Aged over 65 and changes in health behaviors were marginally statistically significant (p < 0.08).
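As a back-of-the-envelope illustration of why this correlation matters: when pairs share the value of a matching characteristic whose correlation with the outcome change is ρ, the variance of the within-pair difference scales by a factor of roughly (1 − ρ²) relative to an unmatched comparison. A small simulation sketch (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n_pairs, rho = 200, 0.6  # hypothetical number of pairs and correlation

# Each community's behavior change = rho * characteristic + independent noise;
# matched pairs share the same value of the matching characteristic.
shared = rng.normal(size=(n_pairs, 1))
change = rho * shared + np.sqrt(1 - rho**2) * rng.normal(size=(n_pairs, 2))

matched_diff = change[:, 0] - change[:, 1]
unmatched_diff = change[:, 0] - rng.permutation(change[:, 1])

print(np.var(matched_diff))    # ~2 * (1 - rho^2) = 1.28
print(np.var(unmatched_diff))  # ~2
```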
Abstract:
This paper presents a novel variable-decomposition approach for pose recovery of the distal locking holes of intramedullary nails using a single calibrated fluoroscopic image. The problem is formulated as a model-based optimal fitting process in which the control variables are decomposed into two sets: (a) the angle between the nail axis and its projection on the imaging plane, and (b) the translation and rotation of the geometrical model of the distal locking hole around the nail axis. By using an iterative algorithm to find the optimal values of the latter set of variables for any given value of the former variable, we reduce the multi-dimensional model-based optimal fitting problem to a one-dimensional search along a finite interval. We report the results of our in vitro experiments, which demonstrate that the accuracy of our approach is adequate for successful distal locking of intramedullary nails.
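The decomposition amounts to a one-dimensional outer search wrapping an inner iterative optimization; a generic sketch of that structure with a placeholder objective (not the paper's actual model-fitting code):

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

def fitting_error(angle, pose_params):
    """Placeholder: fitting error of the hole model for a candidate
    out-of-plane angle and in-plane pose (translation + rotation)."""
    target = np.array([5.0, -2.0, 0.3])  # invented optimum
    return (angle - 0.4) ** 2 + np.sum((pose_params - target) ** 2)

def inner_cost(angle):
    """For a fixed angle, optimize translation/rotation around the nail axis."""
    res = minimize(lambda p: fitting_error(angle, p), x0=np.zeros(3),
                   method="Nelder-Mead")
    return res.fun

# Outer loop: one-dimensional bounded search over the out-of-plane angle.
outer = minimize_scalar(inner_cost, bounds=(-np.pi / 2, np.pi / 2),
                        method="bounded")
print("estimated angle:", outer.x)
```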