902 results for Instrumental Variable
Abstract:
Recent studies have noted that vertex degree in the autonomous system (AS) graph exhibits a highly variable distribution [15, 22]. The most prominent explanatory model for this phenomenon is the Barabási-Albert (B-A) model [5, 2]. A central feature of the B-A model is preferential connectivity, meaning that the likelihood that a new node in a growing graph will connect to an existing node is proportional to the existing node’s degree. In this paper we ask whether a more general explanation than the B-A model, one that does not assume preferential connectivity, is consistent with empirical data. We are motivated by two observations: first, AS degree and AS size are highly correlated [11]; and second, highly variable AS size can arise simply through exponential growth. We construct a model incorporating exponential growth in the size of the Internet and in the number of ASes, and show via analysis that such a model yields a size distribution exhibiting a power-law tail. In such a model, if an AS’s link formation is roughly proportional to its size, then AS degree will also show high variability. We instantiate the model with empirically derived estimates of growth rates and show that the resulting degree distribution is in good agreement with that of real AS graphs.
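The step from exponential growth to a power-law size tail admits a short back-of-the-envelope derivation. A minimal sketch, assuming the number of ASes grows at rate \lambda and each AS's size grows at rate \mu (symbols chosen here for illustration, not taken from the paper):

\begin{align*}
  P(S > s) &= P\left(s_0 e^{\mu A} > s\right)
            = P\left(A > \tfrac{1}{\mu}\ln\tfrac{s}{s_0}\right) \\
           &= \exp\left(-\tfrac{\lambda}{\mu}\ln\tfrac{s}{s_0}\right)
            = \left(\tfrac{s}{s_0}\right)^{-\lambda/\mu},
\end{align*}

where A is the age of a randomly chosen AS, approximately exponential with rate \lambda when the AS population grows as N(t) = N_0 e^{\lambda t}, and S = s_0 e^{\mu A} is its size. The tail exponent \lambda/\mu is set by the ratio of the two growth rates; if link formation is roughly proportional to size, degree inherits the same heavy tail.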
Abstract:
This paper proposes a method for detecting shapes of variable structure in images with clutter. The term "variable structure" means that some shape parts can be repeated an arbitrary number of times, some parts can be optional, and some parts can have several alternative appearances. The particular variation of the shape structure that occurs in a given image is not known a priori. Existing computer vision methods, including deformable model methods, were not designed to detect shapes of variable structure; they may only be used to detect shapes that can be decomposed into a fixed, a priori known, number of parts. The proposed method can handle both variations in shape structure and variations in the appearance of individual shape parts. A new class of shape models is introduced, called Hidden State Shape Models, that can naturally represent shapes of variable structure. A detection algorithm is described that finds instances of such shapes in images with large amounts of clutter by finding globally optimal correspondences between image features and shape models. Experiments with real images demonstrate that our method can localize plant branches that consist of an a priori unknown number of leaves and can detect hands more accurately than a hand detector based on the chamfer distance.
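To make "variable structure" concrete, the sketch below shows the generic dynamic-programming idea of matching a feature sequence against a state graph in which a self-transition encodes a repeatable part and a skip transition encodes an optional part. It is a toy illustration with invented names, scores and transitions, not the paper's Hidden State Shape Model algorithm:

import math

# Hypothetical shape model: a "stem", then zero or more "leaf" parts, then a "tip".
states = ["stem", "leaf", "tip"]
transitions = {
    ("stem", "leaf"): math.log(0.5),   # enter the repeatable part
    ("leaf", "leaf"): math.log(0.4),   # self-loop: the part may repeat arbitrarily often
    ("leaf", "tip"):  math.log(0.6),
    ("stem", "tip"):  math.log(0.5),   # skip transition: "leaf" is optional
}

def log_match(state, feature):
    # Stub appearance score; a real system would compare feature descriptors.
    return -abs(len(state) - feature)

def best_correspondence(features):
    # Viterbi-style DP: score[s] = best log-score of any path ending at state s.
    score = {s: -math.inf for s in states}
    score["stem"] = log_match("stem", features[0])
    for f in features[1:]:
        new = {s: -math.inf for s in states}
        for (prev, nxt), lp in transitions.items():
            cand = score[prev] + lp + log_match(nxt, f)
            if cand > new[nxt]:
                new[nxt] = cand
        score = new
    return score["tip"]  # require the path to end at the terminal part

print(best_correspondence([4, 4, 4, 3]))  # toy "features", e.g. edge lengths

Here the "leaf" state can absorb any number of features via its self-loop, or be bypassed entirely; that is exactly the kind of structural variation a fixed-part deformable model cannot express.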
Abstract:
Speech can be understood at widely varying production rates. A working memory is described for the short-term storage of temporal lists of input items. The working memory is a cooperative-competitive neural network that automatically adjusts its integration rate, or gain, to generate a short-term memory code for a list that is independent of the item presentation rate. Such an invariant working memory model is used to simulate data from Repp (1980) concerning changes of phonetic category boundaries as a function of presentation rate. Thus the variability of category boundaries can be traced to the temporal invariance of the working memory code.
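The rate-invariance claim can be illustrated with a toy integrator: if the network's gain tracks the presentation rate, the stored pattern depends only on the items and their order, not on how quickly they arrive. A minimal sketch under that assumption (these are not the paper's network equations):

# Toy illustration of rate-invariant short-term storage: the integration gain
# is tied to the presentation rate, so the final memory code is unchanged
# when the same list is presented faster or slower.
def store_list(items, rate):
    dt = 1.0 / rate            # inter-item interval
    gain = rate                # gain adjusts automatically with input rate
    memory = []
    for strength in items:
        memory = [m * (1 - 0.1 * gain * dt) for m in memory]  # uniform decay
        memory.append(strength * gain * dt)                    # encode new item
    return [round(m, 6) for m in memory]

items = [1.0, 0.8, 1.2]
print(store_list(items, rate=2.0))   # slow presentation
print(store_list(items, rate=10.0))  # fast presentation: identical memory code

Because gain * dt is constant, both calls print the same stored pattern, which is the sense in which the code is invariant to presentation rate.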
Abstract:
The objective of this paper is to investigate the effect of the pad size ratio between the chip and board ends of a solder joint, in combination with the available solder volume, on the shape of that joint. The shape of a solder joint is correlated with its reliability and is thus of importance. For low-density chip bond pad applications, Flip Chip (FC) manufacturing costs can be kept down by using larger board pads suitable for solder application. Using the “Surface Evolver” software package, the solder joint shapes associated with different sizes/shapes of solder preforms and chip/board pad ratios are predicted. In this case a so-called Flip-Chip Over Hole (FCOH) assembly format was used. Assembly trials involved the deposition of lead-free 99.3Sn0.7Cu solder on the board side, followed by reflow, an underfill process and back die encapsulation. Pad offsets that occurred during the assembly work were taken into account in the Surface Evolver shape prediction, which then accurately matched the real assembly. Overall, good correlation was found between the simulated and the actual fabricated solder joint shapes. Solder preforms were found to give better control over the solder volume. Reflow simulation of commercially available solder preform volumes suggests that, for a fixed stand-off height and chip/board pad ratio, the solder volume and the surface tension determine the shape of the joint.
Abstract:
The last 30 years have seen Fuzzy Logic (FL) emerge as a method that either complements or challenges stochastic methods as the traditional way of modelling uncertainty. But the circumstances under which FL or stochastic methods should be used remain disputed: the areas of application of statistical and FL methods overlap, opinions differ on when each method should be used, and practically relevant case studies comparing the two are lacking. This work compares stochastic and FL methods for the assessment of spare capacity, using pharmaceutical high purity water (HPW) utility systems as an example. The goal of this study was to find the most appropriate method for modelling uncertainty in industrial-scale HPW systems. The results provide evidence that stochastic methods are superior to FL methods for simulating uncertainty in chemical plant utilities, including HPW systems, in the typical case where extreme events (for example, peaks in demand) or day-to-day variation, rather than average values, are of interest. Average production output or other statistical measures may, for instance, be of interest in the assessment of workshops. The results further indicate that a stochastic model should be used only if a deterministic simulation shows it to be necessary. Consequently, this thesis concludes that either deterministic or stochastic methods should be used to simulate uncertainty in chemical plant utility systems, and by extension some process systems, because extreme events and the modelling of day-to-day variation are important in capacity-extension projects. Other reasons for preferring stochastic HPW models to FL HPW models include:
1. The computer code for a stochastic model is typically less complex than that for an FL model, reducing code maintenance and validation effort.
2. In many respects FL models resemble deterministic models, so the need for an FL model over a deterministic one is questionable for industrial-scale HPW systems such as the one presented here (and other similar systems), since the latter calls for simpler models.
3. An FL model may be difficult to "sell" to an end-user, as its results represent "approximate reasoning", a definition of which is, however, lacking.
4. A stochastic model can be applied to other systems with relatively minor modifications, whereas an FL model often cannot. For instance, the stochastic HPW model could be used to model municipal drinking water systems, whereas the FL HPW model could not, because the two model philosophies are fundamentally different: the stochastic model treats schedule and volume uncertainties as random phenomena described by statistical distributions based on estimated or historical data, while the FL model simulates schedule uncertainties from estimated operator behaviour (e.g. operator tiredness and working schedules), and in a municipal drinking water distribution system the notion of "operator" breaks down.
5. Stochastic methods can account for uncertainties that are difficult to model with FL. The FL HPW model does not account for dispensed-volume uncertainty, as there appears to be no reasonable way to capture it with FL, whereas the stochastic model includes it.
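As an illustration of the stochastic approach favoured above, a Monte Carlo model draws usage events from assumed distributions and reads off tail quantiles of demand, which is precisely the "peaks rather than averages" quantity of interest. A minimal sketch with invented distributions and parameters, not the thesis's model:

# Minimal Monte Carlo sketch of spare-capacity assessment for a water utility.
# All distributions and parameters are invented for illustration; a real HPW
# model would use estimated or historical schedule and volume data.
import random

def simulate_day(n_users=8):
    demand = [0.0] * 24  # hourly demand, litres
    for _ in range(n_users):
        start = random.randint(6, 18)             # schedule uncertainty
        duration = random.randint(1, 3)
        volume = random.lognormvariate(4.0, 0.5)  # volume uncertainty per hour
        for h in range(start, min(start + duration, 24)):
            demand[h] += volume
    return max(demand)                            # daily peak demand

random.seed(1)
peaks = sorted(simulate_day() for _ in range(10_000))
print("mean peak:", sum(peaks) / len(peaks))
print("95th percentile peak:", peaks[int(0.95 * len(peaks))])

The 95th-percentile output is the kind of extreme-event statistic that, per the abstract, averages (and deterministic or FL runs) do not expose.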
Abstract:
Instrumental music education is provided as an extra-curricular activity, on a fee-paying basis, by a small number of Education and Training Boards, formerly Vocational Education Committees (ETBs/VECs), through specialist instrumental Music Services. Although all citizens’ taxes fund the public music provision, participation in instrumental music during school-going years is predominantly accessed by middle-class families. A series of semi-structured interviews sought to access the perceptions and beliefs of instrumental music education practitioners (N=14) in seven publicly funded music services in Ireland. Canonical dispositions were interrogated, and emergent themes were coded and analysed in a grounded-theory process. The study draws on Foucault’s conception of discourse as a lens with which to map professional practices, and utilises Bourdieu’s analysis of the reproduction of social advantage to examine cultural assumptions that may serve to privilege middle-class cultural choice to the exclusion of other social groups. The findings show that, within the Music Services, the aesthetic and pedagogic discourses of the 19th-century Conservatory system exert a hegemonic influence over policy and practice. An enduring ‘examination culture’ located within the Western art music tradition determines pedagogy, musical genre and assessment procedures. Ideologies of musical taste and value reinforce the more tangible boundaries of fee payment and restricted availability as barriers to access. Practitioners are aware of a status duality whereby instrumental teachers working as visiting specialists in primary schools experience a conflict between specialist and generalist educational aims. Nevertheless, study participants consistently advocated siting the point of access to instrumental music education in primary schools as the most equitable arrangement. This study addresses a knowledge gap in the sociology of music education in Ireland. It provides a framework for rethinking instrumental music education as equitable in-school musical participation. The conclusions of the study suggest starting points for further educational research and may provide key ‘prompts’ for curriculum planning.
Abstract:
Consensus HIV-1 genes can decrease the genetic distances between candidate immunogens and field virus strains. To ensure the functionality and optimal presentation of immunologic epitopes, we generated two group-M consensus env genes that contain variable regions taken either from a wild-type B/C recombinant virus isolate (CON6) or from minimal consensus elements (CON-S) in the V1, V2, V4 and V5 regions. C57BL/6 and BALB/c mice were primed twice with CON6, CON-S, or subtype-control (92UG37_A and HXB2/Bal_B) DNA and boosted with recombinant vaccinia virus (rVV). Mean antibody titers against 92UG37_A, 89.6_B, 96ZM651_C, CON6 and CON-S Env proteins were determined. Both CON6 and CON-S induced higher mean antibody titers against several of the proteins than did the subtype controls; however, no significant differences in mean antibody titers were found between animals immunized with CON6 and CON-S. Cellular immune responses were measured using five complete Env overlapping-peptide sets: subtype A (92UG37_A), subtype B (MN_B, 89.6_B and SF162_B) and subtype C (Chn19_C). The intensity of the induced cellular responses was measured using pooled Env peptides; T-cell epitopes were identified using matrix peptide pools and individual peptides. No significant differences in T-cell response intensities were noted between CON6- and CON-S-immunized BALB/c and C57BL/6 mice. In BALB/c mice, ten and eight non-overlapping T-cell epitopes were identified for CON6 and CON-S, respectively, whereas eight were identified for 92UG37_A and HXB2/Bal_B. In C57BL/6 mice, nine and six non-overlapping T-cell epitopes were identified after immunization with CON6 and CON-S, respectively, whereas only four and three were identified for 92UG37_A and HXB2/Bal_B, respectively. Combined across both mouse strains, 18 epitopes were identified. The group-M artificial consensus env genes CON6 and CON-S were equally immunogenic, in breadth and intensity, in inducing humoral and cellular immune responses.
Abstract:
We consider the problem of variable selection in regression modeling in high-dimensional spaces where there is known structure among the covariates. This is an unconventional variable selection problem for two reasons: (1) the dimension of the covariate space is comparable to, and often much larger than, the number of subjects in the study, and (2) the covariate space is highly structured, and in some cases it is desirable to incorporate this structural information into the model-building process. We approach this problem through the Bayesian variable selection framework, where we assume that the covariates lie on an undirected graph and formulate an Ising prior on the model space for incorporating structural information. Certain computational and statistical problems arise that are unique to such high-dimensional, structured settings, the most interesting being the phenomenon of phase transitions. We propose theoretical and computational schemes to mitigate these problems. We illustrate our methods on two different graph structures: the linear chain and the regular graph of degree k. Finally, we use our methods to study a specific application in genomics: the modeling of transcription factor binding sites in DNA sequences.
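For intuition, an Ising prior over inclusion indicators gamma rewards agreement along graph edges, so a Gibbs scan over gamma favours models in which linked covariates enter or leave together. The sketch below uses a linear chain with an invented marginal-likelihood stub and illustrative weights; it is not the paper's implementation:

# Sketch of Bayesian variable selection with an Ising prior on a linear chain.
# log p(gamma) = a * sum_j gamma_j + b * sum_{edges} gamma_i * gamma_j + const.
# The likelihood stub and the weights a, b are illustrative assumptions.
import math, random

p = 10
edges = [(j, j + 1) for j in range(p - 1)]  # linear-chain covariate graph
a, b = -1.0, 1.5                            # sparsity vs. smoothness weights

def log_post(gamma):
    lp = a * sum(gamma) + b * sum(gamma[i] * gamma[j] for i, j in edges)
    # Stub log marginal likelihood: pretend covariates 3..6 are truly relevant.
    lp += sum(2.0 if 3 <= j <= 6 else -0.5 for j, g in enumerate(gamma) if g)
    return lp

random.seed(0)
gamma = [0] * p
for sweep in range(200):
    for j in range(p):                      # Gibbs update of each indicator
        g1, g0 = gamma[:], gamma[:]
        g1[j], g0[j] = 1, 0
        p1 = 1.0 / (1.0 + math.exp(log_post(g0) - log_post(g1)))
        gamma[j] = 1 if random.random() < p1 else 0
print(gamma)  # linked relevant covariates tend to be included as a block

Raising b strengthens the coupling along the chain; past a critical value the indicators flip nearly as one block, a small-scale version of the phase-transition behaviour the abstract mentions.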
Abstract:
This paper studies the multiplicity-correction effect of standard Bayesian variable-selection priors in linear regression. Our first goal is to clarify when, and how, multiplicity correction happens automatically in Bayesian analysis, and to distinguish this correction from the Bayesian Ockham's-razor effect. Our second goal is to contrast empirical-Bayes and fully Bayesian approaches to variable selection through examples, theoretical results and simulations. Considerable differences between the two approaches are found. In particular, we prove a theorem that characterizes a surprising asymptotic discrepancy between fully Bayes and empirical Bayes. This discrepancy arises from a different source than the failure to account for hyperparameter uncertainty in the empirical-Bayes estimate. Indeed, even at the extreme, when the empirical-Bayes estimate converges asymptotically to the true variable-inclusion probability, the potential for a serious difference remains.
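The automatic multiplicity correction can be seen in the standard beta-binomial calculation: putting a prior on the common inclusion probability, instead of fixing it, makes the prior cost of adding a variable grow with the number of candidates. A sketch under a uniform hyperprior (a textbook special case, not necessarily the paper's exact prior): with \gamma \in \{0,1\}^m, k = \sum_j \gamma_j, and p(\gamma \mid \pi) = \pi^{k}(1-\pi)^{m-k},

\begin{equation*}
  p(\gamma) = \int_0^1 \pi^{k}(1-\pi)^{m-k}\,d\pi
            = \frac{k!\,(m-k)!}{(m+1)!}
            = \frac{1}{(m+1)\binom{m}{k}},
\end{equation*}

so the prior mass of any particular k-variable model shrinks as the number of candidate variables m grows. An empirical-Bayes analysis that plugs in a point estimate \hat{\pi} involves no such integral, which is one way to see how the two approaches can come apart.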
Abstract:
Antigenically variable RNA viruses are significant contributors to the burden of infectious disease worldwide. One reason for their ubiquity is their ability to escape herd immunity through rapid antigenic evolution and thereby to reinfect previously infected hosts. However, the ways in which these viruses evolve antigenically are highly diverse. Some have only limited diversity in the long run, with every emergence of a new antigenic variant coupled to the replacement of the older variant. Other viruses rapidly accumulate antigenic diversity over time. Still others exhibit dynamics that can be considered evolutionary intermediates between these two extremes. Here, we present a theoretical framework that aims to understand these differences in evolutionary patterns by considering a virus's epidemiological dynamics in a given host population. Our framework, based on a dimensionless number, probabilistically anticipates patterns of viral antigenic diversification and thereby quantifies a virus's evolutionary potential. It is therefore similar in spirit to the basic reproduction number, the well-known dimensionless number that quantifies a pathogen's reproductive potential. We further outline how our theoretical framework can be applied to empirical viral systems, using influenza A/H3N2 as a case study. We end with predictions of our framework and with the work that remains to be done to further integrate viral evolutionary dynamics with disease ecology.
Abstract:
Detailed phenotypic characterization of B cell subpopulations is of utmost importance for the diagnosis and management of humoral immunodeficiencies, as they are used for the classification of common variable immunodeficiencies. Since age-specific reference values remain scarce in the literature, we analysed by flow cytometry the proportions and absolute values of total, memory, switched memory and CD21(-/low) B cells in blood samples from 168 healthy children (1 day to 18 years), with special attention to the different subpopulations of CD21(low) B cells. The percentages of total memory B cells and their subsets increased significantly up to 5-10 years of age. In contrast, the percentages of immature CD21(-) B cells and of immature transitional CD21(low)CD38(hi) B cells decreased progressively with age, whereas the percentage of CD21(low)CD38(low) B cells remained stable during childhood. Our data stress the importance of age-specific reference values for the correct interpretation of B cell subsets in children as a diagnostic tool in immunodeficiencies.
Abstract:
In analysing the mathematical discourse found in a school algebra textbook, we observed that the domain of a variable is a concept present from the first appearance of expressions that generalize operations, relations and properties of the real numbers, yet one that is made explicit only in the study of the algebra of algebraic expressions. This concept, together with those of the reference set of an expression and of the solution set, plays a leading role in different contexts of school algebra, allowing it to act as an indispensable didactic variable in giving meaning to many other algebraic concepts.
Abstract:
Technology can serve as a didactic resource with which students examine situations and problems from multiple angles; in particular, dynamic software offers a useful medium for visualizing, exploring and constructing mathematical relationships. These supports change the working environment so profoundly that it is not enough to adapt classical mathematical situations: new situations must be conceived that take into account the potential and the constraints of the technology. This has given rise to the notion of instrumental genesis, which studies the construction carried out by a student when interacting with an artefact and turning it into an instrument, through a process of appropriation that makes it part of the student's mathematical activity; in this investigation that activity is related to the development of covariational thinking.
Abstract:
This paper focuses on the notion of variable as a basic element in the construction of concepts related to phenomena of variation and change. We start from the premise that the variable is not an idea constructed as an isolated object or process, but one that necessarily arises from the relation between at least two changing entities, one of which, in most cases, is time. We aim to study the variable along several dimensions: epistemological, cognitive, didactic and sociocultural, in order to identify which processes favour the construction of this notion and to characterize it.