976 results for Set-Valued Functions
Abstract:
Balance functions have been measured for charged-particle pairs, identified charged-pion pairs, and identified charged-kaon pairs in Au + Au, d + Au, and p + p collisions at √s_NN = 200 GeV at the Relativistic Heavy Ion Collider using the STAR detector. These balance functions are presented in terms of relative pseudorapidity, Δη, relative rapidity, Δy, relative azimuthal angle, Δφ, and invariant relative momentum, q_inv. For charged-particle pairs, the width of the balance function in terms of Δη scales smoothly with the number of participating nucleons, while HIJING and UrQMD model calculations show no dependence on centrality or system size. For charged-particle and charged-pion pairs, the balance function widths in terms of Δη and Δy are narrower in central Au + Au collisions than in peripheral collisions. The width for central collisions is consistent with thermal blast-wave models in which the balancing charges are highly correlated in coordinate space at breakup. This strong correlation might be explained by either delayed hadronization or limited diffusion during the reaction. Furthermore, the narrowing trend is consistent with the lower kinetic temperatures inherent to more central collisions. In contrast, the width of the balance function for charged-kaon pairs in terms of Δy shows little centrality dependence, which may signal a different production mechanism for kaons. The widths of the balance functions for charged pions and kaons in terms of q_inv narrow in central collisions compared to peripheral collisions, which may be driven by the change in the kinetic temperature.
Abstract:
The STAR Collaboration at the BNL Relativistic Heavy Ion Collider has measured two-pion correlation functions from p + p collisions at √s = 200 GeV. Spatial scales are extracted via a femtoscopic analysis of the correlations, though this analysis is complicated by the presence of strong nonfemtoscopic effects. Our results are put into the context of the world data set of femtoscopy in hadron-hadron collisions. We present the first direct comparison of femtoscopy in p + p and heavy ion collisions under identical analysis and detector conditions.
Abstract:
Background: The inference of gene regulatory networks (GRNs) from large-scale expression profiles is currently one of the most challenging problems in systems biology. Many techniques and models have been proposed for this task. However, it is generally not possible to recover the original topology with great accuracy, mainly due to the short time series data in the face of the high complexity of the networks and the intrinsic noise of the expression measurements. In order to improve the accuracy of entropy-based (mutual information) GRN inference methods, a new criterion function is proposed here. Results: In this paper we introduce the use of the generalized entropy proposed by Tsallis for the inference of GRNs from time series expression profiles. The inference process is based on a feature selection approach, and the conditional entropy is applied as the criterion function. In order to assess the proposed methodology, the algorithm is applied to recover the network topology from temporal expressions generated by an artificial gene network (AGN) model as well as from the DREAM challenge. The adopted AGN is based on theoretical models of complex networks, and its gene transfer function is obtained by random drawing from the set of possible Boolean functions, thus creating its dynamics. The DREAM time series data, on the other hand, present variation in network size, and their topologies are based on real networks; the dynamics are generated by continuous differential equations with noise and perturbation. By adopting both data sources, it is possible to estimate the average quality of the inference with respect to different network topologies, transfer functions and network sizes. Conclusions: A remarkable improvement in accuracy was observed in the experimental results, with the non-Shannon entropy reducing the number of false connections in the inferred topology.
The best value of the free parameter of the Tsallis entropy was on average in the range 2.5 ≤ q ≤ 3.5 (hence, subextensive entropy), which opens new perspectives for GRN inference methods based on information theory and for investigation of the nonextensivity of such networks. The inference algorithm and criterion function proposed here were implemented and included in the DimReduction software, which is freely available at http://sourceforge.net/projects/dimreduction and http://code.google.com/p/dimreduction/.
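The criterion function described above can be illustrated with a short Python sketch that computes the Tsallis entropy and the mean conditional Tsallis entropy of a discrete target given a candidate predictor. The function names and the simple frequency-based probability estimates are illustrative assumptions, not the DimReduction implementation.

```python
import numpy as np
from collections import Counter

def tsallis_entropy(probs, q):
    """Tsallis entropy S_q = (1 - sum p_i^q) / (q - 1).

    Recovers the Shannon entropy in the limit q -> 1."""
    probs = np.asarray(probs, dtype=float)
    probs = probs[probs > 0]
    if abs(q - 1.0) < 1e-12:
        return float(-np.sum(probs * np.log(probs)))
    return float((1.0 - np.sum(probs ** q)) / (q - 1.0))

def conditional_tsallis_entropy(x, y, q):
    """Mean conditional Tsallis entropy S_q(Y | X) for discrete samples.

    Each predictor state is weighted by its observed frequency; a lower
    value means the candidate predictor x better determines the target y,
    which is the feature-selection criterion."""
    x, y = list(x), list(y)
    n = len(x)
    h = 0.0
    for xv, nx in Counter(x).items():
        ys = [yi for xi, yi in zip(x, y) if xi == xv]
        py = np.array(list(Counter(ys).values()), dtype=float) / nx
        h += (nx / n) * tsallis_entropy(py, q)
    return h
```

When x fully determines y the criterion is zero, and it grows toward the unconditional entropy of y as the predictor becomes uninformative.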
Abstract:
The problem of semialgebraic Lipschitz classification of quasihomogeneous polynomials on a Hölder triangle is studied. For this problem, the "moduli" are described completely in certain combinatorial terms.
Abstract:
The low-lying doublet and quartet electronic states of the species SeF correlating with the first dissociation channel are investigated theoretically at a high level of electron-correlation treatment, namely the complete active space self-consistent field/multireference single and double excitation configuration interaction (CASSCF/MRSDCI) approach, using a quintuple-zeta quality basis set that includes a relativistic effective core potential for the selenium atom. Potential energy curves for the (Λ+S) states and the corresponding spectroscopic properties are derived, which allows for an unambiguous assignment of the only experimentally known spectrum as due to a spin-forbidden X ²Π-a ⁴Σ⁻ transition, and not an A ²Π-X ²Π transition as assumed so far. For the bound excited doublets, as yet unknown experimentally, this study is the first theoretical characterization of their spectroscopic properties. The spin-orbit coupling constant function for the X ²Π state is also derived, as well as the spin-orbit coupling matrix element between the X ²Π and a ⁴Σ⁻ states. Dipole moment functions and vibrationally averaged dipole moments show SeF to be a very polar species. An overview of the lowest-lying spin-orbit (Ω) states completes this description. (C) 2010 American Institute of Physics. [doi: 10.1063/1.3426315]
Abstract:
In this paper, we study the generic hyperbolicity of equilibria of a reaction-diffusion system with respect to nonlinear terms in the set of C²-functions equipped with the Whitney topology. To accomplish this, we combine Baire's Lemma and the usual Transversality Theorem. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
The taxonomy of the N₂-fixing bacteria belonging to the genus Bradyrhizobium is still poorly refined, mainly due to conflicting results obtained from the analysis of phenotypic and genotypic properties. This paper presents an application of a method aimed at identifying possible new clusters within a Brazilian collection of 119 Bradyrhizobium strains showing phenotypic characteristics of B. japonicum and B. elkanii. Stability was studied as a function of the number of restriction enzymes used in the RFLP-PCR analysis of three ribosomal regions, with three restriction enzymes per region. The method proposed here uses clustering algorithms with distances calculated by average-linkage clustering, and the stability analysis is performed by introducing perturbations through sub-sampling techniques. The method proved effective in grouping the species B. japonicum and B. elkanii. Furthermore, two new clusters were clearly defined, indicating possible new species, as well as sub-clusters within each detected cluster. (C) 2008 Elsevier B.V. All rights reserved.
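A minimal sketch of the subsampling-based stability analysis described above, assuming SciPy's average-linkage implementation and a simple pairwise co-membership agreement score; both choices are illustrative, not the authors' exact procedure.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def cluster_labels(X, k):
    """Average-linkage hierarchical clustering of rows of X, cut into k clusters."""
    Z = linkage(pdist(X), method="average")
    return fcluster(Z, t=k, criterion="maxclust")

def stability(X, k, n_rounds=50, frac=0.8, seed=0):
    """Cluster stability under subsampling perturbations.

    Returns the average agreement, over random subsamples, between the
    full-data partition and the subsample partition, measured on pairs of
    points: a pair either stays together in both partitions or is
    separated in both. Values near 1 indicate a stable clustering."""
    rng = np.random.default_rng(seed)
    n = len(X)
    full = cluster_labels(X, k)
    scores = []
    for _ in range(n_rounds):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        sub = cluster_labels(X[idx], k)
        agree = total = 0
        for i in range(len(idx)):
            for j in range(i + 1, len(idx)):
                same_full = full[idx[i]] == full[idx[j]]
                same_sub = sub[i] == sub[j]
                agree += int(same_full == same_sub)
                total += 1
        scores.append(agree / total)
    return float(np.mean(scores))
```

For two well-separated groups of strains the score stays near 1 for k = 2, while over-partitioned or unstable clusterings drop noticeably, which is the signal used to accept a cluster as a candidate new group.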
Abstract:
The general flowshop scheduling problem is a production problem in which a set of n jobs must be processed with an identical flow pattern on m machines. In permutation flowshops the sequence of jobs is the same on all machines. A significant research effort has been devoted to sequencing jobs in a flowshop so as to minimize the makespan. This paper describes the application of a Constructive Genetic Algorithm (CGA) to makespan minimization in flowshop scheduling. The CGA was proposed recently as an alternative to traditional GA approaches, particularly for evaluating schemata directly. The population, initially formed only by schemata, evolves under recombination into a population of well-adapted structures (schemata instantiation). The implemented CGA is based on the classic NEH heuristic and on a local search heuristic used to define the fitness functions. The parameters of the CGA are calibrated using a Design of Experiments (DOE) approach. The computational results are compared against other successful algorithms from the literature on Taillard's well-known standard benchmark. The computational experience shows that this innovative CGA approach provides competitive results for flowshop scheduling problems. (C) 2007 Elsevier Ltd. All rights reserved.
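The NEH heuristic on which the CGA is built can be sketched as follows. This is the standard NEH insertion procedure for permutation flowshops, not the CGA itself: jobs are ordered by decreasing total processing time and inserted one at a time at the position that minimizes the partial makespan.

```python
def makespan(seq, p):
    """Completion time of the last job on the last machine for a
    permutation flowshop; p[j][k] is the time of job j on machine k."""
    m = len(p[0])
    c = [0] * m  # running completion times per machine
    for j in seq:
        c[0] += p[j][0]
        for k in range(1, m):
            # a job starts on machine k when both the machine and the
            # job's previous operation are free
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def neh(p):
    """NEH constructive heuristic: sort jobs by decreasing total work,
    then insert each at the position minimizing the partial makespan."""
    order = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = []
    for j in order:
        best = None
        for pos in range(len(seq) + 1):
            cand = seq[:pos] + [j] + seq[pos:]
            cm = makespan(cand, p)
            if best is None or cm < best[0]:
                best = (cm, cand)
        seq = best[1]
    return seq, best[0]
```

In the CGA setting a constructive heuristic of this kind supplies good initial structures, while local search and recombination refine them; the hybridization details here are only schematic.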
Abstract:
Although the Hertz theory is not applicable to the analysis of the indentation of elastic-plastic materials, it is common practice to incorporate the concept of the indenter/specimen combined modulus to account for indenter deformation. The appropriateness of using the reduced modulus to incorporate the effect of indenter deformation in the analysis of indentation with spherical indenters was assessed. The analysis, based on finite element simulations, considered four values of the ratio of the elastic modulus of the indented material to that of the diamond indenter, E/E_i (0, 0.04, 0.19, 0.39), four values of the ratio of the elastic reduced modulus to the initial yield strength, E_r/Y (0, 10, 20, 100), and two values of the ratio of the indenter radius to the maximum total displacement, R/δ_max (3, 10). Indenter deformation effects are better accounted for by the reduced modulus if the indented material behaves entirely elastically. In this case, identical load-displacement (P-δ) curves are obtained with rigid and elastic spherical indenters for the same elastic reduced modulus. Changes in the ratio E/E_i from 0 to 0.39 resulted in variations lower than 5% in the dimensionless load functions, lower than 3% in the contact area, A_c, and lower than 5% in the ratio H/E_r. However, deformation of the elastic indenter changed the actual radius of contact, even in the indentation of elastic materials. Even though the dimensionless load functions showed only a slight increase with the ratio E/E_i, the hardening coefficient and the yield strength may be slightly overestimated when algorithms based on rigid indenters are used. For the unloading curves, the ratio δ_e/δ_max, where δ_e is the zero-load intercept of a straight line with slope S passing through the point (P_max, δ_max), varied by less than 5% with the ratio E/E_i.
Similarly, the relationship between the reduced modulus and the unloading indentation curve, expressed by Sneddon's equation, did not reveal any need for correction with the ratio E/E_i. The parameter of the indentation curve most affected by indenter deformation was the ratio between the residual indentation depth after complete unloading and the maximum indenter displacement, δ_r/δ_max (up to 26%), but this variation did not significantly decrease the ability to estimate hardness and elastic modulus from the ratio of the residual indentation depth to the maximum indentation depth, h_r/h_max. In general, the results confirm the convenience of using the reduced modulus in spherical instrumented indentation tests.
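The reduced (combined) modulus referred to throughout is the standard Hertzian combination of the specimen and indenter elastic constants, 1/E_r = (1 - ν²)/E + (1 - ν_i²)/E_i. A small Python helper makes the combination explicit; the default diamond values (in GPa) are typical handbook figures used here for illustration, not values taken from the paper.

```python
def reduced_modulus(E, nu, E_i=1141.0, nu_i=0.07):
    """Elastic reduced modulus E_r for an indenter/specimen pair.

    1/E_r = (1 - nu^2)/E + (1 - nu_i^2)/E_i
    E, E_i : elastic moduli of specimen and indenter (same units)
    nu, nu_i : the corresponding Poisson ratios
    Defaults approximate a diamond indenter (illustrative values, GPa)."""
    return 1.0 / ((1.0 - nu ** 2) / E + (1.0 - nu_i ** 2) / E_i)
```

In the rigid-indenter limit (E_i much larger than E) the expression reduces to E_r = E/(1 - ν²), which is the case the abstract identifies as the one where the reduced modulus accounts for indenter deformation exactly.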
Abstract:
We examine the representation of judgements of stochastic independence in probabilistic logics. We focus on a relational logic where (i) judgements of stochastic independence are encoded by directed acyclic graphs, and (ii) probabilistic assessments are flexible in the sense that they are not required to specify a single probability measure. We discuss issues of knowledge representation and inference that arise from our particular combination of graphs, stochastic independence, logical formulas and probabilistic assessments. (C) 2007 Elsevier B.V. All rights reserved.
Abstract:
This paper presents the design of a low-cost accessible digital television set-top box. The set-top box was designed and tested for the international ISDB-T system, adopting solutions that provide accessible digital television services in the simplest class of digital television receiver. The accessible set-top box was evaluated with regard to the processing and memory impacts of providing the features for accessible services. The work also presents an analysis of the bandwidth consumption of the access services.
Abstract:
This paper considers two aspects of the nonlinear H∞ control problem: the use of weighting functions for performance and robustness improvement, as in the linear case, and the development of a successive Galerkin approximation method for the solution of the Hamilton-Jacobi-Isaacs equation that arises in the output-feedback case. Designs of nonlinear H∞ controllers obtained by the well-established Taylor approximation and by the proposed Galerkin approximation method, applied to a magnetic levitation system, are presented for comparison purposes.
Abstract:
This paper presents two strategies for upgrading set-up generation systems for tandem cold mills. Even though these mills have been modernized mainly because of quality requirements, their upgrades may be made with the intent of replacing pre-calculated reference tables. In this case, the Bryant and Osborn mill model without adaptive techniques is proposed. For a more demanding modernization, the Bland and Ford model including adaptation is recommended, although it requires more complex computational hardware. The advantages and disadvantages of the two systems are compared and discussed, and experimental results obtained from an industrial cold mill are shown.
Abstract:
This paper deals with the expected discounted continuous control of piecewise deterministic Markov processes (PDMPs) using a singular perturbation approach for dealing with rapidly oscillating parameters. The state space of the PDMP is written as the product of a finite set and a subset of the Euclidean space ℝⁿ. The discrete part of the state, called the regime, characterizes the mode of operation of the physical system under consideration and is supposed to have a fast (associated with a small parameter ε > 0) and a slow behavior. Using an approach similar to that developed in Yin and Zhang (Continuous-Time Markov Chains and Applications: A Singular Perturbation Approach, Applications of Mathematics, vol. 37, Springer, New York, 1998, Chaps. 1 and 3), the idea in this paper is to reduce the number of regimes by considering an averaged model in which the regimes within the same class are aggregated through the quasi-stationary distribution, so that the different states in this class are replaced by a single one. The main goal is to show that the value function of the control problem for the system driven by the perturbed Markov chain converges to the value function of this limit control problem as ε goes to zero. This convergence is obtained by, roughly speaking, showing that the infimum and supremum limits of the value functions satisfy two optimality inequalities as ε goes to zero. This enables us to show the result by invoking a uniqueness argument, without needing any kind of Lipschitz continuity condition.
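The aggregation step described above, in which the regimes of a class are averaged through the quasi-stationary distribution, can be sketched numerically. This minimal example is illustrative, not the paper's construction: it computes the stationary distribution of a fast-chain generator restricted to one class and uses it to average a regime-dependent rate into a single value for the aggregated regime.

```python
import numpy as np

def stationary_distribution(Q):
    """Stationary distribution nu of a conservative generator matrix Q
    (rows sum to zero): solve nu @ Q = 0 subject to sum(nu) = 1."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])  # append the normalization row
    b = np.zeros(n + 1)
    b[-1] = 1.0
    nu, *_ = np.linalg.lstsq(A, b, rcond=None)
    return nu

def aggregate_rate(nu, rates):
    """Replace a regime-dependent quantity by its nu-weighted average,
    as in the averaged (limit) model with a single aggregated regime."""
    return float(nu @ np.asarray(rates, dtype=float))
```

For a two-regime class with generator Q = [[-1, 1], [2, -2]] the quasi-stationary weights are (2/3, 1/3), so a rate taking values (3, 6) across the regimes is replaced by 4 in the averaged model.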
Abstract:
Recently, the development of industrial processes has brought about the emergence of technologically complex systems. This development has generated the need for research into mathematical techniques capable of dealing with project complexity and validation. Fuzzy models have received particular attention in the area of nonlinear system identification and analysis due to their capacity to approximate nonlinear behavior and deal with uncertainty. A fuzzy rule-based model suitable for the approximation of many systems and functions is the Takagi-Sugeno (TS) fuzzy model. TS fuzzy models are nonlinear systems described by a set of if-then rules which give local linear representations of an underlying system. Such models can approximate a wide class of nonlinear systems. In this paper, a performance analysis of a system based on a TS fuzzy inference system for the calibration of electronic compass devices is considered. The contribution of the evaluated TS fuzzy inference system is to reduce the error obtained in data acquisition from a digital electronic compass. For reliable operation of the TS fuzzy inference system, adequate error measurements must be taken, and the error noise must be filtered before the TS fuzzy inference system is applied. The proposed method demonstrated an effectiveness of 57% in reducing the total error in the tests considered. (C) 2011 Elsevier Ltd. All rights reserved.
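A minimal sketch of TS fuzzy inference for a scalar input, using Gaussian memberships and first-order local linear models; all names, membership shapes and parameter values are illustrative assumptions, not the calibration system evaluated in the paper.

```python
import numpy as np

def ts_fuzzy(x, centers, widths, coeffs):
    """First-order Takagi-Sugeno inference for a scalar input x.

    Rule i fires with Gaussian membership mu_i(x) centered at centers[i]
    with spread widths[i]; its consequent is the local linear model
    a_i * x + b_i, with (a_i, b_i) = coeffs[i]. The output is the
    membership-weighted average of the local models."""
    x = float(x)
    mu = np.exp(-((x - np.asarray(centers, dtype=float)) ** 2)
                / (2.0 * np.asarray(widths, dtype=float) ** 2))
    local = np.asarray(coeffs, dtype=float) @ np.array([x, 1.0])
    return float(np.sum(mu * local) / np.sum(mu))
```

Near a rule center the output follows that rule's local linear model, and between centers the models are blended smoothly, which is how a TS system can approximate a nonlinear compass-error curve from a handful of locally linear corrections.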