920 results for: Sub-registry. Empirical Bayesian estimator. General equation. Balancing adjustment factor
Abstract:
There are many arguments in the environmental management literature stating that companies with significant environmental performance tend to be more competitive, because environmental management tends to generate positive effects on their operational performance. Although such arguments are widely accepted, there is still little empirical evidence of this relationship in manufacturing contexts that have rarely been studied thus far, such as those of developing countries. The paper aims to discuss these issues. Design/methodology/approach – With the objective of testing the positive relationship between environmental performance and operational performance, this research presents data from a survey conducted with 75 ISO 9001-certified Brazilian companies. The data were analyzed by means of structural equation modeling. Findings – The paper found that environmental management indeed relates positively and significantly, with a large effect, to the operational performance of companies.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Wavelength-routed networks (WRNs) are very promising candidates for next-generation Internet and telecommunication backbones. In such a network, optical-layer protection is of paramount importance due to the risk of losing large amounts of data under a failure. To protect the network against this risk, service providers usually provide a pair of risk-independent working and protection paths for each optical connection. However, the investment made in optical-layer protection increases network cost. To reduce capital expenditure, service providers need to utilize their network resources efficiently. Among the existing approaches, shared-path protection has proven to be practical and cost-efficient [1]. In shared-path protection, several protection paths can share a wavelength on a fiber link if their working paths are risk-independent. In real-world networks, provisioning is usually implemented without knowledge of future network resource utilization. As the network changes with the addition and deletion of connections, network utilization becomes sub-optimal. Reconfiguration, that is, the re-provisioning of existing connections, is an attractive solution for closing the gap between the current network utilization and its optimal value [2]. In this paper, we propose a new shared-protection-path reconfiguration approach. Unlike some previous reconfiguration approaches that alter the working paths, our approach only changes protection paths; it therefore does not interfere with the ongoing services on the working paths and is risk-free. Previous studies have verified the benefits arising from the reconfiguration of existing connections [2] [3] [4]. Most of them aim at minimizing the total number of used wavelength-links or ports.
However, this objective does not directly relate to cost saving because minimizing the total network resource consumption does not necessarily maximize the capability of accommodating future connections. As a result, service providers may still need to pay for early network upgrades. Alternatively, our proposed shared-protection-path reconfiguration approach is based on a load-balancing objective, which minimizes the network load distribution vector (LDV, see Section 2). This new objective is designed to postpone network upgrades, thus bringing extra cost savings to service providers. In other words, by using the new objective, service providers can establish as many connections as possible before network upgrades, resulting in increased revenue. We develop a heuristic load-balancing (LB) reconfiguration approach based on this new objective and compare its performance with an approach previously introduced in [2] and [4], whose objective is minimizing the total network resource consumption.
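The two ideas at the core of this abstract can be sketched briefly. The following is an illustrative sketch only, not the paper's algorithm: the sharing rule of shared-path protection, and a load distribution vector (LDV) compared lexicographically as a load-balancing objective. All function names are assumptions for illustration.

```python
# Sketch of two concepts from the abstract (names are hypothetical):
# 1) Shared-path protection: two protection paths may share a wavelength
#    on a link only if their working paths are risk-disjoint.
# 2) Load distribution vector (LDV): link loads sorted in non-increasing
#    order; a load-balancing objective prefers the lexicographically
#    smaller vector, which postpones the need for network upgrades.

def can_share_wavelength(working_risks_a, working_risks_b):
    """Protection capacity may be shared iff the working paths have no
    common risk (e.g. no shared fiber span or conduit)."""
    return not (set(working_risks_a) & set(working_risks_b))

def load_distribution_vector(link_loads):
    """Link loads sorted in non-increasing order."""
    return sorted(link_loads, reverse=True)

def better_balanced(ldv_a, ldv_b):
    """True if ldv_a is lexicographically smaller, i.e. better balanced."""
    return ldv_a < ldv_b
```

For instance, a network with loads [4, 4, 2] is considered better balanced than one with loads [5, 3, 2] even though both use the same total capacity, which is exactly why this objective differs from minimizing total resource consumption.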
Abstract:
In this paper we propose a hybrid hazard regression model with threshold stress, which includes the proportional hazards and accelerated failure time models as particular cases. To express the behavior of lifetimes, the generalized gamma distribution is assumed, and an inverse power law model with a threshold stress is considered. For parameter estimation we develop a sampling-based posterior inference procedure based on Markov chain Monte Carlo techniques. We assume proper but vague priors for the parameters of interest. A simulation study investigates the frequentist properties of the proposed estimators obtained under the assumption of vague priors. Further, some discussions on model selection criteria are given. The methodology is illustrated on simulated and real lifetime data sets.
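The inverse power law with threshold stress mentioned above is commonly written as follows; the notation here (constant C, exponent p, threshold x_0) is assumed for illustration and is not taken from the paper, where the relation enters the lifetime model through its scale parameter.

```latex
% Inverse power law life-stress relation with threshold stress x_0
% (illustrative notation): the characteristic lifetime scale decreases
% as the stress x exceeds the threshold x_0.
\mu(x) = \frac{C}{(x - x_0)^{p}}, \qquad x > x_0,\quad C > 0,\quad p > 0.
```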
Abstract:
The present paper aims at contributing to a discussion, opened by several authors, on the proper equation of motion that governs the vertical collapse of buildings. The most striking and tragic example is that of the World Trade Center Twin Towers, in New York City, about 10 years ago. This is a very complex problem and, besides dynamics, the analysis involves several areas of knowledge in mechanics, such as structural engineering, materials science, and thermodynamics, among others. Therefore, this work is far from claiming to deal with the problem in its completeness, leaving aside, for example, discussions about the modeling of the resistive load to collapse. However, the following analysis, restricted to the study of motion, shows that the problem in question holds great similarity to the classic falling-chain problem, addressed in a number of different versions, such as the pioneering one by von Buquoy and the one by Cayley. Following previous works, a simple single-degree-of-freedom model is readdressed and conceptually discussed. The form of Lagrange's equation that leads to a proper equation of motion for the collapsing building is a general, extended dissipative form appropriate for systems with mass varying explicitly with position. The additional dissipative generalized force term present in this extended form of the Lagrange equation is shown to be derivable from a Rayleigh-like energy function. DOI: 10.1061/(ASCE)EM.1943-7889.0000453. (C) 2012 American Society of Civil Engineers.
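For a single coordinate x with mass m(x) depending explicitly on position, the extended dissipative form described in the abstract can be sketched as follows. The symbols and the sign convention are assumptions for illustration, not taken verbatim from the paper; the point is only that the extra generalized force is consistent with a Rayleigh-like function cubic in the velocity.

```latex
% Extended Lagrange equation with an additional generalized force
% (illustrative sketch, symbols assumed):
\frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot{x}}\right)
  - \frac{\partial L}{\partial x}
  = -\frac{1}{2}\,\frac{\partial m}{\partial x}\,\dot{x}^{2},
% which is derivable from a Rayleigh-like energy function
\mathcal{R} = \frac{1}{6}\,\frac{\partial m}{\partial x}\,\dot{x}^{3},
\qquad
-\frac{\partial \mathcal{R}}{\partial \dot{x}}
  = -\frac{1}{2}\,\frac{\partial m}{\partial x}\,\dot{x}^{2}.
```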
Abstract:
In accelerating dark energy models, estimates of the Hubble constant, H0, from the Sunyaev-Zel'dovich effect (SZE) and the X-ray surface brightness of galaxy clusters may depend on the matter content (Omega_M), the curvature (Omega_K) and the equation of state parameter w. In this article, by using a sample of 25 angular diameter distances of galaxy clusters described by the elliptical beta model obtained through the SZE/X-ray technique, we constrain H0 in the framework of a general LambdaCDM model (arbitrary curvature) and a flat XCDM model with a constant equation of state parameter w = p_x/rho_x. In order to avoid the use of priors on the cosmological parameters, we apply a joint analysis involving the baryon acoustic oscillations (BAO) and the CMB shift parameter signature. By taking into account the statistical and systematic errors of the SZE/X-ray technique, we obtain for the nonflat LambdaCDM model H0 = 74 (+5.0, -4.0) km s^-1 Mpc^-1 (1 sigma), whereas for a flat universe with constant equation of state parameter we find H0 = 72 (+5.5, -4.0) km s^-1 Mpc^-1 (1 sigma). By assuming that galaxy clusters are described by a spherical beta model, these results change to H0 = 6 (+8.0, -7.0) and H0 = 59 (+9.0, -6.0) km s^-1 Mpc^-1 (1 sigma), respectively. The results from the elliptical description are in good agreement with independent studies from the Hubble Space Telescope key project and recent estimates based on the Wilkinson Microwave Anisotropy Probe, thereby suggesting that the combination of these three independent phenomena provides an interesting method to constrain the Hubble constant. As an extra bonus, the adoption of the elliptical description is revealed to be a quite realistic assumption. Finally, by comparing these results with a recent determination for a flat LambdaCDM model using only the SZE/X-ray technique and BAO, we see that the geometry has a very weak influence on H0 estimates for this combination of data.
Abstract:
We present an analysis of observations made with the Arcminute Microkelvin Imager (AMI) and the Canada-France-Hawaii Telescope (CFHT) of six galaxy clusters in a redshift range of 0.16-0.41. The cluster gas is modelled using the Sunyaev-Zel'dovich (SZ) data provided by AMI, while the total mass is modelled using the lensing data from the CFHT. In this paper, we (i) find very good agreement between SZ measurements (assuming large-scale virialization and a gas-fraction prior) and lensing measurements of the total cluster masses out to r200; (ii) perform the first multiple-component weak-lensing analysis of A115; (iii) confirm the unusual separation between the gas and mass components in A1914; and (iv) jointly analyse the SZ and lensing data for the relaxed cluster A611, confirming our use of a simulation-derived mass-temperature relation for parametrizing measurements of the SZ effect.
Abstract:
This paper studies the average control problem of discrete-time Markov Decision Processes (MDPs for short) with general state space, Feller transition probabilities, and possibly non-compact control constraint sets A(x). Two hypotheses are considered: either the cost function c is strictly unbounded, or the multifunctions A_r(x) = {a in A(x) : c(x, a) <= r} are upper-semicontinuous and compact-valued for each real r. For these two cases we provide new results for the existence of a solution to the average-cost optimality equality and inequality using the vanishing discount approach. We also study the convergence of the policy iteration approach under these conditions. It should be pointed out that we do not make any assumptions regarding the convergence and the continuity of the limit function generated by the sequence of relative differences of the alpha-discounted value functions and the Poisson equations, as often encountered in the literature. (C) 2012 Elsevier Inc. All rights reserved.
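The vanishing discount approach can be illustrated on a toy finite MDP, keeping in mind that the paper treats general (Borel) state spaces where this is far from trivial. The transition matrix and costs below are made up for illustration; as the discount factor alpha approaches 1, the normalized value (1 - alpha) * V_alpha approximates the optimal average cost, and the relative differences V_alpha(x) - V_alpha(x0) play the role of the function in the average-cost optimality equation.

```python
import numpy as np

# Toy 2-state, 2-action MDP (all numbers hypothetical).
P = np.array([  # P[a, x, y] = transition probability x -> y under action a
    [[0.9, 0.1], [0.2, 0.8]],
    [[0.5, 0.5], [0.7, 0.3]],
])
c = np.array([  # c[a, x] = one-stage cost of action a in state x
    [1.0, 3.0],
    [2.0, 0.5],
])

def discounted_value(alpha, iters=5000):
    """Value iteration for the alpha-discounted cost criterion."""
    V = np.zeros(2)
    for _ in range(iters):
        Q = c + alpha * P @ V   # Q[a, x] = c(x, a) + alpha * E[V(next)]
        V = Q.min(axis=0)       # minimize over actions
    return V

# Vanishing discount: alpha -> 1.
for alpha in (0.9, 0.99, 0.999):
    V = discounted_value(alpha)
    gain = (1 - alpha) * V[0]   # approximates the optimal average cost
    h = V - V[0]                # relative difference of value functions
    print(alpha, gain, h)
```

The printed `gain` values stabilize as alpha increases, which is the numerical signature of the limit the vanishing discount argument formalizes.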
Abstract:
For any continuous baseline distribution G, Cordeiro and de Castro [G.M. Cordeiro and M. de Castro, A new family of generalized distributions, J. Statist. Comput. Simul. 81 (2011), pp. 883-898] proposed a new generalized distribution (denoted here with the prefix 'Kw-G' (Kumaraswamy-G)) with two extra positive parameters. They studied some of its mathematical properties and presented special sub-models. We derive a simple representation for the Kw-G density function as a linear combination of exponentiated-G distributions. Some new distributions are proposed as sub-models of this family, for example, the Kw-Chen [Z.A. Chen, A new two-parameter lifetime distribution with bathtub shape or increasing failure rate function, Statist. Probab. Lett. 49 (2000), pp. 155-161], Kw-XTG [M. Xie, Y. Tang, and T.N. Goh, A modified Weibull extension with bathtub failure rate function, Reliab. Eng. System Safety 76 (2002), pp. 279-285] and Kw-Flexible Weibull [M. Bebbington, C.D. Lai, and R. Zitikis, A flexible Weibull extension, Reliab. Eng. System Safety 92 (2007), pp. 719-726]. New properties of the Kw-G distribution are derived, including asymptotes, shapes, moments, moment generating function, mean deviations, Bonferroni and Lorenz curves, reliability, Renyi entropy and Shannon entropy. New properties of the order statistics are investigated. We discuss the estimation of the parameters by maximum likelihood. We provide two applications to real data sets and discuss a bivariate extension of the Kw-G distribution.
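The Kw-G construction is simple enough to sketch directly: given a baseline cdf G with density g and two extra positive parameters a and b, the family has cdf F(x) = 1 - (1 - G(x)^a)^b. The exponential baseline below is chosen only for illustration; any continuous G works.

```python
import math

def kw_g_cdf(x, a, b, G):
    """Kumaraswamy-G cdf: F(x) = 1 - (1 - G(x)^a)^b."""
    return 1.0 - (1.0 - G(x) ** a) ** b

def kw_g_pdf(x, a, b, G, g):
    """Kumaraswamy-G density: f(x) = a*b*g(x)*G(x)^(a-1)*(1 - G(x)^a)^(b-1)."""
    Gx = G(x)
    return a * b * g(x) * Gx ** (a - 1) * (1.0 - Gx ** a) ** (b - 1)

# Illustrative baseline: exponential distribution with rate lam.
lam = 1.0
G = lambda x: 1.0 - math.exp(-lam * x)
g = lambda x: lam * math.exp(-lam * x)
```

Note that with a = b = 1 the family reduces to the baseline G itself, which is one quick sanity check on an implementation.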
Abstract:
We propose a new general Bayesian latent class model for evaluating the performance of multiple diagnostic tests in situations in which no gold standard test exists, based on a computationally intensive approach. The modeling represents an interesting and suitable alternative to models with complex structures involving the general case of several conditionally independent diagnostic tests, covariates, and strata with different disease prevalences. Stratifying the population according to different disease prevalence rates does not add marked complexity to the modeling, but it makes the model more flexible and interpretable. To illustrate the proposed general model, we evaluate the performance of six diagnostic screening tests for Chagas disease, considering some epidemiological variables. Serology at the time of donation (negative, positive, inconclusive) was considered as a stratification factor in the model. The general model with stratification of the population performed better than its counterparts without stratification. The group formed by the testing laboratory Biomanguinhos FIOCRUZ-kit (c-ELISA and rec-ELISA) is the best option in the confirmation process, presenting a false-negative rate of 0.0002% under the serial scheme. Under this scheme, a donor can be regarded as healthy when both tests give negative results and as chagasic when both give positive results.
Abstract:
From microscopic models, a Langevin equation can, in general, be derived only as an approximation. Two possible conditions to validate this approximation are studied. One is, for a linear Langevin equation, that the frequency of the Fourier transform should be close to the natural frequency of the system. The other is by the assumption of "slow" variables. We test this method by comparison with an exactly soluble model and point out its limitations. We base our discussion on two approaches. The first is a direct, elementary treatment of Senitzky. The second is via a generalized Langevin equation as an intermediate step.
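The two objects the discussion moves between can be written down in standard notation; the symbols below are assumed for illustration rather than taken from the paper.

```latex
% A linear (Markovian) Langevin equation for a variable x with natural
% frequency omega_0 and damping gamma, driven by a noise xi(t):
m\ddot{x} + m\gamma\,\dot{x} + m\omega_{0}^{2}\,x = \xi(t),
% and the generalized Langevin equation used as an intermediate step,
% where the damping is replaced by a memory kernel K(t - t'):
m\ddot{x}(t) + m\int_{0}^{t} K(t - t')\,\dot{x}(t')\,dt'
  + m\omega_{0}^{2}\,x(t) = \xi(t).
```

The "slow variable" condition amounts to the kernel K decaying on a time scale much shorter than that of x, so the memory integral collapses to the local term gamma * dx/dt of the first equation.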
Abstract:
Background: The double burden of obesity and underweight is increasing in developing countries, and simple methods for the assessment of fat mass in children are needed. Aim: To develop and validate a new anthropometric prediction equation for the assessment of fat mass in children. Subjects and methods: Body composition was assessed in 145 children aged 9.8 +/- 1.3 (SD) years from Sao Paulo, Brazil, using dual-energy X-ray absorptiometry (DEXA) and skinfold measurements. The study sample was divided into development and validation sub-sets to develop a new prediction equation (PE) for fat mass (FM). Results: Using multiple linear regression analyses, the best equation for predicting FM (R^2 = 0.77) included body weight, triceps skinfold, height, gender and age as independent variables. When cross-validated, the new PE was valid in this sample (R^2 = 0.80), while previously published equations were not. Conclusion: The PE was more valid for Brazilian children than existing equations, but further studies are needed to assess the validity of this PE in other populations.
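The development/validation procedure can be sketched numerically. The abstract does not report the published coefficients, so the sketch below fits a linear model of the same form (FM ~ weight, triceps skinfold, height, sex, age) on synthetic data with made-up coefficients; only the workflow (fit on a development subset, compute R^2 on a held-out validation subset) mirrors the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 145                                  # sample size from the abstract
# Synthetic predictors (all values and units made up for illustration):
weight = rng.normal(35, 8, n)            # kg
triceps = rng.normal(12, 4, n)           # mm
height = rng.normal(140, 10, n)          # cm
sex = rng.integers(0, 2, n)              # 0/1
age = rng.normal(9.8, 1.3, n)            # years
# Synthetic "true" fat mass with hypothetical coefficients plus noise:
fm = (0.4 * weight + 0.3 * triceps - 0.05 * height + 1.0 * sex
      + 0.2 * age + rng.normal(0, 1.0, n))

X = np.column_stack([np.ones(n), weight, triceps, height, sex, age])
dev, val = slice(0, 100), slice(100, n)  # development / validation split

beta, *_ = np.linalg.lstsq(X[dev], fm[dev], rcond=None)  # develop the PE
pred = X[val] @ beta                                     # cross-validate
ss_res = np.sum((fm[val] - pred) ** 2)
ss_tot = np.sum((fm[val] - fm[val].mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print("validation R^2:", round(r2, 2))
```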
Abstract:
Background: Human respiratory syncytial virus (HRSV) is one of the major etiologic agents of respiratory tract infections among children worldwide. Methodology/Principal Findings: Here, through a comprehensive analysis of the two major HRSV groups A and B (n = 1983), which comprise several genotypes, we present a complex pattern of population dynamics of HRSV over a time period of 50 years (1956-2006). The circulation pattern of HRSV revealed a series of expansions and fluctuations of co-circulating lineages with a predominance of HRSV A. Positively selected amino acid substitutions of the G glycoprotein occurred upon population growth of GB3 with a 60-nucleotide insertion (GB3 Insert), while other genotypes acquired substitutions upon both population growth and decline, thus possibly reflecting a role for immune-selected epitopes linked to the traced substitution sites that may have important relevance for vaccine design. The analysis evidenced the co-circulation and predominance of distinct HRSV genotypes in Brazil and suggested a year-round presence of the virus. In Brazil, GA2 and GA5 were the main culprits of HRSV outbreaks until recently, when the GB3 Insert became highly prevalent. Using Bayesian methods, we determined the dispersal patterns of genotypes through several inferred migratory routes. Conclusions/Significance: Genotypes spread across continents and between neighboring areas. Crucially, genotypes also remained in any given region for extended periods, independent of seasonal outbreaks, possibly maintained by re-infection of the general population.
Abstract:
In savannah and tropical grasslands, which account for 60% of grasslands worldwide, a large share of ecosystem carbon is located below ground due to high root:shoot ratios. Temporal variations in soil CO2 efflux (R_S) were investigated in a grassland of coastal Congo over two years. The objectives were (1) to identify the main factors controlling seasonal variations in R_S and (2) to develop a semi-empirical model describing R_S that includes a heterotrophic component (R_H) and an autotrophic component (R_A). Plant above-ground activity was found to exert strong control over soil respiration, since 71% of the seasonal variability in R_S was explained by the quantity of photosynthetically active radiation absorbed (APAR) by the grass canopy. We tested an additive model including a parameter enabling R_S to be partitioned into R_A and R_H. The assumptions underlying this model were that R_A mainly depends on the amount of photosynthates allocated below ground and that microbial and root activity is mostly controlled by soil temperature and soil moisture. The model provided a reasonably good prediction of seasonal variations in R_S (R^2 = 0.85), which varied between 5.4 µmol m^-2 s^-1 in the wet season and 0.9 µmol m^-2 s^-1 at the end of the dry season. The model was subsequently used to obtain annual estimates of R_S, R_A and R_H. In accordance with results reported for other tropical grasslands, we estimated that R_H accounted for 44% of R_S, which represents a flux similar to the amount of carbon brought annually to the soil by below-ground litter production. Overall, this study opens up prospects for simulating the carbon budget of tropical grasslands on a large scale using remotely sensed data. (C) 2012 Elsevier B.V. All rights reserved.
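An additive model of the kind described (R_S = R_A + R_H, with R_A driven by APAR and R_H by soil temperature and moisture) can be sketched as follows. The functional forms (a Q10 temperature response, a linear APAR term) and every parameter value below are illustrative assumptions, not the paper's fitted model.

```python
def autotrophic(apar, c=0.004):
    """R_A: proportional to photosynthate supply, proxied here by APAR
    (coefficient c is hypothetical)."""
    return c * apar

def heterotrophic(t_soil, moisture, r_ref=1.0, q10=2.0, t_ref=25.0):
    """R_H: Q10-type temperature response scaled by relative soil
    moisture in [0, 1] (all parameters hypothetical)."""
    return r_ref * q10 ** ((t_soil - t_ref) / 10.0) * moisture

def soil_co2_efflux(apar, t_soil, moisture):
    """Additive partitioning: R_S = R_A + R_H."""
    return autotrophic(apar) + heterotrophic(t_soil, moisture)

# Contrast wet-season and end-of-dry-season conditions (drivers made up):
wet = soil_co2_efflux(apar=1000.0, t_soil=27.0, moisture=0.9)
dry = soil_co2_efflux(apar=150.0, t_soil=29.0, moisture=0.1)
```

The point of the additive structure is that integrating the two terms separately over a year yields the annual R_A/R_H partitioning reported in the abstract.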
Abstract:
In a previous paper, we connected the phenomenological noncommutative inflation of Alexander, Brandenberger and Magueijo [Phys. Rev. D 67, 081301 (2003)] and Koh and Brandenberger [J. Cosmol. Astropart. Phys. 2007, 21] with the formal representation theory of groups and algebras and analyzed minimal conditions that the deformed dispersion relation should satisfy in order to lead to a successful inflation. In that paper, we showed that elementary tools of algebra allow a group-like procedure in which even Hopf algebras (roughly, the symmetries of noncommutative spaces) could lead to the equation of state of inflationary radiation. Nevertheless, in this paper, we show that there exists a conceptual problem with the kind of representation that leads to the fundamental equations of the model. The problem comes from an incompatibility between one of the minimal conditions for successful inflation (the momentum of individual photons being bounded from above) and the Fock-space structure of the representation which leads to the fundamental inflationary equations of state. We show that the Fock structure, although mathematically allowed, would lead to problems with the overall consistency of physics, such as a problematic scattering theory. We suggest replacing the Fock space by one of two possible structures that we propose. One of them relates to the general theory of Hopf algebras (here explained at an elementary level), while the other is based on a representation theorem of von Neumann algebras (a generalization of the Clebsch-Gordan coefficients), a proposal already suggested by us to take into account interactions in the inflationary equation of state.