866 results for global industry classification standard
Abstract:
A variable-resolution global spectral method is constructed on the sphere using the High-resolution Tropical Belt Transformation (HTBT). The HTBT belongs to a class of maps called reparametrisation maps. The HTBT parametrisation of the sphere clusters grid points across the entire tropical belt; the density of the grid-point distribution decreases smoothly outside the tropics. The method thus provides finer resolution in the tropics and coarser resolution at the poles. The use of the FFT procedure and Gaussian quadrature for the spectral computations retains the numerical efficiency of the standard global spectral method. The accuracy of the method for meteorological computations is demonstrated by solving the Helmholtz equation and the non-divergent barotropic vorticity equation on the sphere. (C) 2011 Elsevier Inc. All rights reserved.
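As a rough illustration of how a reparametrisation map can cluster grid points in the tropics, the Python sketch below maps a uniform computational coordinate to physical latitude through a hypothetical smooth map; it is not the published HTBT formula, only a stand-in with the same qualitative behavior, namely that the spacing of the mapped latitudes is finest near the equator and grows smoothly toward the poles.

    import numpy as np

    def tropical_belt_map(mu, c=0.5):
        # Hypothetical reparametrisation of mu = sin(latitude); NOT the
        # published HTBT map. Its derivative is smallest at mu = 0, so
        # uniformly spaced computational points cluster near the equator.
        return (1 - c) * mu + c * mu**3

    mu_comp = np.linspace(-1.0, 1.0, 33)   # uniform computational grid
    lat_phys = np.degrees(np.arcsin(tropical_belt_map(mu_comp)))
    # Spacing between adjacent physical latitudes: finest near 0 degrees
    print(np.round(np.diff(lat_phys), 2))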
Abstract:
Seismic site classifications are used to represent site effects when estimating hazard parameters (response spectral ordinates) at the soil surface. Site classification has generally been carried out using the average shear wave velocity and/or standard penetration test N-values of the top 30 m of soil, following the recommendations of the National Earthquake Hazards Reduction Program (NEHRP) or the International Building Code (IBC). The site classification system in the NEHRP and the IBC is based on studies carried out in the United States, where soil layers extend several hundred meters before reaching any distinct soil-bedrock interface, and it may not be directly applicable to other regions, especially those with shallow geological deposits. This paper investigates the influence of rock depth on site classes assigned according to the NEHRP and IBC recommendations. For this study, soil sites having a wide range of average shear wave velocities (or standard penetration test N-values) were collected from different parts of Australia, China, and India. Shear wave velocities of the rock layers underlying the soil were also collected, at depths from a few meters to 180 m. It is shown that a site classification system based on the top 30 m of soil often assigns stiffer site classes to soil sites with shallow rock depths (less than 25 m from the soil surface). A new site classification system based on the average soil thickness down to engineering bedrock is proposed herein, which is considered more representative for soil sites in shallow-bedrock regions. It is observed that the response spectral ordinates, amplification factors, and site periods estimated using one-dimensional shear wave analysis that accounts for the depth of engineering bedrock differ from those obtained considering only the top 30 m of soil.
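For reference, the 30-m average used in these classifications is a travel-time average, Vs30 = 30 / Σ(d_i / v_i), taken over the layers in the top 30 m. A minimal sketch, with the NEHRP class boundaries (in m/s) hard-coded from the published provisions:

    def vs30(thicknesses_m, velocities_mps):
        # Travel-time-averaged shear wave velocity of the top 30 m.
        # Layers are (thickness, Vs) pairs ordered from the surface.
        depth, travel_time = 0.0, 0.0
        for d, v in zip(thicknesses_m, velocities_mps):
            d = min(d, 30.0 - depth)   # clip the last layer at 30 m
            travel_time += d / v
            depth += d
            if depth >= 30.0:
                break
        return 30.0 / travel_time

    def nehrp_site_class(vs30_mps):
        # NEHRP site class from Vs30 (boundary values in m/s).
        if vs30_mps > 1500: return "A"
        if vs30_mps > 760:  return "B"
        if vs30_mps > 360:  return "C"
        if vs30_mps > 180:  return "D"
        return "E"

    # Example: 10 m of soft soil over 20 m of stiffer soil
    print(nehrp_site_class(vs30([10, 20], [150, 400])))  # -> "D"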
Abstract:
The problem addressed in this paper concerns an important issue faced by any green-aware global company: keeping its emissions within a prescribed cap. The specific problem is to allocate carbon reductions to the company's different divisions and supply chain partners so as to achieve a required target of reductions in its carbon reduction program. The problem is challenging because the divisions and supply chain partners, being autonomous, may exhibit strategic behavior. We use a standard mechanism design approach to solve this problem. In designing a mechanism for the emission reduction allocation problem, the key properties to be satisfied are dominant strategy incentive compatibility (DSIC) (also called strategy-proofness), strict budget balance (SBB), and allocative efficiency (AE). Mechanism design theory has shown that these three properties cannot be achieved simultaneously. In the literature, a mechanism satisfying DSIC and AE, with minimal budget imbalance, has recently been proposed in this context. Motivated by the observation that SBB is an important requirement, we propose a mechanism that satisfies DSIC and SBB with a slight compromise in allocative efficiency. Our experimentation with a stylized case study shows that the proposed mechanism performs satisfactorily and provides an attractive alternative for carbon footprint reduction by global companies.
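To make the allocative-efficiency notion concrete, here is a toy sketch (my own illustration, not the paper's mechanism): if each division i incurs a quadratic abatement cost c_i x_i^2, the cost-minimizing split of a total target equalizes marginal costs 2 c_i x_i, so each division's share is proportional to 1/c_i. A DSIC mechanism would add payments making truthful reporting of c_i a dominant strategy; the paper's contribution is choosing payments that also balance the budget exactly.

    def efficient_allocation(cost_coeffs, target):
        # Cost-minimizing split of a total reduction target, assuming
        # division i's abatement cost is c_i * x_i**2. Equalizing the
        # marginal costs 2*c_i*x_i makes x_i proportional to 1/c_i.
        inv = [1.0 / c for c in cost_coeffs]
        scale = target / sum(inv)
        return [scale * w for w in inv]

    # Three divisions; the cheapest abater (c = 1) takes the largest share.
    alloc = efficient_allocation([1.0, 2.0, 4.0], target=70.0)
    print([round(x, 1) for x in alloc])  # -> [40.0, 20.0, 10.0]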
Abstract:
Seismic site characterization is the basic requirement for seismic microzonation and site response studies of an area. Site characterization helps to gauge the average dynamic properties of soil deposits and thus to evaluate the surface-level response. This paper presents a seismic site characterization of Agartala city, the capital of Tripura state in northeast India. Seismically, Agartala is situated in the Bengal Basin zone, which is classified as a highly active seismic zone by the Indian seismic code BIS-1893 (Indian Standard Criteria for Earthquake Resistant Design of Structures, Part 1: General Provisions and Buildings). According to the Bureau of Indian Standards, New Delhi (2002), it lies in the highest seismic level (zone V) in the country. The city is very close to the Sylhet fault (Bangladesh), where two major earthquakes (Mw > 7) have occurred in the past and severely affected the city and the whole of northeast India. To perform site response evaluation, a series of geophysical tests at 27 locations was conducted using the multichannel analysis of surface waves (MASW) technique, an advanced method for obtaining shear wave velocity (Vs) profiles from in situ measurements. Standard penetration test (SPT-N) bore log data sets were obtained from the Urban Development Department, Govt. of Tripura; of the 50 bore logs collected, the 27 closest to the MASW test locations were selected for further study. Both data sets (Vs profiles with depth and SPT-N bore log profiles) were used to calculate the average shear wave velocity (Vs30) and average SPT-N values for the upper 30 m of the subsurface soil profiles, which in turn were used to classify the study area following the National Earthquake Hazard Reduction Program (NEHRP) manual. The average Vs30 and SPT-N values place the study area in seismic site classes D and E, indicating that the city is susceptible to site effects and liquefaction. Further, different combinations of Vs and SPT-N (corrected and uncorrected) values were used to develop site-specific correlation equations by statistical regression, treating Vs as a function of the SPT-N value (corrected and uncorrected), with or without depth. A probabilistic approach using a quantile-quantile (Q-Q) plot was also applied to develop a correlation from the paired data sets, and a comparison was made with well-known published correlations (for all soils) available in the literature. The present correlations agree closely with the other equations; comparatively, the correlation of shear wave velocity with depth and uncorrected SPT-N values provides the more suitable predictive model, and the Q-Q plot agrees with all the other equations. In the absence of in situ measurements, the present correlations could be used to estimate Vs profiles of the study area for site response studies.
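Correlations of this kind are commonly cast in the power-law form Vs = a N^b, whose coefficients follow from ordinary least squares in log-log space. A minimal sketch on synthetic data (the fitted coefficients below are placeholders, not the paper's regression results):

    import numpy as np

    # Synthetic (N, Vs) pairs standing in for paired SPT-N / MASW data.
    N = np.array([5, 10, 15, 20, 30, 40, 50], dtype=float)
    Vs = np.array([110, 160, 195, 225, 270, 305, 335], dtype=float)

    # Fit Vs = a * N**b  <=>  log(Vs) = log(a) + b*log(N)
    b, log_a = np.polyfit(np.log(N), np.log(Vs), 1)
    a = np.exp(log_a)
    print(f"Vs ~= {a:.1f} * N^{b:.2f}")

    # Predicted Vs (m/s) for an unmeasured borehole with N = 25
    print(round(a * 25**b, 1))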
Abstract:
Clock synchronization in wireless sensor networks (WSNs) ensures that sensor nodes share the same reference clock time. This is necessary not only for various WSN applications but also for many system-level protocols for WSNs, such as MAC protocols and protocols for sleep scheduling of sensor nodes. The clock value of a node at a particular instant depends on its initial value and on the frequency of the crystal oscillator in the node. The oscillator frequency varies from node to node and may also change over time with factors such as temperature and humidity. As a result, the clock values of different sensor nodes diverge from each other and from the real-time clock; hence the requirement for clock synchronization in WSNs. Many clock synchronization protocols for WSNs have accordingly been proposed in the recent past. These protocols differ from each other considerably, so there is a need to understand them on a common platform. Toward this goal, this survey categorizes the features of clock synchronization protocols for WSNs into three types: structural features, technical features, and global objective features. Each of these categories is further subdivided for better understanding. The features used in this survey include all those used in existing surveys as well as new features, such as how the clock value is propagated, when the clock value is propagated, and when the physical clock is updated, which are required for a systematic understanding of clock synchronization protocols in WSNs. The paper also briefly describes a few basic clock synchronization protocols for WSNs and shows how they fit into the above classification criteria. In addition, recent clock synchronization protocols that build on these basic protocols are presented alongside them. Indeed, the proposed model for characterizing clock synchronization protocols in WSNs can be used not only for analyzing existing protocols but also for designing new ones. (C) 2014 Elsevier B.V. All rights reserved.
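The clock model implied above is affine: a node's local clock reads C(t) = θ + f·t, where θ is the initial offset and f the oscillator frequency relative to real time. A small illustrative sketch, not taken from any particular protocol, that estimates offset and skew from timestamped exchanges by linear regression:

    import numpy as np

    # Reference times t and the local clock readings C(t) of one node,
    # e.g. collected from periodic beacon exchanges with a reference node.
    t_ref = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
    c_local = np.array([5.002, 15.012, 25.021, 35.030, 45.041])

    # Fit C(t) = theta + f*t; theta is the offset, f - 1 the skew.
    f, theta = np.polyfit(t_ref, c_local, 1)
    print(f"offset ~= {theta:.3f} s, skew ~= {f - 1:+.2e}")

    def to_reference(c):
        # Translate a local timestamp back to reference time.
        return (c - theta) / f

    print(round(to_reference(55.050), 3))  # about 50 s of reference time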
Abstract:
Understanding technology evolution through periodic landscaping is an important stage of strategic planning in R&D management. In fields like healthcare, where the initial R&D investment is huge and good medical products serve patients better, these activities become crucial. Approximately five percent of the world population has hearing disabilities, yet current hearing aid products meet less than ten percent of global needs. Patent data and classifications on cochlear implants from 1977 to 2010 show the landscape and evolution of such implants. We highlight the emergence and disappearance of patent classes over time, showing variations in cochlear implant technologies. A network analysis technique is used to explore and capture technology evolution in patent classes, showing what emerged or disappeared over time, and dominant classes are identified. The sporadic influence of university research on cochlear implants is also discussed.
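One common way to operationalize such an analysis (a hedged sketch, not necessarily the authors' exact procedure) is to build a co-occurrence network of patent classes for each time window and compare which classes appear or disappear between windows:

    import itertools
    import networkx as nx

    # Toy records: (grant_year, [patent classes assigned to the patent]).
    patents = [
        (1985, ["A61N", "H04R"]),
        (1986, ["A61N", "A61F"]),
        (2005, ["A61N", "H04R", "G06F"]),
        (2006, ["H04R", "G06F"]),
    ]

    def class_network(records, start, end):
        # Co-occurrence graph of classes for patents granted in [start, end];
        # edge weights count how often two classes share a patent.
        g = nx.Graph()
        for year, classes in records:
            if start <= year <= end:
                for a, b in itertools.combinations(sorted(set(classes)), 2):
                    w = g.get_edge_data(a, b, {"weight": 0})["weight"]
                    g.add_edge(a, b, weight=w + 1)
        return g

    early = class_network(patents, 1977, 1995)
    late = class_network(patents, 1996, 2010)
    print("emerged:", set(late.nodes) - set(early.nodes))      # {'G06F'}
    print("disappeared:", set(early.nodes) - set(late.nodes))  # {'A61F'}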
Abstract:
In this paper, we search for the regions of the phenomenological minimal supersymmetric standard model (pMSSM) parameter space where one can expect a moderate Higgs mixing angle (α) with relatively light (up to 600 GeV) additional Higgses after satisfying the current LHC data. We perform a global fit analysis using the most up-to-date data (as of December 2014) from the LHC and Tevatron experiments. The constraints coming from the precision measurements of the rare b-decays B_s → μ⁺μ⁻ and b → sγ are also considered. We find that the low M_A (≲ 350 GeV) and high tan β (≳ 25) regions are disfavored by the combined effect of the global analysis and the flavor data. However, regions with Higgs mixing angle α ~ 0.1-0.8 are still allowed by the current data. We then study the existing direct search bounds on the masses and branchings of the heavy scalar/pseudoscalar (H/A) and charged Higgs boson (H±) at the LHC. We find that regions with low to moderate values of tan β with light additional Higgses (mass ≤ 600 GeV) are unconstrained by the data, while regions with tan β > 20 are excluded by the direct search bounds from the LHC 8 TeV data. The possibility of probing the region with tan β ≤ 20 at the high-luminosity run of the LHC is also discussed, with special attention to the H → hh, H/A → tt̄, and H/A → τ⁺τ⁻ decay modes.
Abstract:
Global virtual manufacturing networks (RVFGs) are formed by independent companies that establish horizontal and vertical relationships with one another, and may even be competitors; rather than maintaining large manufacturing resources internally, the aim is to manage and share the network's resources efficiently.
Abstract:
The works presented in this thesis explore a variety of extensions of the standard model of particle physics which are motivated by baryon number (B) and lepton number (L), or some combination thereof. In the standard model, both baryon number and lepton number are accidental global symmetries violated only by non-perturbative weak effects, though the combination B-L is exactly conserved. Although there is currently no evidence for considering these symmetries as fundamental, there are strong phenomenological bounds restricting the existence of new physics violating B or L. In particular, there are strict limits on the lifetime of the proton whose decay would violate baryon number by one unit and lepton number by an odd number of units.
The first paper included in this thesis explores some of the simplest possible extensions of the standard model in which baryon number is violated but the proton does not decay as a result. The second paper extends this analysis to models in which baryon number is conserved but lepton flavor violation is present. Special attention is given to the processes of μ → e conversion and μ → eγ, which are constrained by existing experimental limits and relevant to future experiments.
The final two papers explore extensions of the minimal supersymmetric standard model (MSSM) in which both baryon number and lepton number, or the combination B-L, are elevated to the status of being spontaneously broken local symmetries. These models have a rich phenomenology including new collider signatures, stable dark matter candidates, and alternatives to the discrete R-parity symmetry usually built into the MSSM in order to protect against baryon and lepton number violating processes.
Abstract:
We provide a comprehensive overview of many recent algorithms for approximate inference in Gaussian process models for probabilistic binary classification. The relationships between several approaches are elucidated theoretically, and the properties of the different algorithms are corroborated by experimental results. We examine both (1) the quality of the predictive distributions and (2) the suitability of the different marginal likelihood approximations for model selection (selecting hyperparameters), comparing against a gold standard based on MCMC. Interestingly, some methods produce good predictive distributions although their marginal likelihood approximations are poor. Strong conclusions are drawn about the methods: the Expectation Propagation algorithm is almost always the method of choice unless the computational budget is very tight. We also extend existing methods in various ways and provide unifying code implementing all approaches.
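A minimal sketch of approximate GP binary classification using scikit-learn, whose GaussianProcessClassifier implements the Laplace approximation (one of the surveyed approximations; the paper's own unifying code is a separate toolbox, and EP or MCMC variants live in other packages such as GPML):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessClassifier
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(100, 1))
    y = (np.sin(X[:, 0]) > rng.normal(scale=0.3, size=100)).astype(int)

    # Kernel hyperparameters are chosen by maximizing the (approximate)
    # log marginal likelihood during fit, i.e. the model selection
    # criterion whose quality the paper evaluates.
    gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
    gpc.fit(X, y)

    print(gpc.predict_proba(np.array([[-1.5], [0.0], [1.5]])))
    print(gpc.log_marginal_likelihood_value_)  # approximate evidence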
Abstract:
The growing global flow of foreign investment places the regulation of foreign investment at the heart of the concerns of international law. Within a formal structure of several levels, international investment law undergoes constant readaptation and reconstruction, and several theoretical alternatives have been proposed to answer the many questions concerning its future. Over the decades, Brazil chose to remain isolated from the international regime of foreign investment regulation, so that the matter remained governed entirely by a normative mosaic dispersed among constitutional and infra-constitutional rules. Brazil's growing role as a capital-exporting country, especially owing to the expansion of its oil and gas industry, has led to a recent revision of its foreign policy guidelines on foreign investment. The decision to negotiate international investment agreements may have several consequences for the domestic legal order, notably the interference of the fair and equitable treatment standard with the State's exercise of regulatory power. The recurrent invocation of the fair and equitable treatment standard contrasts with the uncertainty about its content. Even though this standard may be theoretically compatible with Brazilian law, exposure to creative interpretations by arbitral tribunals may pose a risk to Brazil, which should carefully assess the appropriateness of including a fair and equitable treatment clause in the agreements currently under negotiation.
Abstract:
Studies have shown that the intensification of the greenhouse effect in recent years has been increasing global warming, with effects on the climate that may, in turn, compromise life on the planet. This intensification results from the increased concentration of greenhouse gases produced by anthropogenic activities. This research aims to quantify the greenhouse gas emissions released by a company in the metal-mechanical sector, located in the municipality of Rio de Janeiro, RJ, and to propose scenarios in which these emissions could be offset. The quantification was carried out using the methodology developed by the IPCC. The proposed offsetting measures comprise substituting the fuels used in vehicles, installing photovoltaic power generation, biodigestion of domestic effluents, and reforestation. The research is justified by its contribution to mitigating the intensification of the greenhouse effect, global warming, and climate change, which may consequently help preserve life on Earth. Of the total emissions released into the atmosphere by the company under study in 2008, a value of 422 tonnes of CO2 equivalent was obtained: 177 tonnes from fuel consumption by means of transport, 87 tonnes from the waste generated, 2.2 tonnes from the effluents generated, 8.81 tonnes from electricity consumption, and 148 tonnes from internal industrial processes. In the scenario contemplating the mitigation measures, these emissions are reduced to 349 tonnes of CO2 equivalent. If reforestation were employed as the sole means of fully neutralizing the company's emissions, the revegetation of an area of 1.33 hectares would be required. This alternative may prove advantageous in the short term because it does not entail major changes to the routine of industrial processes. However, if Metal Master opts for reforestation alone and keeps its emissions at a level similar to that of 2008, a vast reforested area relative to the pre-established values will be required over the years. This fact underscores the importance of changes in the industrial environment so as to permit long-term neutralization.
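As a quick arithmetic check of the inventory (a sketch of my own; the category labels are paraphrased), the reported sources sum to roughly the stated total, the small difference being rounding:

    # Reported 2008 emission sources, in tonnes of CO2 equivalent.
    inventory = {
        "transport fuels": 177.0,
        "solid waste": 87.0,
        "effluents": 2.2,
        "electricity": 8.81,
        "industrial processes": 148.0,
    }

    total = sum(inventory.values())
    print(f"summed total: {total:.2f} t CO2e (reported: 422 t)")

    # Implied sequestration per hectare if 1.33 ha of reforestation
    # neutralizes the whole inventory (cumulative, not per year).
    print(f"implied rate: {total / 1.33:.0f} t CO2e per hectare")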
Abstract:
The Ecological Society of America and NOAA's Offices of Habitat Conservation and Protected Resources sponsored a workshop to develop a national marine and estuarine ecosystem classification system. Among the 22 people involved were scientists who had developed various regional classification systems and managers from NOAA and other federal agencies who might ultimately use this system for conservation and management. The objectives were to: (1) review existing global and regional classification systems; (2) develop the framework of a national classification system; and (3) propose a plan to expand the framework into a comprehensive classification system. Although there has been progress in the development of marine classifications in recent years, these have been either regionally focused (e.g., Pacific islands) or restricted to specific habitats (e.g., wetlands; deep seafloor). Participants in the workshop looked for commonalities across existing classification systems and tried to link these using broad-scale factors important to ecosystem structure and function.