900 results for Operational and network efficiency
Abstract:
Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the surviving processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a fault-tolerant failure detection and consensus algorithm. This paper presents and compares two novel failure detection and consensus algorithms. The proposed algorithms are based on Gossip protocols and are inherently fault-tolerant and scalable. They were implemented and tested using the Extreme-scale Simulator. The results show that in both algorithms the number of Gossip cycles needed to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and perfect synchronization in achieving global consensus.
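As a rough illustration of why gossip-based dissemination converges in a logarithmic number of cycles, here is a minimal push-gossip simulation; it is a sketch under simplifying assumptions (synchronous cycles, reliable links, a fixed failed-process list), not the paper's algorithm:

```python
import random

def gossip_cycles_to_consensus(n_alive, failed, seed=0):
    """Count synchronous push-gossip cycles until every alive node
    holds the full failed-process list (i.e. global agreement)."""
    rng = random.Random(seed)
    target = set(failed)
    views = [set() for _ in range(n_alive)]
    views[0] = set(target)          # one node detects the failures first
    cycles = 0
    while any(view != target for view in views):
        cycles += 1
        for i in range(n_alive):    # each node pushes to one random peer
            peer = rng.randrange(n_alive - 1)
            peer = peer if peer < i else peer + 1   # never gossip to self
            views[peer] |= views[i]
    return cycles

for n in (64, 256, 1024, 4096):
    print(f"{n:5d} alive processes -> {gossip_cycles_to_consensus(n, failed=range(4))} cycles")
```

In this toy model, doubling the system size adds roughly one extra cycle, matching the logarithmic scaling reported in the abstract.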
Abstract:
Today, transparency is hailed as a key to good governance and economic efficiency, with national states implementing new laws to allow citizens access to information. It is therefore paradoxical that, as shown by a series of crises and scandals, modern governments and international agencies frequently have paid only lip-service to such ideals. Since Jeremy Bentham first introduced the concept of transparency into the language in 1789, few societal debates have sparked so much interest within the academic community, and across a variety of disciplines, using different approaches and methodologies. Within these current debates, however, one fact is striking: the lack of historical reflection about the development of the concept of transparency, both as a principle and as applied in practice, prior to its inception. Accordingly, the aim of this special issue is to contribute to historicising the ways in which communication and control over fiscal policy and state finances operated in early modern European polities.
Abstract:
Bloom filters are a data structure for storing data in a compressed form. They offer excellent space and time efficiency at the cost of some loss of accuracy (so-called lossy compression). This work presents the yes-no Bloom filter, a data structure consisting of two parts: the yes-filter, which is a standard Bloom filter, and the no-filter, which is another Bloom filter whose purpose is to represent those objects that were recognised incorrectly by the yes-filter (that is, to recognise the false positives of the yes-filter). By querying the no-filter after an object has been recognised by the yes-filter, we get a chance of rejecting it, which improves the accuracy of data recognition in comparison with a standard Bloom filter of the same total length. A further increase in accuracy is possible if the objects included in the no-filter are chosen so that it recognises as many false positives as possible but no true positives, thus producing the most accurate yes-no Bloom filter among all yes-no Bloom filters. This paper studies how optimization techniques can be used to maximize the number of false positives recognised by the no-filter, under the constraint that it recognises no true positives. To achieve this aim, an Integer Linear Program (ILP) is proposed for the optimal selection of false positives. In practice the problem size is normally large, making the optimal solution intractable. Exploiting the similarity of the ILP to the Multidimensional Knapsack Problem, an Approximate Dynamic Programming (ADP) model is developed that uses a reduced ILP for the value function approximation. Numerical results show that the ADP model performs best compared with a number of heuristics as well as the CPLEX built-in solver (B&B), and it is therefore recommended for use in yes-no Bloom filters. In the wider context of the study of lossy compression algorithms, our research is an example of how the arsenal of optimization methods can be applied to improving the accuracy of compressed data.
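A minimal sketch of the data structure in Python (sizes, hash scheme and API names are illustrative; deciding which false positives to store in the no-filter is exactly what the ILP/ADP above optimizes):

```python
import hashlib

class BloomFilter:
    def __init__(self, m, k):
        self.m, self.k = m, k              # m bits, k hash functions
        self.bits = bytearray(m)

    def _positions(self, item):
        for i in range(self.k):            # k independent hashes via salting
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

class YesNoBloomFilter:
    def __init__(self, m_yes, m_no, k):
        self.yes = BloomFilter(m_yes, k)   # stores the actual set
        self.no = BloomFilter(m_no, k)     # stores selected false positives

    def add(self, item):
        self.yes.add(item)

    def add_false_positive(self, item):
        self.no.add(item)                  # selection is what the ILP/ADP optimizes

    def __contains__(self, item):
        # Accept only if the yes-filter matches and the no-filter rejects.
        return item in self.yes and item not in self.no

ynbf = YesNoBloomFilter(m_yes=1024, m_no=256, k=4)
for word in ("alpha", "beta", "gamma"):
    ynbf.add(word)
if "delta" in ynbf.yes:                    # a discovered false positive
    ynbf.add_false_positive("delta")
print("beta" in ynbf, "delta" in ynbf)     # member kept, caught false positive rejected
```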
Investigation and optimization of parameters affecting the multiply charged ion yield in AP-MALDI MS
Abstract:
Liquid matrix-assisted laser desorption/ionization (MALDI) allows the generation of predominantly multiply charged ions in atmospheric pressure (AP) MALDI ion sources for mass spectrometry (MS) analysis. The charge state distribution of the generated ions and the efficiency of the ion source in generating such ions crucially depend on the desolvation regime of the MALDI plume after desorption in the AP-to-vacuum inlet. Both high temperature and a flow regime with increased residence time of the desorbed plume in the desolvation region promote the generation of multiply charged ions. Without such measures the application of an electric ion extraction field significantly increases the ion signal intensity of singly charged species, while the detection of multiply charged species is less dependent on the extraction field. In general, optimization of high temperature application facilitates the predominant formation and detection of multiply charged compared to singly charged ion species. In this study an experimental setup and optimization strategy is described for liquid AP-MALDI MS which improves the ionization efficiency of selected ion species up to 14 times. In combination with ion mobility separation, the method allows the detection of multiply charged peptide and protein ions for analyte solution concentrations as low as 2 fmol/µL (0.5 µL, i.e. 1 fmol, deposited on the target) with very low sample consumption in the low nL range.
Abstract:
Based on a large dataset from eight Asian economies, we test the impact of post-crisis regulatory reforms on the performance of depository institutions in countries at different levels of financial development. We allow for technological heterogeneity and estimate a set of country-level stochastic cost frontiers followed by a deterministic bootstrapped meta-frontier to evaluate cost efficiency and cost technology. Our results support the view that liberalization policies have a positive impact on bank performance, while the reverse is true for prudential regulation policies. The removal of activity restrictions, bank privatization and foreign bank entry have a positive and significant impact on technological progress and cost efficiency. In contrast, prudential policies, which aim to protect the banking sector from excessive risk-taking, tend to adversely affect banks' cost efficiency but not cost technology.
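For readers unfamiliar with the method, the standard stochastic cost frontier takes the form below; the abstract does not give the paper's exact specification, so this is only the textbook form:

```latex
\ln C_{it} = f(y_{it}, w_{it}; \beta) + v_{it} + u_{it}, \qquad u_{it} \ge 0,
```

where C_it is total cost, y_it outputs and w_it input prices of bank i at time t; v_it is symmetric noise and u_it a one-sided inefficiency term, giving cost efficiency CE_it = exp(-u_it). The meta-frontier then envelops the country-level frontiers so that efficiency scores are comparable across heterogeneous technologies.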
Abstract:
Causing civilian casualties during military operations has become a highly politicised topic in international relations since the Second World War. Since the last decade of the 20th century, various scholars and political analysts have claimed that human life is valued more and more by the general international community. This argument has led many researchers to assume that democratic culture and traditions, together with modern ethical and moral concerns, have created a desire for a world without war or, at least, a demand that contemporary armed conflicts, if unavoidable, be far less lethal, forcing the military to seek new technologies that can minimise civilian casualties and collateral damage. Non-Lethal Weapons (NLW), weapons that are intended to minimise civilian casualties and collateral damage, are based on technology that, during the 1990s, was expected to revolutionise the conduct of warfare by making it significantly less deadly. The rapid rise of interest in NLW, ignited by the American military twenty-five years ago, sparked an entirely new military, as well as academic, discourse concerning their potential contribution to military success on 21st-century battlefields. It seems, however, that beyond this debate very little has been done within the military forces themselves. This research suggests that the roots of this situation lie much deeper than simple professional misconduct by the military establishment, or poor political behaviour by the political leaders who sent them to fight. Following the story of NLW in the U.S., Russia and Israel, this research focuses on the political and cultural factors that were supposed to force the military organisations of these countries to adopt new technologies and operational and organisational concepts regarding NLW, in an attempt to minimise enemy civilian casualties during their military operations. This research finds that while the American, Russian and Israeli national characters are, undoubtedly, products of the unique historical experience of each of these nations, all three pay very little regard to foreigners' lives. Moreover, while it is generally argued that international political pressure is a crucial factor in significantly reducing civilian casualties and the destruction of civilian infrastructure, the findings of this research suggest that the American, Russian and Israeli governments are well prepared and politically equipped to fend off international criticism. As the analyses of the American, Russian and Israeli cases reveal, the political-military leaderships of these countries have very few external or domestic reasons to minimise enemy civilian casualties through a fundamental, revolutionary change in their conduct of war. In other words, this research finds that the employment of NLW has failed because the political leadership asks the militaries to reduce enemy civilian casualties to a politically acceptable level, rather than to the technologically possible minimum; in the socio-cultural-political context of each country, support for the former appears to be significantly higher than for the latter.
Abstract:
Wireless Sensor Networks (WSNs) have been an exciting topic in recent years. The services offered by a WSN can be classified into three major categories: monitoring, alerting, and information on demand. WSNs have been used for a variety of applications related to the environment (agriculture, water and forest fire detection), the military, buildings, health (elderly people and home monitoring), disaster relief, and area or industrial monitoring. In most WSNs, tasks like processing the sensed data, making decisions and generating emergency messages are carried out by a remote server, hence the need for efficient means of transferring data across the network. Because of the range of applications and types of WSN, different kinds of MAC and routing protocols are needed to guarantee delivery of data from the source nodes to the server (or sink). In order to minimize energy consumption and increase performance in areas such as reliability of data delivery, extensive research has been conducted and documented in the literature on designing energy-efficient protocols for each individual layer. The most common way to conserve energy in WSNs is for the MAC layer to put the transceiver and the processor of the sensor node into a low-power sleep state when they are not being used, reducing the energy wasted on collisions, overhearing and idle listening. As a result of this energy-saving strategy, routing protocols need new solutions that take into account the sleep state of some nodes and that also extend the lifetime of the entire network by distributing energy usage between nodes over time. A combined MAC and routing protocol could therefore significantly improve WSNs, because interaction between the MAC and network layers allows nodes to be active at the same time to handle data transmission. In the research presented in this thesis, a cross-layer protocol based on MAC and routing protocols was designed to improve the capability of WSNs for a range of different applications. Simulation results, based on a range of realistic scenarios, show that these new protocols improve WSNs by reducing their energy consumption as well as enabling them to support mobile nodes where necessary. A number of conference and journal papers have been published to disseminate these results for a range of applications.
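As a back-of-the-envelope illustration of why MAC-layer sleep states dominate the energy budget, here is a simple duty-cycle energy model; the power figures are typical of CC2420-class radios and the battery size and traffic mix are assumptions, so this is a sketch rather than the thesis's simulation setup:

```python
# Typical radio-state power draws (CC2420-class transceiver, watts); the
# battery size and traffic mix below are assumptions for illustration only.
P_TX, P_RX, P_IDLE, P_SLEEP = 0.060, 0.056, 0.050, 50e-6

def avg_power(duty_cycle, tx_frac=0.1, rx_frac=0.2):
    """Mean power when the radio is awake for `duty_cycle` of the time,
    spending tx_frac/rx_frac of the awake time transmitting/receiving."""
    p_awake = tx_frac * P_TX + rx_frac * P_RX + (1 - tx_frac - rx_frac) * P_IDLE
    return duty_cycle * p_awake + (1 - duty_cycle) * P_SLEEP

BATTERY_J = 2.6 * 3.0 * 3600        # ~2.6 Ah pack at 3 V, in joules
for dc in (1.0, 0.10, 0.01):
    days = BATTERY_J / avg_power(dc) / 86400
    print(f"duty cycle {dc:4.0%}: ~{days:5.0f} days of lifetime")
```

In this model, moving from an always-on radio to a 1% duty cycle stretches the lifetime from days to over a year, which is why the routing layer must be able to cope with sleeping nodes.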
Abstract:
This study aimed at evaluating the effect of increasing organic loading rates and of enzyme pretreatment on the stability and efficiency of a hybrid upflow anaerobic sludge blanket reactor (UASBh) treating dairy effluent. The UASBh was submitted to the following average organic loading rates (OLR): 0.98 kg·m⁻³·d⁻¹, 4.58 kg·m⁻³·d⁻¹, 8.89 kg·m⁻³·d⁻¹ and 15.73 kg·m⁻³·d⁻¹; at the highest value, the reactor was fed with effluent with and without an enzymatic pretreatment to hydrolyze fats. The hydraulic detention time was 24 h, and the temperature was 30 ± 2 °C. The reactor was equipped with a superior foam bed and showed good efficiency and stability up to an OLR of 8.89 kg·m⁻³·d⁻¹. The foam bed was efficient for solids retention and for the consumption of residual volatile acids. The enzymatic pretreatment did not contribute to process stability, causing losses in both biomass and system efficiency. Specific methanogenic activity tests indicated the presence of inhibition after the sludge had been exposed to the pretreated effluent. It was concluded that continuous exposure to the hydrolysis products or to the enzyme caused a dramatic drop in the efficiency and stability of the process, whereas a single exposure of the biomass to this condition did not inhibit methane formation. © 2011 Elsevier B.V. All rights reserved.
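For orientation, the organic loading rate follows standard reactor arithmetic (not specific to this paper): with volumetric flow Q, influent substrate concentration S₀ and reactor volume V,

```latex
\mathrm{OLR} = \frac{Q\,S_0}{V} = \frac{S_0}{\mathrm{HRT}}
```

so at the 24 h (1 d) hydraulic detention time used here, an OLR of 8.89 kg·m⁻³·d⁻¹ corresponds to an influent concentration of 8.89 kg/m³.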
Abstract:
Hepatitis C virus (HCV) exhibits considerable genetic diversity but presents a relatively well conserved 5′ noncoding region (5′NCR) among all genotypes. In this study, the structural features and translational efficiency of HCV 5′NCR sequences were analyzed using the programs RNAfold, RNAshapes and RNApdist and with a bicistronic dual luciferase expression system, respectively. RNA structure prediction software indicated that base substitutions can potentially alter the 5′NCR structure. The sequence heterogeneity observed in the 5′NCR led to important changes in translation efficiency in different cell culture lines. Interactions of the viral RNA with cellular trans-acting factors may vary according to the cell type and viral genome polymorphisms, which may account for the translational efficiencies observed. J. Med. Virol. 81: 1212-1219, 2009. © 2009 Wiley-Liss, Inc.
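As an illustration of the structure-prediction step, the sketch below assumes the ViennaRNA package's Python bindings (the same engine as the RNAfold program named above); the two sequences are hypothetical stand-ins, not HCV 5′NCR data:

```python
# Assumes the ViennaRNA package's Python bindings are installed
# (`pip install ViennaRNA`); sequences are hypothetical stand-ins.
import RNA

variant_a = "GGCCAGCCCCCUGAUGGGGGCGACACUCCACCAUGAAUCACUCCCCUGUGAGG"
variant_b = "GGCCAGCCCCCUGAUGGGGGCGACACUCCACCAUGAAUCACUCGCCUGUGAGG"  # one substitution

for name, seq in (("A", variant_a), ("B", variant_b)):
    structure, mfe = RNA.fold(seq)   # minimum free energy fold, as RNAfold computes
    print(f"variant {name}: {structure}  ({mfe:.2f} kcal/mol)")
```

Comparing the predicted structures and free energies of two variants shows how a single base substitution can shift the fold, which is the kind of effect the abstract links to altered translation efficiency.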
Abstract:
Cytochrome c exhibits two positively charged sites: site A, containing lysine residues with high pK_a values, and site L, containing ionizable groups with pK_a(obs) values around 7.0. This feature implies that cytochrome c can participate in the fusion of mitochondria and have its detachment from the inner membrane regulated by cell acidosis and alkalosis. In this study, we demonstrated that both horse and tuna cytochrome c exhibited two types of binding to inner mitochondrial membranes that contributed to respiration: a high-affinity, low-efficiency, pH-independent binding (microscopic dissociation constant K_sapp2 ~10 nM) and a low-affinity, high-efficiency, pH-dependent binding that for horse cytochrome c had a pK_a of ~6.7. For tuna cytochrome c (Lys22 and His33 replaced with Asn and Trp, respectively), the effect of pH on K_sapp1 was less striking than for the horse heme protein, and tuna and horse cytochrome c had close K_sapp1 values at pH 7.2 and 6.2, respectively. Recombinant mutated cytochrome c H26N and H33N also restored the respiration of the cytochrome c-depleted mitoplast in a pH-dependent manner. Consistently, the detachment of cytochrome c from nondepleted mitoplasts was favored by alkalinization, suggesting that site L ionization influences the participation of cytochrome c in the respiratory chain and apoptosis.
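For intuition about the pH dependence, standard Henderson-Hasselbalch arithmetic (not from the paper) gives the protonated fraction of a site with pK_a 6.7 at the two pH values compared above:

```python
# Protonated fraction of an ionizable site: 1 / (1 + 10**(pH - pKa)).
pKa = 6.7
for pH in (6.2, 7.2):
    frac = 1.0 / (1.0 + 10 ** (pH - pKa))
    print(f"pH {pH}: {frac:.0%} of site L protonated")
```

A one-unit pH shift around the pK_a moves the site from mostly protonated (~76%) to mostly deprotonated (~24%), consistent with detachment being favored by alkalinization.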
Abstract:
This research is based on consumer complaints with respect to recently purchased consumer electronics. It investigates device management as a tool used to aid consumers and manage their mobile products, so that issues are resolved before, or as soon as, the consumer is aware one exists. The problem at present is that mobile devices are becoming very advanced pieces of technology, and not all manufacturers and network providers have kept up with the support needs of end users. The subject of the research is therefore to investigate how device management could be used to promote research and development of mobile devices and provide a better experience for the consumer. The wireless world is becoming increasingly complex as revenue opportunities are driven by new and innovative data services. We can no longer expect the customer to have the knowledge or ability to configure their own device. Device Management (DM) platforms can address the challenges of device configuration and support through new enabling technologies. Leveraging these technologies allows a network operator to reduce the cost of subscriber ownership, drive increased ARPU (Average Revenue per User) by removing barriers to adoption, reduce churn by improving the customer experience, and increase customer loyalty. DM technologies provide a flexible and powerful management method, but they manage the same device features that have historically been configured manually through call centres or by the end user making changes directly on the device. For this reason DM technologies must be treated as part of a wider support solution. The traditional requirements for discovery, fault finding, troubleshooting and diagnosis are as relevant with DM as they are in the current human support environment, yet the current generation of solutions does little to address this problem. In deploying an effective device management solution, the network operator must consider the integration of the DM platform, interfacing with many areas of the business, supported by knowledge of the relationships between devices, applications, solutions and services maintained on an ongoing basis. Complementing the DM solution with published device information, setup guides, training material and web-based tools will ensure the quality of the customer experience, ensuring that problems are completely resolved and driving data usage by focusing customer education on the use of the wireless service. In this way device management becomes a tool used both internally, within the network operator or device vendor, and by customers themselves, with each user empowered to manage the device effectively without prior knowledge or experience, confident that the changes they apply will be relevant, accurate, stable and compatible. The value offered by an effective DM solution with an expert knowledge service will become a significant differentiator for the network operator in an ever more competitive wireless market. This research document is intended to highlight some of the issues the industry faces as device management technologies become more prevalent, and offers some potential solutions to simplify the increasingly complex task of managing devices on the network.
The research is broken down into the following areas: customer relationship management; device management; the role of knowledge within DM; companies that have successfully implemented device management; and the future of device management and CRM. It also includes questionnaires aimed at technical support agents and mobile device users, and interviews carried out with CRM managers within support centres to corroborate the evidence gathered. To conclude, the document considers the advantages and disadvantages of device management, attempts to determine the influence it will have on customer support centres, and discusses what methods could be used to implement it.
Abstract:
A structured transport system is essential for serving the many purposes of urban travel. Such a system comprises many complex elements: physical elements (the road network, vehicles, garages, transfer terminals), human elements (operational and administrative personnel) and institutional elements (management and oversight). The last of these, especially the oversight function of the public agency, is the focus of this study. The purpose of the study was to identify how the public agency manages the public transport system for urban and intercity users, and how it can contribute to user satisfaction by feeding evaluations back into the system and even redirecting or re-evaluating the oversight function. The methodology adopted was an exploratory survey, since no previous studies were identified with the same focus, that is, approaching citizens' expectations from the point of view of the managing public body. An explanatory survey was also carried out to complement the study and provide a broader understanding of the other means used by the organization to maintain a successful relationship with users. Field and bibliographical research was conducted through documentary investigation to gather the information needed to support the conclusions.
Abstract:
This paper presents semiparametric estimators of changes in inequality measures of a dependent variable's distribution, taking into account possible changes in the distributions of covariates. When no parametric assumptions are imposed on the conditional distribution of the dependent variable given covariates, this problem becomes equivalent to estimating the distributional impacts of interventions (treatment) when selection into the program is based on observable characteristics. The distributional impacts of a treatment are calculated as differences in inequality measures between the potential outcomes of receiving and not receiving the treatment. These differences are called here Inequality Treatment Effects (ITE). The estimation procedure involves a first nonparametric step in which the probability of receiving treatment given covariates, the propensity score, is estimated. Using the inverse probability weighting method to estimate parameters of the marginal distributions of potential outcomes, weighted sample versions of inequality measures are computed in the second step. Root-N consistency, asymptotic normality and semiparametric efficiency are shown for the proposed semiparametric estimators. A Monte Carlo exercise investigates the finite-sample behavior of the estimator derived in the paper. We also apply our method to the evaluation of a job training program.
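A minimal sketch of the two-step procedure on simulated data, assuming a logistic propensity-score model and using the Gini coefficient as the inequality measure (the paper's estimators cover a general class of measures; all names and data here are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def weighted_gini(y, w):
    """Gini coefficient of y under weights w (trapezoidal Lorenz formula)."""
    order = np.argsort(y)
    y, w = y[order], w[order]
    lorenz = np.cumsum(w * y) / np.sum(w * y)
    prev = np.concatenate(([0.0], lorenz[:-1]))
    return 1.0 - np.sum(w * (lorenz + prev)) / np.sum(w)

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 2))
p_true = 1 / (1 + np.exp(-(0.5 * x[:, 0] - 0.25 * x[:, 1])))  # selection on observables
d = rng.binomial(1, p_true)                                    # treatment indicator
y = np.exp(1.0 + 0.3 * x[:, 0] + 0.2 * d + rng.normal(0, 0.5, n))

# Step 1: estimate the propensity score nonparametrically (logit stand-in here).
phat = LogisticRegression().fit(x, d).predict_proba(x)[:, 1]
# Step 2: inverse-probability-weighted inequality of each potential outcome.
gini_treated = weighted_gini(y, d / phat)
gini_control = weighted_gini(y, (1 - d) / (1 - phat))
print(f"Gini inequality treatment effect: {gini_treated - gini_control:+.4f}")
```

The difference of the two reweighted Gini coefficients is the sample analogue of the ITE described above.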
Abstract:
The implementation and expansion of the "shared services" operating model occupy a prominent place in the current strategy of many multinational companies, demonstrating the model's value and success as a mechanism for reducing waste and increasing efficiency and effectiveness in the execution of organizational activities. This dissertation aims to investigate problems that can compromise the success of this model, starting from hypotheses raised by the author based on his observations and professional experience. To meet this objective, a literature review was conducted on strategy and the headquarters-subsidiary relationship, in order to understand the different factors that influence the roles played by subsidiaries in relation to their headquarters. These topics were selected in light of what emerged from the field. Based on this theoretical framework, some typologies were selected as analytical criteria for the empirical investigation of practices at the selected company. The methodology comprises a single case study. The research findings are analyzed against the theoretical framework selected in the typology. Based on the case studied, it can be stated that the headquarters-subsidiary relationship has a direct impact on the success of this model in multinational companies.
Abstract:
The "born global" phenomenon refers to companies that regard the global market as their natural context and begin their internationalization process very soon after their founding. Traditional theories such as the Uppsala model cannot explain this process, so other theories have emerged, such as the network perspective. Some studies exist in this area, mostly conducted in developed countries with small markets and open economies; few, however, have been carried out in developing economies. Moreover, research on born-global firms' choice of entry mode and market selection is quite limited. Accordingly, this study aims to describe the main factors influencing the choice of entry mode and market selection of born-global companies from developing economies. The research focuses on the software industry, and a multiple case study was conducted with three companies in Ecuador. The methodology included interviews with founders as well as the collection of secondary data. Based on the empirical evidence, the main factors influencing the choice of entry mode were found to be financial constraints, expected revenues, speed of internationalization, niche markets and the founders' previous business experience. Market selection, in turn, is influenced by similarities of language and culture, niche markets and network relationships.