395 results for Inefficiency
Abstract:
A new class of low-melting N,N'-dialkylimidazolium salts has been prepared with carborane counterions, some of the most inert and least nucleophilic anions presently known. The cations and anions have been systematically varied, with combinations of the 1-ethyl-3-methyl- (EMIM+), 1-octyl-3-methyl- (OMIM+), 1-ethyl-2,3-dimethyl- (EDMIM+), and 1-butyl-2,3-dimethyl- (BDMIM+) imidazolium cations and the CB11H12−, CB11H6Cl6−, and CB11H6Br6− carborane anions, to elucidate the factors that affect their melting points. From trends in melting points, which range from 156 degrees C down to 45 degrees C, it is clear that the alkylation pattern on the imidazolium cation is the main determinant of melting point and that packing inefficiency of the cation is the intrinsic cause of low melting points. C-Alkylation of the anion can also contribute to low melting points by introducing a further packing inefficiency. Nine of the thirteen salts have been the subject of X-ray crystal structure determination. Notably, crystallographic disorder of the cation is observed in all but one of these salts. This is the most direct evidence to date that packing inefficiency is the major reason unsymmetrical N,N'-dialkylimidazolium salts can be liquids at room temperature.
Abstract:
We investigate the violation of nonlocal realism using entangled coherent states (ECSs) under nonlinear operations and homodyne measurements. We address recently proposed Leggett-type inequalities, including a class of optimized incompatibility inequalities proposed by Branciard et al. [Nature Phys. 4, 681 (2008)], and thoroughly assess the effects of detection inefficiency.
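As a point of reference, a paradigmatic ECS (an assumed form, since the abstract does not display the state it uses) is the superposition of two-mode coherent states

    \[ |\mathrm{ECS}\rangle = \frac{|\alpha\rangle|\alpha\rangle - |{-\alpha}\rangle|{-\alpha}\rangle}{\sqrt{2\,(1 - e^{-4|\alpha|^{2}})}} , \]

where |α⟩ is a coherent state of amplitude α; for small |α| the two branches overlap strongly, while for large |α| they become quasi-orthogonal and the state approaches a maximally entangled two-qubit state.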
Abstract:
We assess the quantum nonlocality of multiparty entangled thermal states by studying, quantitatively, both tripartite and quadripartite states belonging to the Greenberger-Horne-Zeilinger (GHZ), W, and linear cluster-state classes and showing violation of relevant Bell-like inequalities. We discuss the conditions for maximizing the degree of violation against the local thermal character of the states and the inefficiency of the detection apparatuses. We demonstrate that the nonlocality of such classes of multipartite entangled states can be made to persist to a significant degree, notwithstanding adverse operating conditions. This opens up the possibility of coherently exploiting multipartite quantum channels made out of entangled thermal states. Our study is accompanied by a detailed description of possible generation schemes for the states analyzed.
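For reference (the abstract itself does not display the states), the canonical pure-state representatives of the tripartite GHZ and W classes are

    \[ |\mathrm{GHZ}_3\rangle = \tfrac{1}{\sqrt{2}}\left(|000\rangle + |111\rangle\right), \qquad |W_3\rangle = \tfrac{1}{\sqrt{3}}\left(|001\rangle + |010\rangle + |100\rangle\right), \]

with the thermal states studied here understood as mixed-state counterparts of these classes.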
Abstract:
There are 424 credit unions in Ireland, with assets under their control of €14.3bn and a membership of 2.5m, which equates to about 66% of the economically active population, the highest penetration level of any country. That said, the Irish movement sits at a critical development stage, well behind mature markets such as Canada and the US in terms of product provision, technological sophistication, fragmentation of trade bodies and regulatory environment. This study analyses the relative cost efficiency, or performance, of Irish credit unions using the popular frontier approach, which measures an entity's efficiency relative to a frontier of best practice. Parametric techniques are utilised, with variation in inefficiency being attributed to credit union-specific factors. The stochastic cost frontier parameters and the credit union-specific parameters are simultaneously estimated to produce valid statistical inferences. The study finds that the majority of Irish credit unions are not operating at optimal levels. It further highlights the factors which drive efficiency variation across credit unions; these include technological sophistication, 'sponsor donated' resources, interest rate differentials and the levels of bad debt written off.
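A minimal sketch of the kind of model implied here, assuming the standard one-step stochastic cost frontier formulation in which the frontier and the inefficiency-determinant parameters are estimated jointly:

    \[ \ln C_j = f(y_j, w_j; \beta) + v_j + u_j, \qquad v_j \sim N(0, \sigma_v^2), \qquad u_j \sim N^{+}(z_j'\delta, \sigma_u^2), \]

where C_j is credit union j's observed cost, f(·) the cost frontier in outputs y_j and input prices w_j, v_j symmetric noise, u_j ≥ 0 the inefficiency term, and z_j the credit union-specific factors (e.g. technological sophistication, sponsor-donated resources) that drive inefficiency variation.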
Abstract:
Keeping a record of operator experience remains a challenge to operations management and a major source of inefficiency in information management. The objective is to develop a framework that enables an explicit presentation of experience based on information use. A purposive sampling method is used to select four small and medium-sized enterprises as case studies. The unit of analysis is the production process in the machine shop. Data collection is by structured interview, observation and documentation. A comparative case analysis is applied. The findings suggest experience is an accumulation of tacit information feedback, which can be made explicit in an information-use interoperability matrix. The matrix is conditioned upon an information-use typology, which is strategic in waste reduction. The limitations include the difficulty of ensuring participant anonymity where the organisation nominates the participant. Areas for further research include application of the concepts to knowledge management and shop floor resource management.
Abstract:
We propose a hybrid approach to the experimental assessment of the genuine quantum features of a general system consisting of microscopic and macroscopic parts. We infer entanglement by combining dichotomic measurements on a two-dimensional system and phase-space inference through the Wigner distribution associated with the macroscopic component of the state. As a benchmark, we investigate the feasibility of our proposal for a bipartite entangled state composed of a single photon and a multiphoton field. Our analysis shows that, under ideal conditions, maximal violation of a Clauser-Horne-Shimony-Holt-based inequality is achievable regardless of the number of photons in the macroscopic part of the state. The difficulty of observing entanglement when losses and detection inefficiency are included can be overcome by using a hybrid entanglement witness that allows efficient correction for losses in the few-photon regime.
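For completeness, and assuming the standard convention (the abstract does not display it), the Wigner distribution of the macroscopic mode with state ρ is

    \[ W(x,p) = \frac{1}{\pi} \int_{-\infty}^{\infty} \langle x+y|\,\rho\,|x-y\rangle\, e^{-2ipy}\, dy , \]

whose values at chosen phase-space points enter the CHSH-type correlators alongside the dichotomic measurements on the two-dimensional part; homodyne tomography is a typical route to estimating such values in practice.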
Abstract:
This study examines the relative performance of Japanese cooperative banks between 1998 and 2009, explicitly modeling non-performing loans as an undesirable output. Three key findings emerge. First, the sector is characterized by increasing returns to scale, which supports the ongoing amalgamation process within the sector. Second, although restricted in product offerings, markets and their membership base, Japanese cooperatives secured both technical progress (a positive shift in the frontier) and a decrease in technical inefficiency (distance from the frontier). Third, the analysis highlights that regulatory pressure to reduce non-performing loans will have an adverse impact on both output and performance.
Abstract:
Although cartel behaviour is almost universally (and rightly) condemned, it is not clear why cartel participants deserve the full wrath of the criminal law and its associated punishment. To fill this void, I develop a normative (or principled) justification for the criminalisation of conduct characteristic of 'hard core' cartels. The paper opens with a brief consideration of the rhetoric commonly used to denounce cartel activity, e.g. that it 'steals from' or 'robs' consumers. To put the discussion in context, a brief definition of 'hard core' cartel behaviour is provided and the harms associated with this activity are identified. These are: welfare losses in the form of appropriation (from consumer to producer) of consumer surplus, the creation of deadweight loss to the economy, the creation of productive inefficiency (hindering innovation of both products and processes), and the creation of so-called X-inefficiency. As not all activities which cause harm ought to be criminalised, a theory as to why certain harms in a liberal society can be criminalised is developed. It is based on JS Mill's harm-to-others principle (as refined by Feinberg) and on a choice of social institutions using Rawls's 'veil of ignorance'. The theory is centred on the value of individual choice in securing one's own well-being, with the market as an indispensable instrument for this. But as applied to the harm associated with cartel conduct, this theory shows that none of the earlier-mentioned problems associated with this activity provides sufficient justification for criminalisation. However, as the harm from hard core cartel activity strikes at an important institution which enables individuals to secure their own well-being in a liberal society, criminalisation of hard core cartel behaviour finds its normative justification on this basis.
Abstract:
In any internal combustion engine, the amount of heat rejected from the engine and its associated systems is a result of the engine's inefficiency. Successfully recovering even a small proportion of this energy would therefore substantially improve fuel economy.
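To make the scale of the opportunity concrete (illustrative numbers, not taken from the paper): if a fraction f of the fuel energy is rejected as heat and a fraction r of that heat is recovered as useful work, the brake efficiency η rises to η + r f. With a typical η ≈ 0.35 and f ≈ 0.60, recovering just r = 5% of the rejected heat gives

    \[ \eta' = \eta + r f \approx 0.35 + 0.05 \times 0.60 = 0.38 , \]

roughly an 8-9% relative improvement in fuel economy.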
Abstract:
We show that the use of probabilistic noiseless amplification in entangled coherent state-based schemes for testing quantum nonlocality provides substantial advantages. The threshold amplitude needed to falsify a Bell-CHSH nonlocality test is, in fact, significantly reduced when amplification is embedded into the test itself. This beneficial effect holds also in the presence of detection inefficiency. Our study helps affirm noiseless amplification as a valuable tool for coherent information processing and the generation of strongly nonclassical states of bosonic systems.
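For context, the Bell-CHSH test referred to bounds the combination of two-party correlators

    \[ |E(a,b) + E(a,b') + E(a',b) - E(a',b')| \le 2 \]

for any local hidden variable model, with quantum mechanics reaching at most 2√2 (the Tsirelson bound). Falsifying the test means pushing the measured value above 2, and the threshold amplitude is the smallest coherent-state amplitude for which this is achievable.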
Abstract:
We demonstrate genuine three-mode nonlocality based on phase-space formalism. A Svetlichny-type Bell inequality is formulated in terms of the s-parametrized quasiprobability function. We test such a tool using exemplary forms of three-mode entangled states, identifying the ideal measurement settings required for each state. We thus verify the presence of genuine three-mode nonlocality that cannot be reproduced by local or nonlocal hidden variable models between any two out of three modes. In our results, GHZ- and W-type nonlocality can be fully discriminated. We also study the behavior of genuine tripartite nonlocality under the effects of detection inefficiency and dissipation induced by local thermal environments. Our formalism can be useful for testing the sharing of genuine multipartite quantum correlations among the elements of some interesting physical settings, including arrays of trapped ions and intracavity ultracold atoms. (DOI: 10.1103/PhysRevA.87.022123)
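A minimal sketch of the Svetlichny-type bound involved, in its standard dichotomic form (the paper's version is phrased in terms of the s-parametrized quasiprobability function): writing the Svetlichny polynomial as the sum of the two Mermin polynomials,

    \[ M = E(a,b,c') + E(a,b',c) + E(a',b,c) - E(a',b',c'), \]
    \[ M' = E(a',b',c) + E(a',b,c') + E(a,b',c') - E(a,b,c), \]
    \[ S = M + M', \]

any hybrid model in which at most two of the three modes share nonlocal correlations satisfies |⟨S⟩| ≤ 4, while quantum mechanics allows values up to 4√2; violating the bound therefore certifies genuine three-mode nonlocality.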
Abstract:
We consider a Bell-like inequality test performed using various instances of multiphoton entangled states to demonstrate that losses occurring after the unitary transformations used in the nonlocality test can be counteracted by enhancing the size of such entangled states. In turn, this feature can be used to overcome detection inefficiencies affecting the test itself: a slight increase in the size of such states, pushing them towards a more macroscopic form of entanglement, significantly improves the state's robustness against detection inefficiency, thus easing the closing of the detection loophole. By contrast, losses before the unitary transformations cause decoherence effects that cannot be compensated using macroscopic entanglement.
Abstract:
The efficiency of generation plants is an important measure of operating performance. The objective of this paper is to evaluate electricity power generation by conducting an All-Island Generator Efficiency Study (AIGES) for the Republic of Ireland and Northern Ireland, utilising a Data Envelopment Analysis (DEA) approach. An operational performance efficiency index is defined and evaluated for the year 2008. The economic activities of the electricity generation units/plants examined in this paper are characterized by numerous input and output indicators. Constant returns to scale (CRS) and variable returns to scale (VRS) type DEA models are employed in the analysis, and a slacks-based analysis indicates the level of inefficiency for each variable examined. The findings from this study provide a general ranking and evaluation but also facilitate various interesting efficiency comparisons between generators by fuel type.
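A minimal sketch of the envelopment form underlying such a study, assuming the standard input-oriented formulation: for a generator under evaluation with inputs x_{i0} and outputs y_{r0}, given data on all n generators,

    \[ \min_{\theta,\lambda}\ \theta \quad \text{subject to} \quad \sum_{j=1}^{n} \lambda_j x_{ij} \le \theta x_{i0} \ \ \forall i, \qquad \sum_{j=1}^{n} \lambda_j y_{rj} \ge y_{r0} \ \ \forall r, \qquad \lambda_j \ge 0 . \]

This is the CRS (CCR) model; adding the convexity constraint \(\sum_j \lambda_j = 1\) yields the VRS (BCC) model, and the slacks remaining in the constraints at the optimum quantify variable-by-variable inefficiency, as in the paper's slacks-based analysis.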
Abstract:
Randomised trials are at the heart of evidence-based healthcare, but the methods and infrastructure for conducting these sometimes complex studies are largely evidence-free. Trial Forge (www.trialforge.org) is an initiative that aims to increase the evidence base for trial decision making and, in doing so, to improve trial efficiency.
This paper summarises a one-day workshop held in Edinburgh on 10 July 2014 to discuss Trial Forge and how to advance this initiative. We first outline the problem of inefficiency in randomised trials and go on to describe Trial Forge. We present participants' views on the processes in the life of a randomised trial that should be covered by Trial Forge.
General support existed at the workshop for the Trial Forge approach to increasing the evidence base for making randomised trial decisions and for improving trial efficiency. Key processes agreed upon included choosing the right research question; logistical planning for delivery, training of staff, recruitment, and retention; data management and dissemination; and close-down. Linking to existing initiatives where possible was considered crucial. Trial Forge will not be a guideline or a checklist but a 'go to' website for research on randomised trial methods, with a linked programme of applied methodology research, coupled to an effective evidence-dissemination process. Moreover, it will support an informal network of interested trialists who meet virtually (online) and occasionally in person to build capacity and knowledge in the design and conduct of efficient randomised trials.
Some of the resources invested in randomised trials are wasted because of limited evidence upon which to base many aspects of design, conduct, analysis, and reporting of clinical trials. Trial Forge will help to address this lack of evidence.
Abstract:
How can GPU acceleration be obtained as a service in a cluster? This question has become increasingly significant given the inefficiency of installing GPUs on all nodes of a cluster. The research reported in this paper addresses the above question by employing rCUDA (remote CUDA), a framework that facilitates Acceleration-as-a-Service (AaaS) such that the nodes of a cluster can request the acceleration of a set of remote GPUs on demand. The rCUDA framework exploits virtualisation and ensures that multiple nodes can share the same GPU. In this paper we test the feasibility of the rCUDA framework on a real-world application employed in the financial risk industry that can benefit from AaaS in a production setting. The results confirm the feasibility of rCUDA and highlight that rCUDA achieves performance similar to CUDA, provides consistent results and, more importantly, allows a single application to benefit from all the GPUs available in the cluster without losing efficiency.
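To illustrate the transparency that makes AaaS possible (a minimal sketch; the kernel and its names are illustrative, not taken from the paper's financial-risk application), a plain CUDA program like the one below needs no source changes to run under rCUDA: the rCUDA client library intercepts the CUDA runtime calls and forwards them to GPUs on remote servers, which are selected at deployment time through client-side settings in rCUDA's configuration scheme rather than in the code.

    #include <cuda_runtime.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative kernel: scales a vector in place. */
    __global__ void scale(float *v, float k, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) v[i] *= k;
    }

    int main(void) {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);

        float *h = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) h[i] = 1.0f;

        /* Under rCUDA these runtime calls are intercepted by the client
           library and executed on a remote GPU; the source is unchanged. */
        float *d;
        cudaMalloc(&d, bytes);
        cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);
        scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);
        cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);

        printf("h[0] = %f\n", h[0]); /* expect 2.0 */
        cudaFree(d);
        free(h);
        return 0;
    }

Because the decision of whether a logical device maps to a local or remote GPU is made entirely outside the application, a single program can draw on all the GPUs available in the cluster, which is the sharing behaviour the paper evaluates.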