846 results for Efficient market theory
The dual nature of information systems in enabling a new wave of hardware ventures: Towards a theory
Abstract:
Hardware ventures are emerging entrepreneurial firms that create new market offerings based on the development of digital devices. These ventures are important elements in the global economy but have not yet received much attention in the literature. Our interest in examining hardware ventures is specifically in the role that information system (IS) resources play in enabling them. We ask how the role of IS resources for hardware ventures can be conceptualized and develop a framework for assessment. Our framework builds on the distinction between operand and operant resources and distinguishes between two key lifecycle stages of hardware ventures: start-up and growth. We show how this framework can be used to discuss the role, nature, and use of IS for hardware ventures and outline empirical research strategies that flow from it. Our work contributes to broadening and enriching the IS field by drawing attention to its role in significant and novel phenomena.
Abstract:
Efficient yet inexpensive electrocatalysts for the oxygen reduction reaction (ORR) are an essential component of renewable energy devices, such as fuel cells and metal-air batteries. We herein interleaved novel Co3O4 nanosheets with graphene to develop a first-ever sheet-on-sheet heterostructured electrocatalyst for the ORR, whose electrocatalytic activity outperformed that of the state-of-the-art commercial Pt/C with exceptional durability in alkaline solution. The composite demonstrates the highest activity of all the nonprecious metal electrocatalysts, such as those derived from Co3O4 nanoparticle/nitrogen-doped graphene hybrids and carbon nanotube/nanoparticle composites. Density functional theory (DFT) calculations indicated that the outstanding performance originates from the significant charge transfer from graphene to the Co3O4 nanosheets, which promotes electron transport through the whole structure. Theoretical calculations also revealed that the enhanced stability can be ascribed to the strong interaction generated between the two types of sheets.
Abstract:
We report herein highly efficient photocatalysts comprising supported nanoparticles (NPs) of gold (Au) and palladium (Pd) alloys, which utilize visible light to catalyse Suzuki cross-coupling reactions at ambient temperature. The alloy NPs strongly absorb visible light, energizing their conduction electrons and producing highly energetic electrons at the surface sites. The surface of the energized NPs activates the substrates, and these particles exhibit good activity across a range of typical Suzuki reaction combinations. The photocatalytic efficiencies strongly depend on the Au:Pd ratio of the alloy NPs and on the intensity and wavelength of the irradiating light. The results show that the alloy nanoparticles efficiently couple thermal and photonic energy sources to drive Suzuki reactions. Results of the density functional theory (DFT) calculations indicate that transfer of the light-excited electrons from the nanoparticle surface to the reactant molecules adsorbed on that surface activates the reactants. The knowledge acquired in this study may inspire further studies of new efficient photocatalysts and a wide range of organic syntheses driven by sunlight.
Abstract:
This chapter aims to provide a comprehensive understanding of the theory, regulations and practice of corporate social responsibility (CSR) assurance in China. Building on stakeholder and related theories, it employs a demand-and-supply analytical framework to illustrate the development and current status of China’s CSR assurance market. It finds that government agencies, stock exchanges, accounting standard setters and industrial associations have collectively shaped the current regulatory framework on CSR reporting and assurance in China. Regarding demand, differences in the social and legal environments across such a large country influence the regional development of CSR assurance. Industries under intensive CSR regulations and/or social reporting pressure—for example, the finance, aviation and mining industries—more actively pursue CSR report assurance. Regarding supply, the CSR assurance market in China is shared by accounting firms and professional certification bodies. The different assurance standards adopted by the two streams of assurance providers have different foci, potentially leading to different assurance coverage and emphases.
Abstract:
Computation of the dependency basis is the fundamental step in solving the membership problem for functional dependencies (FDs) and multivalued dependencies (MVDs) in relational database theory. We examine this problem from an algebraic perspective. We introduce the notion of the inference basis of a set M of MVDs and show that it contains the maximum information about the logical consequences of M. We propose the notion of a dependency-lattice and develop an algebraic characterization of the inference basis using simple notions from lattice theory. We also establish several interesting properties of dependency-lattices related to the implication problem. Based on our characterization, we synthesize efficient algorithms for (a) computing the inference basis of a given set M of MVDs; (b) computing the dependency basis of a given attribute set w.r.t. M; and (c) solving the membership problem for MVDs. We also show that our results naturally extend to incorporate FDs, in a way that enables the solution of the membership problem for FDs and MVDs taken together. We finally show that our algorithms are more efficient than existing ones when used to solve what we term the ‘generalized membership problem’.
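For orientation, the dependency basis referred to above can also be computed with the classical refinement procedure over the single block U - X; the Python sketch below illustrates that baseline procedure only (the universe, example MVDs and helper name are illustrative assumptions, not the lattice-based algorithms of the paper).

```python
def dep_basis(X, mvds, U):
    """Dependency basis of attribute set X w.r.t. a set of MVDs over universe U,
    via the classical refinement of the single block U - X (baseline procedure,
    not the dependency-lattice algorithms described in the abstract)."""
    X = frozenset(X)
    basis = {frozenset(U) - X} - {frozenset()}
    changed = True
    while changed:
        changed = False
        for V, W in mvds:                       # each MVD V ->-> W
            V, W = frozenset(V), frozenset(W)
            for S in list(basis):
                if S & V:                       # W can only split blocks disjoint from V
                    continue
                inside, outside = S & W, S - W
                if inside and outside:          # W properly splits the block S
                    basis.remove(S)
                    basis.update({inside, outside})
                    changed = True
    return basis

# Membership test: X ->-> Y follows from the MVDs iff Y - X is a union of blocks.
U = set("ABCD")
mvds = [({"A"}, {"B", "C"}), ({"B"}, {"C"})]
print(sorted(map(sorted, dep_basis({"A"}, mvds, U))))   # [['B', 'C'], ['D']]
```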
Abstract:
This dissertation is a theoretical study of finite-state based grammars used in natural language processing. The study is concerned with certain varieties of finite-state intersection grammars (FSIG) whose parsers define regular relations between surface strings and annotated surface strings. The study focuses on the following three aspects of FSIGs: (i) Computational complexity of grammars under limiting parameters: the computational complexity in practical natural language processing is approached through performance-motivated parameters on structural complexity. Each parameter splits some grammars in the Chomsky hierarchy into an infinite set of subset approximations. When the approximations are regular, they seem to fall into the logarithmic-time hierarchy and the dot-depth hierarchy of star-free regular languages. This theoretical result is important and possibly relevant to grammar induction. (ii) Linguistically applicable structural representations: related to the linguistically applicable representations of syntactic entities, the study contains new bracketing schemes that cope with dependency links, left- and right-branching, crossing dependencies and spurious ambiguity. New grammar representations that resemble the Chomsky-Schützenberger representation of context-free languages are presented in the study, and they include, in particular, representations for mildly context-sensitive non-projective dependency grammars whose performance-motivated approximations are parseable in linear time. (iii) Compilation and simplification of linguistic constraints: efficient compilation methods for certain regular operations, such as generalized restriction, are presented. These include an elegant algorithm that has already been adopted as the approach in a proprietary finite-state tool. In addition to the compilation methods, an approach to on-the-fly simplification of finite-state representations of parse forests is sketched. These findings are tightly coupled with each other under the theme of locality. I argue that the findings help us to develop better, linguistically oriented formalisms for finite-state parsing and more efficient parsers for natural language processing. Keywords: syntactic parsing, finite-state automata, dependency grammar, first-order logic, linguistic performance, star-free regular approximations, mildly context-sensitive grammars
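As background, the operation at the heart of finite-state intersection grammars is the intersection of regular constraints, i.e. the product construction on finite automata; the minimal, self-contained Python sketch below shows that textbook construction only (the toy DFAs, encoding and helper names are illustrative assumptions, not the thesis's formalism).

```python
from itertools import product

def intersect_dfas(d1, d2):
    """Product construction: a DFA accepting the intersection of two DFAs.
    Each DFA is (states, alphabet, delta, start, finals), with delta a dict
    keyed by (state, symbol); both DFAs are assumed complete over the same alphabet."""
    states1, alpha, delta1, s1, f1 = d1
    states2, _,     delta2, s2, f2 = d2
    states = set(product(states1, states2))
    delta = {((p, q), a): (delta1[(p, a)], delta2[(q, a)])
             for (p, q) in states for a in alpha}
    finals = {(p, q) for (p, q) in states if p in f1 and q in f2}
    return states, alpha, delta, (s1, s2), finals

def accepts(dfa, word):
    """Run the DFA on the word and report acceptance."""
    _, _, delta, state, finals = dfa
    for a in word:
        state = delta[(state, a)]
    return state in finals

# Constraint 1: an even number of 'a's; constraint 2: the word ends in 'b'.
even_a = ({0, 1}, {"a", "b"}, {(0, "a"): 1, (1, "a"): 0, (0, "b"): 0, (1, "b"): 1}, 0, {0})
ends_b = ({0, 1}, {"a", "b"}, {(0, "a"): 0, (1, "a"): 0, (0, "b"): 1, (1, "b"): 1}, 0, {1})
both = intersect_dfas(even_a, ends_b)
print(accepts(both, "aab"), accepts(both, "ab"))   # True False
```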
Abstract:
The test based on comparison of the characteristic coefficients of the adjacency matrices of the corresponding graphs for detection of isomorphism in kinematic chains has been shown to fail in the case of two pairs of ten-link, simple-jointed chains, one pair corresponding to single-freedom chains and the other pair corresponding to three-freedom chains. An assessment of the merits and demerits of available methods for detection of isomorphism in graphs and kinematic chains is presented, keeping in view the suitability of the methods for use in computerized structural synthesis of kinematic chains. A new test based on the characteristic coefficients of the “degree” matrix of the corresponding graph is proposed for detection of isomorphism in kinematic chains. The new test is found to be successful in a number of examples of graphs where the test based on the characteristic coefficients of the adjacency matrix fails. It has also been found to be successful in distinguishing the structures of all known simple-jointed kinematic chains in the categories of (a) single-freedom chains with up to 10 links, (b) two-freedom chains with up to 9 links and (c) three-freedom chains with up to 10 links.
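The mechanics of such a test, comparing the characteristic polynomial coefficients of a matrix associated with each chain, can be sketched with NumPy as below; the sketch uses plain adjacency matrices (the paper's “degree” matrix would be substituted), and the toy link-adjacency data and helper names are illustrative assumptions.

```python
import numpy as np

def char_coeffs(M, decimals=6):
    """Coefficients of the characteristic polynomial det(xI - M),
    rounded so floating-point eigenvalue noise does not break comparison."""
    return tuple(np.round(np.poly(M), decimals))

def same_char_coeffs(M1, M2):
    """Necessary (not sufficient) condition for isomorphism:
    the two matrices share all characteristic coefficients."""
    return char_coeffs(M1) == char_coeffs(M2)

# Toy link-adjacency matrices of two 4-link chains (illustrative data only).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
B = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]])   # the same 4-cycle with links 3 and 4 relabelled
print(same_char_coeffs(A, B))  # True: identical coefficients, consistent with isomorphism
```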
Abstract:
A computationally efficient agglomerative clustering algorithm based on multilevel theory is presented. Here, the data set is divided randomly into a number of partitions. The samples of each partition are clustered separately using a hierarchical agglomerative clustering algorithm to form sub-clusters. These are merged at higher levels to obtain the final classification. This algorithm leads to the same classification as the hierarchical agglomerative clustering algorithm when the clusters are well separated. The advantages of this algorithm are a short run time and a small storage requirement. It is observed that the savings in storage space and computation time increase nonlinearly with the sample size.
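A minimal SciPy sketch of the two-level strategy described (random partitioning, hierarchical clustering within each partition, merging of sub-clusters at the top level) is given below; the partition count, linkage method and merging via sub-cluster centroids are illustrative assumptions rather than the paper's exact procedure, and each partition is assumed to contain at least two samples.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def multilevel_agglomerative(X, n_clusters, n_partitions=4, rng=None):
    """Two-level agglomerative clustering sketch:
    1) split the data randomly into partitions,
    2) cluster each partition hierarchically into sub-clusters,
    3) merge sub-cluster centroids at the top level and map labels back."""
    rng = np.random.default_rng(rng)
    idx = rng.permutation(len(X))
    parts = np.array_split(idx, n_partitions)

    centroids, members = [], []
    for p in parts:
        # Over-cluster each partition so the top level has something to merge.
        k_local = min(len(p), 2 * n_clusters)
        labels = fcluster(linkage(X[p], method="average"), k_local, criterion="maxclust")
        for c in np.unique(labels):
            rows = p[labels == c]
            centroids.append(X[rows].mean(axis=0))
            members.append(rows)

    # Top level: agglomerative clustering of the sub-cluster centroids.
    top = fcluster(linkage(np.vstack(centroids), method="average"),
                   n_clusters, criterion="maxclust")
    final = np.empty(len(X), dtype=int)
    for sub_label, rows in zip(top, members):
        final[rows] = sub_label
    return final

# Usage: labels = multilevel_agglomerative(data, n_clusters=3)
```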
Abstract:
The objective of this thesis is to find out how dominant firms in a liberalised electricity market will react when they face an increase in costs due to emissions trading, and how this will affect the price of electricity. The Nordic electricity market is chosen as the setting in which to examine the question, since recent studies on the subject suggest that the interaction between electricity markets and emissions trading is very much dependent on conditions specific to each market area. There is reason to believe that imperfect competition prevails in the Nordic market, and thus the issue is approached through the theory of oligopolistic competition. The generation capacity available on the market, the marginal cost of electricity production and seasonal levels of demand form the data based on which the dominant firms are modelled using the Cournot model of competition. The calculations are made for two levels of demand, high and low, and with several values of demand elasticity. The producers are first modelled under no carbon costs and then with the cost of carbon dioxide at €20/t added to those technologies subject to carbon regulation. In all cases the situation under perfect competition is determined as a comparison point for the results of the Cournot game. The results imply that the potential for market power does exist in the Nordic market, but the possibility of exercising market power depends on the demand level. In seasons of high demand the dominant firms may raise the price significantly above competitive levels, and the situation is aggravated when the cost of carbon dioxide is accounted for. Under low demand levels there is no difference between perfect and imperfect competition. The results are highly dependent on the price elasticity of demand.
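The kind of comparison described can be illustrated with a stylized symmetric Cournot game under linear inverse demand, where the competitive price equals marginal cost and the Cournot price exceeds it; all parameter values in the Python sketch below (demand intercept and slope, marginal cost, emission factor, number of firms) are illustrative assumptions, not the thesis's data.

```python
def cournot_price(a, b, n, c):
    """Equilibrium price for n symmetric Cournot firms facing
    inverse demand P = a - b*Q and constant marginal cost c."""
    q_firm = (a - c) / (b * (n + 1))        # symmetric best-response fixed point
    return a - b * n * q_firm               # equals (a + n*c) / (n + 1)

# Illustrative numbers only (EUR/MWh, tCO2/MWh); not the thesis's data.
a, b, n = 120.0, 0.05, 3                    # demand intercept/slope, 3 dominant firms
c0 = 25.0                                   # marginal cost without a carbon cost
emission_factor = 0.8                       # tCO2 per MWh of the marginal technology
c1 = c0 + 20.0 * emission_factor            # marginal cost with CO2 priced at 20 EUR/t

for label, c in [("no carbon cost", c0), ("CO2 at 20 EUR/t", c1)]:
    print(f"{label}: competitive price = {c:.1f}, "
          f"Cournot price = {cournot_price(a, b, n, c):.1f}")
```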
Abstract:
This chapter challenges current approaches to defining the context and process of entrepreneurship education. In modeling our classrooms as a microcosm of the world our current and future students will enter, this chapter brings to life (and celebrates) the ever-present diversity found within. The chapter attempts to make an important (and unique) contribution to the field of enterprise education by illustrating how we can determine the success of (1) our efforts as educators, (2) our students, and (3) our various teaching methods. The chapter is based on two specific premises, the most fundamental being the assertion that the performance of student, educator and institution can only be accounted for by accepting the nature of the dialogic relationship between the student and educator and between the educator and institution. A second premise is that at any moment in time the educator can be assessed as being either efficient or inefficient, due to the presence of observable heterogeneity in the learning environment that produces differential learning outcomes. This chapter claims that understanding and appreciating the nature of heterogeneity in our classrooms provides an avenue for improvement in all facets of learning and teaching. To explain this claim, Haskell’s (1949) theory of coaction is resurrected to provide a lens through which all manner of interaction occurring within all forms of educational contexts can be explained. Haskell (1949) asserted that coaction theory had three salient features.
Abstract:
Analyzing statistical dependencies is a fundamental problem in all empirical science. Dependencies help us understand causes and effects, create new scientific theories, and invent cures to problems. Nowadays, large amounts of data are available, but efficient computational tools for analyzing the data are missing. In this research, we develop efficient algorithms for a commonly occurring search problem - searching for the statistically most significant dependency rules in binary data. We consider dependency rules of the form X->A or X->not A, where X is a set of positive-valued attributes and A is a single attribute. Such rules describe which factors either increase or decrease the probability of the consequent A. Classic examples are genetic and environmental factors, which can either cause or prevent a disease. The emphasis in this research is that the discovered dependencies should be genuine - i.e. they should also hold in future data. This is an important distinction from traditional association rules, which - in spite of their name and a similar appearance to dependency rules - do not necessarily represent statistical dependencies at all, or represent only spurious connections that occur by chance. Therefore, the principal objective is to search for rules with statistical significance measures. Another important objective is to search only for non-redundant rules, which express the real causes of dependence without any incidental extra factors. The extra factors do not add any new information on the dependence, but can only blur it and make it less accurate in future data. The problem is computationally very demanding, because the number of all possible rules increases exponentially with the number of attributes. In addition, neither statistical dependency nor statistical significance is a monotonic property, which means that the traditional pruning techniques do not work. As a solution, we first derive the mathematical basis for pruning the search space with any well-behaving statistical significance measure. The mathematical theory is complemented by a new algorithmic invention, which enables an efficient search without any heuristic restrictions. The resulting algorithm can be used to search for both positive and negative dependencies with any commonly used statistical measure, such as Fisher's exact test, the chi-squared measure, mutual information, and z scores. According to our experiments, the algorithm scales well, especially with Fisher's exact test. It can easily handle even the densest data sets with 10000-20000 attributes. Still, the results are globally optimal, which is a remarkable improvement over the existing solutions. In practice, this means that the user does not have to worry whether the dependencies hold in future data or if the data still contains better, but undiscovered, dependencies.
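For a single candidate rule X -> A, one of the significance measures mentioned (Fisher's exact test) reduces to a one-sided test on the 2x2 contingency table of X against A; the minimal SciPy sketch below shows only that per-rule test (the toy data and helper name are illustrative assumptions, and the search and pruning machinery of the thesis is not shown).

```python
import numpy as np
from scipy.stats import fisher_exact

def rule_p_value(data, x_cols, a_col):
    """One-sided Fisher's exact test for the positive dependency rule X -> A
    in a binary 0/1 data matrix: rows where all attributes in X are 1 should
    contain attribute A more often than expected under independence."""
    x_true = data[:, x_cols].all(axis=1)
    a_true = data[:, a_col] == 1
    table = [[np.sum(x_true & a_true),  np.sum(x_true & ~a_true)],
             [np.sum(~x_true & a_true), np.sum(~x_true & ~a_true)]]
    _, p = fisher_exact(table, alternative="greater")
    return p

# Toy binary data: columns 0-1 form X, column 2 is the consequent A.
rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(200, 3))
data[:, 2] |= data[:, 0] & data[:, 1]        # plant a genuine dependency X -> A
print(rule_p_value(data, [0, 1], 2))         # small p-value for the planted rule
```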
Abstract:
The National Energy Efficient Building Project (NEEBP) Phase One report, published in December 2014, investigated “process issues and systemic failures” in the administration of the energy performance requirements in the National Construction Code. It found that most stakeholders believed that under-compliance with these requirements is widespread across Australia, with similar issues being reported in all states and territories. The report found that many different factors were contributing to this outcome and, as a result, many recommendations were offered that together would be expected to remedy the systemic issues reported. To follow up on this Phase 1 report, three additional projects were commissioned as part of Phase 2 of the overall NEEBP project. This report deals with the development and piloting of an Electronic Building Passport (EBP) tool – a project undertaken jointly by pitt&sherry and a team at the Queensland University of Technology (QUT) led by Dr Wendy Miller. The other Phase 2 projects cover audits of Class 1 buildings and issues relating to building alterations and additions. The passport concept aims to provide all stakeholders with (controlled) access to the key documentation and information that they need to verify the energy performance of buildings. This trial project deals with residential buildings but in principle could apply to any building type. Nine councils were recruited to help develop and test a pilot electronic building passport tool. The participation of these councils – across all states – enabled an assessment of the extent to which these councils currently utilise documentation to track the compliance of residential buildings with the energy performance requirements in the National Construction Code (NCC). Overall we found that none of the participating councils are currently compiling all of the energy performance-related documentation that would demonstrate code compliance. The key reasons for this include: a major lack of clarity on precisely what documentation should be collected; cost and budget pressures; low public/stakeholder demand for the documentation; and a pragmatic judgement that non-compliance with any regulated documentation requirements represents a relatively low risk for them. Some councils reported producing documentation, such as certificates of final completion, only on demand. Only three of the nine council participants reported regularly conducting compliance assessments or audits utilising this documentation and/or inspections. Overall we formed the view that documentation and information tracking processes operating within the building standards and compliance system are not working to assure compliance with the Code’s energy performance requirements. In other words the Code, and its implementation under state and territory regulatory processes, is falling short as a ‘quality assurance’ system for consumers. As a result it is likely that the new housing stock is under-performing relative to policy expectations, consuming unnecessary amounts of energy, imposing unnecessarily high energy bills on occupants, and generating unnecessary greenhouse gas emissions. At the same time, councils noted that the demand for documentation relating to building energy performance was low. All the participant councils in the EBP pilot agreed that documentation and information processes need to work more effectively if the potential regulatory and market drivers towards energy efficient homes are to be harnessed.
These findings are fully consistent with the Phase 1 NEEBP report. It was also agreed that an EBP system could potentially play an important role in improving documentation and information processes. However, only one of the participant councils indicated that they might adopt such a system on a voluntary basis. The majority felt that such a system would only be taken up if it were: - A nationally agreed system, imposed as a mandatory requirement under state or national regulation; - Capable of being used by multiple parties, including councils, private certifiers, building regulators, builders and energy assessors in particular; and - Fully integrated into their existing document management systems, or at least seamlessly compatible rather than a separate, unlinked tool. Further, we note that the value of an EBP in capturing statistical information relating to the energy performance of buildings would be much greater if an EBP were adopted on a nationally consistent basis. Councils were clear that a key impediment to the take-up of an EBP system is that they are facing very considerable budget and staffing challenges. They report that they are often unable to meet all community demands from the resources available to them. Therefore they are unlikely to provide resources to support the roll-out of an EBP system on a voluntary basis. Overall, we conclude from this pilot that the public good would be well served if the Australian, state and territory governments continued to develop and implement an Electronic Building Passport system in a cost-efficient and effective manner. This development should occur with detailed input from building regulators, the Australian Building Codes Board (ABCB), councils and private certifiers in the first instance. This report provides a suite of recommendations (Section 7.2) designed to advance the development and guide the implementation of a national EBP system.
Abstract:
The dissertation consists of an introductory chapter and three essays that apply search-matching theory to study the interaction of labor market frictions, technological change and macroeconomic fluctuations. The first essay studies the impact of capital-embodied growth on equilibrium unemployment by extending a vintage capital/search model to incorporate vintage human capital. In addition to the capital obsolescence (or creative destruction) effect that tends to raise unemployment, vintage human capital introduces a skill obsolescence effect of faster growth that has the opposite sign. Faster skill obsolescence reduces the value of unemployment and hence wages, and leads to more job creation and less job destruction, unambiguously reducing unemployment. The second essay studies the effect of skill-biased technological change on skill mismatch and the allocation of workers and firms in the labor market. By allowing workers to invest in education, we extend a matching model with two-sided heterogeneity to incorporate an endogenous distribution of high- and low-skill workers. We consider various possibilities for the cost of acquiring skills and show that while unemployment increases in most scenarios, the effect on the distribution of vacancy and worker types varies according to the structure of skill costs. When the model is extended to incorporate endogenous labor market participation, we show that the unemployment rate becomes less informative about the state of the labor market as the participation margin absorbs employment effects. The third essay studies the effects of labor taxes on equilibrium labor market outcomes and macroeconomic dynamics in a New Keynesian model with matching frictions. Three policy instruments are considered: a marginal tax and a tax subsidy to produce tax progression schemes, and a replacement ratio to account for variability in outside options. In equilibrium, the marginal tax rate and the replacement ratio dampen economic activity, whereas tax subsidies boost the economy. The marginal tax rate and replacement ratio amplify shock responses, whereas employment subsidies weaken them. The tax instruments affect the degree to which the wage absorbs shocks. We show that increasing tax progression when taxation is initially progressive is harmful for steady-state employment and output, and amplifies the sensitivity of macroeconomic variables to shocks. When taxation is initially proportional, increasing progression is beneficial for output and employment and dampens shock responses.
Abstract:
Reducing tariffs and increasing consumption taxes is standard IMF advice to countries that want to open up their economy without hurting government finances. Indeed, theoretical analysis of such a tariff–tax reform shows an unambiguous increase in welfare and government revenues. The present paper examines whether the country that implements such a reform ends up opening up its markets to international trade, i.e. whether its market access improves. It is shown that this is not necessarily so. We also show that, compared to a reform of tariffs alone, the tariff–tax reform is a less efficient proposal in terms of both market access and welfare.
Abstract:
Reducing carbon dioxide (CO2) to hydrocarbon fuel with solar energy is significant for high-density solar energy storage and carbon balance. In this work, single palladium/platinum (Pd/Pt) atoms supported on graphitic carbon nitride (g-C3N4), i.e. Pd/g-C3N4 and Pt/g-C3N4, acting as photocatalysts for CO2 reduction were investigated by density functional theory (DFT) calculations for the first time. During CO2 reduction, the individual metal atoms function as the active sites, while g-C3N4 provides the source of hydrogen (H*) from the hydrogen evolution reaction. The complete, as-designed photocatalysts exhibit excellent activity in CO2 reduction. HCOOH is the preferred product of CO2 reduction on the Pd/g-C3N4 catalyst, with a rate-determining barrier of 0.66 eV, while the Pt/g-C3N4 catalyst prefers to reduce CO2 to CH4, with a rate-determining barrier of 1.16 eV. In addition, depositing the single-atom catalysts on g-C3N4 significantly enhances the visible light absorption, rendering them ideal for visible-light reduction of CO2. Our findings open a new avenue of CO2 reduction for renewable energy supply.