Abstract:
Casting design entails knowledge of various interacting factors that are unique to the casting process, and product designers often lack the required foundry-specific knowledge. Casting designers normally have to liaise with casting experts to ensure that the product designed is castable and that the optimum casting method is selected. This two-way communication results in long design lead times, and its absence can easily lead to incorrect casting design. A computer-based system at the disposal of the design engineer can, however, alleviate this problem and enhance the prospects of designing castings for manufacture. This paper proposes a knowledge-based expert system approach to assist casting product designers in selecting the most suitable casting process for specified casting design requirements during the design phase of product manufacture. A prototype expert system has been developed, based on a production-rules knowledge representation technique. The proposed system consists of a number of autonomous but interconnected levels, each dealing with a specific group of factors, namely casting alloy, shape and complexity parameters, accuracy requirements, and comparative costs based on production quantity. The user interface is designed to give the user a clear view of how casting design parameters affect the selection of various casting processes at each level; if necessary, appropriate design changes can be made to improve the castability of the product being designed, or to suit the design to a preferred casting method.
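The level-by-level, rule-based selection described above can be sketched as a small production-rule system. The rules, thresholds, and process names below are hypothetical illustrations, not the paper's actual knowledge base:

```python
# Minimal production-rule sketch of casting process selection.
# The rules and thresholds are hypothetical illustrations only.

def select_processes(design):
    """Return the casting processes compatible with the design parameters."""
    candidates = {"sand casting", "die casting", "investment casting"}

    # Level 1: casting alloy (high-melting alloys rule out die casting)
    if design["alloy_melting_point_c"] > 1000:
        candidates.discard("die casting")

    # Level 2: accuracy requirements (tight tolerances rule out sand casting)
    if design["tolerance_mm"] < 0.5:
        candidates.discard("sand casting")

    # Level 3: comparative cost based on production quantity
    if design["quantity"] < 1000:
        candidates.discard("die casting")  # tooling cost not amortised

    return candidates

design = {"alloy_melting_point_c": 660, "tolerance_mm": 0.3, "quantity": 500}
print(select_processes(design))
```

Each rule level narrows the candidate set, so the designer can see which parameter eliminated a given process and adjust the design accordingly, mirroring the interactive behaviour the paper describes.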
Abstract:
ε-caprolactam is a monomer of high value. The chemical reutilization of polyamide 6-containing carpets for ε-caprolactam recovery therefore offers an economic benefit and is performed on a technical scale by means of the Zimmer process, in which polyamide 6 is depolymerized with steam and phosphoric acid. An alternative to this process is thermal depolymerization, catalyzed or non-catalyzed. To investigate this alternative in more detail, the formal kinetic parameters of (i) the thermal depolymerization of polyamide 6, (ii) thermal depolymerization in the presence of sodium/potassium hydroxide, and (iii) thermal depolymerization in the presence of phosphoric acid are determined in this work. Based on the kinetics of the catalyzed and non-catalyzed depolymerization, a stepwise pyrolysis procedure is designed by which the formation of ε-caprolactam from polyamide 6 can be separated from the formation of other pyrolysis products. © 2001 Elsevier Science B.V.
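To illustrate how such formal kinetic parameters can be extracted, a first-order depolymerization obeys ln(1 − X) = −kt, so the rate constant follows from a linear fit of conversion-time data. A minimal sketch with synthetic data; the rate constant and time points are hypothetical, not values from this study:

```python
import math

# Hypothetical first-order depolymerisation: estimate the rate constant k
# from conversion-time data via the linearised form ln(1 - X) = -k * t.
# The data points below are synthetic, not measurements from this study.

times = [0.0, 10.0, 20.0, 30.0, 40.0]                # min
k_true = 0.05                                        # 1/min
conversions = [1 - math.exp(-k_true * t) for t in times]

# Least-squares slope through the origin of ln(1 - X) versus t
num = sum(t * math.log(1 - x) for t, x in zip(times, conversions))
den = sum(t * t for t in times)
k_est = -num / den
print(f"estimated k = {k_est:.3f} 1/min")
```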
Abstract:
The research presented in this thesis was developed as part of DIBANET, an EC-funded project aiming to develop an energetically self-sustainable process for the production of diesel-miscible biofuels (i.e. ethyl levulinate) via acid hydrolysis of selected biomass feedstocks. Three thermal conversion technologies, pyrolysis, gasification and combustion, were evaluated in the present work with the aim of recovering the energy stored in the acid hydrolysis solid residue (AHR). Consisting mainly of lignin and humins, the AHR can contain up to 80% of the energy in the original feedstock. Pyrolysis of AHR proved unsatisfactory, so attention focussed on gasification and combustion with the aim of producing heat and/or power to supply the energy demanded by the ethyl levulinate production process. A thermal processing rig consisting of a Laminar Entrained Flow Reactor (LEFR) equipped with solid and liquid collection and online gas analysis systems was designed and built to explore pyrolysis, gasification and air-blown combustion of AHR. The maximum liquid yield for pyrolysis of AHR was 30 wt% with a volatile conversion of 80%. The gas yield for AHR gasification was 78 wt%, with 8 wt% tar yields and conversion of volatiles close to 100%. In combustion, 90 wt% of the AHR was transformed into gas, with volatile conversions above 90%. Gasification with 5 vol% O2 / 95 vol% N2 resulted in a nitrogen-diluted, low heating value gas (2 MJ/m3). Steam and oxygen-blown gasification of AHR were additionally investigated in a batch gasifier at KTH in Sweden. Steam promoted the formation of hydrogen (25 vol%) and methane (14 vol%), improving the gas heating value to 10 MJ/m3, below the typical value for steam gasification due to equipment limitations. Arrhenius kinetic parameters were calculated using data collected with the LEFR to provide reaction rate information for process design and optimisation.
The activation energy (EA) and pre-exponential factor (k0, in s-1) for pyrolysis (EA = 80 kJ/mol, ln k0 = 14), gasification (EA = 69 kJ/mol, ln k0 = 13) and combustion (EA = 42 kJ/mol, ln k0 = 8) were calculated after linearly fitting the data using the random pore model. Kinetic parameters for pyrolysis and combustion were also determined by dynamic thermogravimetric analysis (TGA), including studies of the original biomass feedstocks for comparison. Results obtained by differential and integral isoconversional methods for activation energy determination were compared. The activation energy calculated by the Vyazovkin method was 103-204 kJ/mol for pyrolysis of untreated feedstocks and 185-387 kJ/mol for AHRs. The combustion activation energy was 138-163 kJ/mol for biomass and 119-158 kJ/mol for AHRs. The non-linear least squares method was used to determine the reaction model and pre-exponential factor. Pyrolysis and combustion of biomass were best modelled by a combination of third-order reaction and three-dimensional diffusion models, while AHR decomposed following the third-order reaction model for pyrolysis and the three-dimensional diffusion model for combustion.
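Given these Arrhenius parameters, the rate constant at any temperature follows from k = k0·exp(−EA/(R·T)). A minimal sketch using the pyrolysis, gasification and combustion values reported above; the 773 K evaluation temperature is an illustrative choice, not one taken from the thesis:

```python
import math

R = 8.314  # J/(mol*K), universal gas constant

def arrhenius(ea_kj_per_mol, ln_k0, temp_k):
    """Rate constant k = k0 * exp(-EA / (R*T)), returned in s^-1."""
    return math.exp(ln_k0 - ea_kj_per_mol * 1000.0 / (R * temp_k))

# Parameters reported above; 773 K (500 degC) is an illustrative temperature.
for name, ea, ln_k0 in [("pyrolysis", 80, 14),
                        ("gasification", 69, 13),
                        ("combustion", 42, 8)]:
    print(f"{name}: k(773 K) = {arrhenius(ea, ln_k0, 773):.2f} s^-1")
```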
Abstract:
Context: Many large organizations juggle an application portfolio that contains different applications fulfilling similar tasks in the organization. In an effort to reduce operating costs, they are attempting to consolidate such applications. Before consolidating applications, the work that is done with these applications must be harmonized; this is also known as process harmonization. Objective: The increased interest in process harmonization calls for measures to quantify the extent to which processes have been harmonized. These measures should also uncover the factors that are of interest when harmonizing processes. Currently, such measures do not exist. This study therefore develops and validates a measurement model to quantify the level of process harmonization in an organization. Method: The measurement model was developed by means of a literature study and structured interviews. Subsequently, it was validated through a survey, using factor analysis and correlations with known related constructs. Results: A valid and reliable measurement model was developed. The factors found to constitute process harmonization are: the technical design of the business process and its data, the resources that execute the process, and the information systems that are used in the process. In addition, strong correlations were found between process harmonization and process standardization, and between process complexity and process harmonization. Conclusion: The measurement model can be used by practitioners, because it shows them the factors that must be taken into account when harmonizing processes and provides them with a means to quantify the extent to which they have succeeded in harmonizing their processes. It can also be used by researchers to conduct further empirical research in the area of process harmonization.
Abstract:
Financing is a critical entrepreneurial activity (Shane et al. 2003), and within the study of entrepreneurship, behaviour has been identified as an area requiring further exploration (Bird et al. 2012). Since 2008 supply-side conditions for SMEs have been severe, and entrepreneurs increasingly have to bundle or 'orchestrate' funding from a variety of sources in order to successfully finance the firm (Wright and Stigliani 2013: p.15). This longitudinal study uses psychometric testing to measure the behavioural competences of a panel of sixty entrepreneurs in the Creative Industries sector. Interviews were conducted over a three-year period to identify finance-seeking behaviour. The research takes a pragmatic realism perspective to examine process and the different behavioural competences of entrepreneurs, and explores the predictive qualities of this behaviour in a funding context. The research confirmed that a strong behavioural characteristic, validated through interviews and psychometric testing, was an orientation towards engagement and working with other organisations. In a funding context, this manifested itself in entrepreneurs using networks, seeking advice and sharing equity to fund growth. These co-operative, collaborative characteristics differ from the classic image of the entrepreneur as a risk-taker or extrovert; leadership and achievement orientation were amongst the lowest scores. Three distinctive groups were identified, and subsequent analysis showed that these make a positive contribution to how entrepreneurial behavioural competences can be considered. Belonging to one of these three clusters is a strong predictive indicator of entrepreneurial behaviour, in this context, of how entrepreneurs access finance. The clusters were also shown to have different characteristics in relation to funding outcomes.
The study seeks to contribute a methodology by which entrepreneurs, policy makers and financial institutions can identify competences in finding finance and overcome problems of information asymmetry.
Abstract:
The aim of this survey was to review 187 transcripts from the United Kingdom's General Optical Council (GOC) Disciplinary and Fitness to Practise (FTP) Committee hearings from 2001 to 2011, in order to identify common themes and thereby help practitioners avoid the more frequently occurring pitfalls recorded during this period. The study covered the change in GOC FTP regulations in 2005, which moved from a disciplinary process to a fitness-to-practise process. The number of cases was very small compared with the total number of optometrist and dispensing optician registrants, which rose from 13,709 in 2001-02 to 18,582 in 2010-11. The main findings indicated that between 2001 and 2011 male registrants were three times more likely than female registrants to be brought before a GOC Disciplinary or FTP Committee. Male registrants were also more likely than females to be erased from the GOC registers: the male-to-female ratio for erasures between 2001 and 2011 was five to one, rising to seven to one after the 2005 GOC FTP rule change. Of the cases brought before the Disciplinary and FTP Committees between 2001 and 2011, those implicating theft and fraud were the most frequent, representing 27% of the hearings examined (17% involving NHS fraud and 10% theft or fraud from an employer). Examination of the transcripts revealed that other hearings were more complex: these often had a primary reason for investigation that highlighted further secondary concerns also requiring investigation.
Abstract:
The author’s ideas on the soft budget constraint (SBC) were first expressed in 1976. Much progress has been made in understanding the problem over the ensuing four decades. The study takes issue with those who confine the concept to the process of bailing out loss-making socialist firms. It shows how the syndrome can appear in various organizations and forms in many spheres of the economy and points to the various means available for financial rescue. Single bailouts do not as such generate the SBC syndrome. It develops where the SBC becomes built into expectations. Special heed is paid to features generated by the syndrome in rescuer and rescuee organizations. The study reports on the spread of the syndrome in various periods of the socialist and the capitalist system, in various sectors. The author expresses his views on normative questions and on therapies against the harmful effects. He deals first with actual practice, then places the theory of the SBC in the sphere of ideas and models, showing how it relates to other theoretical trends, including institutional and behavioural economics and theories of moral hazard and inconsistency in time. He shows how far the intellectual apparatus of the SBC has spread in theoretical literature and where it has reached in the process of “canonization” by the economics profession. Finally, he reviews the main research tasks ahead.
Abstract:
In this article we present and analyse the history of bank financing in the Hungarian small and medium enterprise (SME) sector in the period from 2000 to 2012. The credit products offered by banks and credit unions are the most fundamental means of external financing capable of fulfilling the financing needs of a wide array of SMEs, and the conditions of accessing credit, together with its price, exert a decisive influence on the profitability and business opportunities of SMEs. As a result of the economic slowdown, SMEs had to face higher interest rates, decreasing credit limits, and bank financing that became accessible ever more slowly and under stricter conditions. As a consequence, SME business performance fell continuously, to the detriment of the national economy. In the first chapter of the article we present the dynamic development of credit financing in the Hungarian SME sector, along with the causes that triggered it, and then turn to the negative tendencies dating from the onset of the 2008 debt crisis. In the second chapter we discuss the vicious circle through which the business performance of SMEs, as well as the conditions and price of access to credit, entered a negative spiral. In the third and final chapter we make suggestions regarding the direction and means of the government intervention needed to stop and reverse the negative tendencies observed in SME credit financing.
Abstract:
Strategy is highly important for organisational success and the achievement of competitive advantage. Strategy is dynamic, and it depends on accurate individual decision-making by medium- and high-level managers and executives. Since managers formulate strategy, its formulation depends largely on the soundness of their decisions. Making good decisions is a complex task, even more so in today's business world, where large quantities of information and a dynamic environment force people to decide without complete information. As Shafir, Simonson, & Tversky (1993) point out, "the making of decisions, both big and small, is often difficult because of uncertainty and conflict". In this paper the author explains a basic theoretical framework of top managers' individual decision-making, showing how complex the process of making high-impact decisions is; he then compares this theory with one of the most important streams in strategic management, the Resource-Based View (RBV) of the firm. Finally, within the context of individual decision-making and the RBV stream, the author shows how individual decision makers in top management positions constitute a valuable, rare, non-imitable and non-substitutable resource that provides sustained competitive advantage.
Abstract:
The purpose of this study was to assess the knowledge of public school administrators with respect to special education (ESE) law. The study used a sample of 220 public school administrators. A survey instrument was developed consisting of 19 demographic questions and 20 situational scenarios. The scenarios were based on ESE issues of discipline, due process (including IEP procedures), identification, evaluation, placement, and related services. The participants had to decide whether a violation of the ESE child's rights had occurred by marking: (a) Yes, (b) No, or (c) Undecided. After a 77% survey response rate, the scores and demographic information were analysed using a two-way analysis of variance, chi-square, and crosstabs.
Research questions addressed the administrators' overall level of knowledge. Comparisons were made between principals and assistant principals, and differences between levels of schooling were examined. Exploratory questions concerned the ESE issues deemed problematic by administrators, the effects of demographic variables on survey scores, and the resources used by administrators to access ESE information.
The study revealed that: (a) a significant difference was found between the number of ESE courses taken and the score on the survey; (b) the top five resources of ESE information were the region office, school ESE department chairs, ESE teachers, county workshops, and county in-services; (c) problematic areas included discipline, evaluation procedures, placement issues, and IEP due process concerns; (d) administrators as a group did not exhibit satisfactory knowledge of ESE law, with a mean score of 12 correct and 74% of responding administrators scoring at the unsatisfactory level (below 70%); (e) across school levels, elementary administrators scored significantly higher than high school administrators; and (f) assistant principals consistently scored higher than principals on each scenario, with a significant difference at the high school level.
The study reveals a vital need for administrators to receive additional preparation so that they possess a basic understanding of ESE school law and how it affects their schools and school districts, enabling them to meet professional obligations and protect the rights of all individuals involved. Recommendations for this additional administrative preparation and further research topics are discussed.
Abstract:
The purpose of the present research is to demonstrate the influence of a fair price (independent of the subjective evaluation of the price magnitude) on buyers' willingness to purchase. The perceived fairness of a price is conceived to have three components: perceived equity, perceived need, and inferred compliance of the seller with the process rules of pricing. These components reflect the theories of Distributive Justice (as adjusted for conditions of need) and Procedural Justice.
The effect of the three components of a fair price on willingness to purchase is depicted in a causal chain model. Based on the theories of Dissonance and Attribution, conditions of inequity and need activate concerns for Procedural Justice. Under conditions of inequity and need, buyers tend to infer that the seller has not complied with generally accepted pricing practices, thus violating the social norms of Procedural Justice. Inferred violations of Procedural Justice influence the buyer's attitude toward the seller. As predicted by the Theory of Reasoned Action, attitude is then positively related to willingness to purchase.
The model was tested with a survey-based experiment conducted with 408 respondents. Two levels of both equity and need were manipulated with scenarios, a common research method in studies of Distributive and Procedural Justice. The data were analyzed with a structural equation model using LISREL. Although the effect of the "need" manipulation was insignificant, the results indicated a good fit of the model (chi-square = 281, degrees of freedom = 104, Goodness of Fit Index = .924). The conclusion is that the fairness of a price has a significant effect on willingness to purchase, independent of the subjective evaluation of the objective price.
Abstract:
A model was tested to examine relationships among leadership behaviors, team diversity, and team process measures with team performance and satisfaction at both the team and leader-member levels of analysis. Relationships between leadership behavior and team demographic and cognitive diversity were hypothesized to have both direct effects on organizational outcomes and indirect effects through team processes. Leader-member differences were investigated to determine the effects of leader-member diversity on leader-member exchange quality, individual effectiveness, and satisfaction.
Leadership had little direct effect on team performance, but several strong positive indirect effects through team processes. Demographic diversity had no impact on team processes, directly impacted only one performance measure, and moderated the leadership-to-team-process relationship. Cognitive diversity had a number of direct and indirect effects on team performance, the net effects uniformly positive, and did not moderate the leadership-to-team-process relationship. In sum, the team model suggests a complex combination of leadership behaviors positively impacting team processes, demographic diversity having little impact on team process or performance, cognitive diversity having a positive net impact, and team processes having mixed effects on team outcomes.
At the leader-member level, leadership behaviors were a strong predictor of leader-member exchange (LMX) quality. Leader-member demographic and cognitive dissimilarity were each predictors of LMX quality, but failed to moderate the leader behavior to LMX quality relationship. LMX quality was strongly and positively related to self-reported effectiveness and satisfaction.
The study makes several contributions to the literature. First, it explicitly links leadership and team diversity. Second, demographic and cognitive diversity are conceptualized as distinct and multi-faceted constructs. Third, a methodology for creating an index of categorical demographic and interval cognitive measures is provided so that diversity can be measured in a holistic, conjoint fashion. Fourth, the study simultaneously investigates the impact of diversity at the team and leader-member levels of analysis. Fifth, insights into the moderating impact of different forms of team diversity on the leadership-to-team-process relationship are provided. Sixth, the study incorporates a wide range of objective and independent measures to provide a 360° assessment of team performance.
Abstract:
The total time a customer spends in the business process system, called the customer cycle-time, is a major contributor to overall customer satisfaction. Business process analysts and designers are frequently asked to design process solutions with optimal performance. Simulation models have been very popular for quantitatively evaluating business processes; however, simulation is time-consuming and requires extensive modeling experience. Moreover, simulation models neither provide recommendations nor yield optimal solutions for business process design. A queueing network model is a good analytical approach to business process analysis and design, and can provide a useful abstraction of a business process. However, existing queueing network models were developed for telephone systems or applied to manufacturing processes in which machine servers dominate the system. In a business process, the servers are usually people, and the characteristics of human servers, namely specialization and coordination, should be taken into account by the queueing model.
The research described in this dissertation develops an open queueing network model for quick analysis of business processes. Additionally, optimization models are developed to provide optimal business process designs. The queueing network model extends and improves upon existing multi-class open queueing network models (MOQN) so that customer flow in human-server oriented processes can be modeled. The optimization models help business process designers find the optimal design of a business process with consideration of specialization and coordination.
The main findings of the research are as follows. First, parallelization can reduce the cycle-time for those customer classes that require more than one parallel activity; however, under highly utilized servers the coordination time due to parallelization overwhelms its savings, since the waiting time increases significantly and thus the cycle-time increases. Second, the level of industrial technology employed by a company and the coordination time needed to manage the tasks have the strongest impact on business process design: when the level of industrial technology employed by the company is high, more division is required to improve the cycle-time; when the required coordination time is high, consolidation is required to improve the cycle-time.
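To illustrate the kind of quick analytical estimate an open queueing network model provides, the sketch below computes the mean cycle time of a single M/M/c station via the Erlang C formula and compares a pooled (consolidated) server group against the same capacity split into specialists. The arrival rate (lam) and service rate (mu) are hypothetical, not taken from the dissertation:

```python
import math

# Mean cycle time of an M/M/c station via the Erlang C formula.
# lam = arrival rate, mu = service rate per server, c = number of servers.

def mmc_cycle_time(lam, mu, c):
    """Mean time in system (queueing delay + service) for an M/M/c queue."""
    rho = lam / (c * mu)
    assert rho < 1, "offered load must not saturate the servers"
    a = lam / mu
    p0 = 1.0 / (sum(a**k / math.factorial(k) for k in range(c))
                + a**c / (math.factorial(c) * (1 - rho)))
    erlang_c = a**c / (math.factorial(c) * (1 - rho)) * p0  # P(wait > 0)
    wq = erlang_c / (c * mu - lam)                          # mean queueing delay
    return wq + 1.0 / mu                                    # plus mean service

pooled = mmc_cycle_time(lam=8.0, mu=5.0, c=2)  # consolidated server pool
split = mmc_cycle_time(lam=4.0, mu=5.0, c=1)   # each specialist works alone
print(f"pooled: {pooled:.3f}, split: {split:.3f}")
```

With these hypothetical rates, the pooled configuration yields a cycle time of about 0.56 time units against 1.0 for the split one, mirroring the consolidation-versus-division trade-off discussed above.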
Abstract:
In recent years, wireless communication infrastructures have been widely deployed for both personal and business applications. IEEE 802.11 series Wireless Local Area Network (WLAN) standards attract a lot of attention due to their low cost and high data rates. Wireless ad hoc networks that use IEEE 802.11 standards are one of the hot spots of recent network research, and designing appropriate Media Access Control (MAC) layer protocols is one of the key issues for them.
Existing wireless applications typically use omni-directional antennas, whose gain is the same in all directions. Due to the nature of the Distributed Coordination Function (DCF) mechanism of the IEEE 802.11 standards, only one of the one-hop neighbors can send data at a time; nodes other than the sender and the receiver must be in an idle or listening state, otherwise collisions can occur. The downside of omni-directionality is that the spatial reuse ratio is low and the capacity of the network is considerably limited.
Directional antennas have therefore been introduced to improve spatial reuse. A directional antenna has the following benefits: it can improve transport capacity by decreasing the interference of a directional main lobe; it can increase coverage range due to a higher SINR (Signal-to-Interference-plus-Noise Ratio), i.e., with the same power consumption, better connectivity can be achieved; and power usage can be reduced, i.e., for the same coverage, a transmitter can lower its power consumption.
To exploit the advantages of directional antennas, we propose a relay-enabled MAC protocol. Two relay nodes are chosen to forward data when the channel condition of the direct link from the sender to the receiver is poor. The two relay nodes can transfer data at the same time, and pipelined data transmission can be achieved by using directional antennas. Throughput can be improved significantly by introducing the relay-enabled MAC protocol.
Besides these strong points, directional antennas also have some drawbacks, such as the hidden terminal and deafness problems and the requirement of maintaining location information for each node. Therefore, an omni-directional antenna should be used in some situations. The combined use of omni-directional and directional antennas leads to the problem of configuring heterogeneous antennas, i.e., given a network topology and a traffic pattern, we need to find a tradeoff between using omni-directional and directional antennas to obtain better network performance.
Directly and mathematically establishing the relationship between network performance and antenna configuration is extremely difficult, if not intractable. Therefore, in this research we propose several clustering-based methods to obtain approximate solutions to the heterogeneous antenna configuration problem, which can improve network performance significantly.
Our proposed methods consist of two steps. The first step (clustering links) clusters the links into groups based on a matrix-based system model; after clustering, links in the same group have similar neighborhood nodes and will use the same type of antenna. The second step (labeling links) decides the type of antenna for each group: some groups of links will use directional antennas and others will adopt omni-directional antennas. Experiments comparing the proposed methods with existing methods demonstrate that our clustering-based methods can improve network performance significantly.
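The two-step procedure (cluster links by neighborhood similarity, then label each cluster with an antenna type) can be sketched as follows. The topology, the Jaccard-similarity threshold, and the labeling rule are hypothetical illustrations, not the matrix-based system model of the dissertation:

```python
# Step 1: greedily cluster links whose neighbour sets are similar.
# Step 2: label each cluster with an antenna type.
# Topology, threshold, and labeling rule are hypothetical illustrations.

def jaccard(a, b):
    """Similarity of two neighbour sets."""
    return len(a & b) / len(a | b)

def cluster_links(link_neighbors, threshold=0.5):
    """Each link joins the first cluster whose representative
    neighbourhood is similar enough, else it starts its own cluster."""
    clusters = []  # list of (representative neighbour set, [link ids])
    for link, nbrs in link_neighbors.items():
        for rep, members in clusters:
            if jaccard(nbrs, rep) >= threshold:
                members.append(link)
                break
        else:
            clusters.append((nbrs, [link]))
    return clusters

# Links keyed by id, each with the set of nodes in its neighbourhood
links = {"A": {1, 2, 3}, "B": {1, 2, 4}, "C": {7, 8, 9}}

for rep, members in cluster_links(links):
    # Step 2 (labeling): crowded groups get directional antennas to raise
    # spatial reuse; isolated links keep omni-directional antennas.
    antenna = "directional" if len(members) >= 2 else "omni"
    print(members, "->", antenna)
```

Here links A and B share most of their neighbourhood and are grouped together and labeled directional, while the isolated link C keeps an omni-directional antenna.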
Abstract:
This study explores how great powers not allied with the United States formulate their grand strategies in a unipolar international system. Specifically, it analyzes the strategies China and Russia have developed to deal with U.S. hegemony by examining how Moscow and Beijing have responded to American intervention in Central Asia. The study argues that China and Russia have adopted a soft balancing strategy to indirectly balance the United States at the regional level. This strategy uses normative capabilities such as soft power, alternative institutions and regionalization to offset the overwhelming material hardware of the hegemon. The theoretical and methodological approach of this dissertation is neoclassical realism. Chinese and Russian balancing efforts against the United States are based on their domestic dynamics as well as systemic constraints. Neoclassical realism provides a bridge between the internal characteristics of states and the environment in which those states are situated. Because China and Russia do not have the hardware (military or economic power) to directly challenge the United States, they must resort to their software (soft power and norms) to indirectly counter American preferences and set the agenda to obtain their own interests. Neoclassical realism maintains that soft power is an extension of hard power and a reflection of the internal makeup of states. The dissertation uses the heuristic case study method to demonstrate the efficacy of soft balancing. Such case studies help to facilitate theory construction and are not necessarily the demonstrable final say on how states behave in given contexts. Nevertheless, the study finds that China and Russia have increased their soft power to counterbalance the United States in certain regions of the world, Central Asia in particular. The conclusion explains how soft balancing can be integrated into the overall balance-of-power framework to explain Chinese and Russian responses to U.S. hegemony.
It also suggests that an analysis of norms and soft power should be integrated into the study of grand strategy, including both foreign policy and military doctrine.