955 results for Load flow, Optimization of power systems
Abstract:
A radiative equation of the Cattaneo–Vernotte type is derived from information theory and the radiative transfer equation. The equation thus derived is a radiative analog of the equation used to describe hyperbolic heat conduction. It is shown, without recourse to any phenomenological assumption, that radiative transfer may be included in a natural way in the framework of extended irreversible thermodynamics.
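For context, the hyperbolic heat-conduction equation referred to above is the classical Cattaneo–Vernotte law, which adds a relaxation term to Fourier's law (the symbols below are the standard ones, not notation taken from the paper):

```latex
% Cattaneo–Vernotte law: relaxation time \tau added to Fourier's law
\tau \frac{\partial \mathbf{q}}{\partial t} + \mathbf{q} = -\lambda \nabla T
% Combined with energy conservation, \rho c \,\partial T/\partial t = -\nabla \cdot \mathbf{q},
% it yields the hyperbolic (telegraph) equation for the temperature field:
\tau \frac{\partial^2 T}{\partial t^2} + \frac{\partial T}{\partial t} = \frac{\lambda}{\rho c}\,\nabla^2 T
```

Unlike the parabolic Fourier case, this equation propagates thermal disturbances at a finite speed, which is what makes a radiative analog of it physically interesting.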
Abstract:
Dynamic Nuclear Polarization (DNP) is an emerging technique that could revolutionize the NMR study of small molecules at very low concentrations through the increase in sensitivity that results from the transfer of polarization between electronic and nuclear spins. Although the underlying physics has been known for a long time, in the last few years there has been a lot of excitement in the chemistry and biology NMR communities, caused by the demonstration that highly polarized nuclei prepared in the solid state at very low temperatures (1-2 K) can be rapidly transferred to liquid samples at room temperature and studied in solution by conventional NMR techniques. In favorable cases, sensitivity increases of several orders of magnitude have been achieved. The technique is now mature enough that a commercial instrument is available. The efficiency of DNP depends on two crucial aspects: i) the efficiency of the nuclear polarization process and ii) the efficiency of the transfer from the initial solid state to the fluid state in which NMR is measured. The preferred areas of application (iii) will be dictated by situations in which the low concentration of the sample or its intrinsically low receptivity is the limiting factor.
Abstract:
The differential diagnosis of urinary incontinence classes is sometimes difficult to establish. As a rule, only the results of urodynamic testing allow an accurate diagnosis. However, this exam is not always feasible, because it requires special equipment as well as trained personnel to conduct and interpret it. Some expert systems have been developed to assist health professionals in this field. The aims of this paper are therefore to present the definition of Artificial Intelligence; to explain what Expert Systems and Decision Support Systems are and their application in the field of health; and to discuss some expert systems for the differential diagnosis of urinary incontinence. It is concluded that expert systems may be useful not only for teaching purposes, but also as decision support in daily clinical practice. Despite this, for several reasons, health professionals usually hesitate to use computer expert systems to support their decision-making process.
Abstract:
The topic of this study is to increase theoretical and empirical understanding of the open source business strategy in the embedded systems domain by investigating open source business models, challenges, resources, and operational and dynamic capabilities.
Abstract:
One of the key emphases of these three essays is to provide practical managerial insight. However, good practical insight can only be created by grounding it firmly in theoretical and empirical research. Practical experience-based understanding without theoretical grounding remains tacit and cannot be easily disseminated; theoretical understanding without links to real life remains sterile. My studies aim to increase the understanding of how radical innovation can be generated at large established firms and how it can affect business performance, as most businesses pursue innovation with one prime objective: value creation. My studies focus on large established firms with sales revenue exceeding USD 1 billion. Usually large established firms cannot rely on informal ways of management, as these firms tend to be multinational businesses operating with subsidiaries, offices, or production facilities in more than one country.

I. Internal and External Determinants of Corporate Venture Capital Investment

The goal of this chapter is to focus on corporate venture capital (CVC) as one of the mechanisms available for established firms to source new ideas that can be exploited. We explore the internal and external determinants under which established firms engage in CVC to source new knowledge through investment in startups. We attempt to make scholars and managers aware of the forces that influence CVC activity by providing findings and insights to facilitate the strategic management of CVC. There are research opportunities to further understand the CVC phenomenon. Why do companies engage in CVC? What motivates them to continue "playing the game" and keep their active CVC investment status? The study examines CVC investment activity and the importance of understanding the influential factors that make a firm decide to engage in CVC. The main question is: How do established firms' CVC programs adapt to changing internal conditions and external environments?
Adaptation typically involves learning from exploratory endeavors, which enable companies to transform the ways they compete (Guth & Ginsberg, 1990). Our study extends the current stream of research on CVC. It aims to contribute to the literature by providing an extensive comparison of internal and external determinants leading to CVC investment activity. To our knowledge, this is the first study to examine the influence of internal and external determinants on CVC activity throughout specific expansion and contraction periods determined by structural breaks occurring between 1985 and 2008. Our econometric analysis indicates a strong and significant positive association between CVC activity and R&D, cash flow availability, and environmental financial market conditions, as well as a significant negative association between sales growth and the decision to engage in CVC. The analysis reveals that CVC investment is highly volatile, as demonstrated by dramatic fluctuations in CVC investment activity over the past decades. When analyzing the overall cyclical CVC period from 1985 to 2008, the results of our study suggest that CVC activity follows a pattern influenced by financial factors such as the level of R&D, free cash flow, and lack of sales growth, and by external conditions of the economy, with the NASDAQ price index as the most significant variable influencing CVC during this period.

II. Contribution of CVC and its Interaction with R&D to Value Creation

The second essay takes into account the demands of corporate executives and shareholders regarding business performance and value creation justifications for investments in innovation. Billions of dollars are invested in CVC and R&D. However, there is little evidence that CVC and its interaction with R&D create value. Firms operating in dynamic business sectors seek to innovate to create the value demanded by changing market conditions, consumer preferences, and competitive offerings.
Consequently, firms operating in such business sectors put a premium on finding new, sustainable, and competitive value propositions. CVC and R&D can help them in this challenge. Dushnitsky and Lenox (2006) presented evidence that CVC investment is associated with value creation. However, studies have shown that the most innovative firms do not necessarily benefit from innovation. For instance, Oyon (2007) indicated that between 1995 and 2005 the most innovative automotive companies did not obtain adequate rewards for shareholders. The interaction between CVC and R&D has generated much debate in the CVC literature. Some researchers see them as substitutes, suggesting that firms have to choose between CVC and R&D (Hellmann, 2002), while others expect them to be complementary (Chesbrough & Tucci, 2004). This essay examines the impact of CVC, and its interaction with R&D, on value creation over sixteen years across six business sectors and different geographical regions. Our findings suggest that the effect of CVC and its interaction with R&D on value creation is positive and significant. In dynamic business sectors, technologies rapidly become obsolete; consequently, firms operating in such sectors need to continuously develop new sources of value creation (Eisenhardt & Martin, 2000; Qualls, Olshavsky, & Michaels, 1981). We conclude that in order to affect value creation, firms operating in business sectors such as Engineering & Business Services and Information & Communication Technology ought to consider CVC as a vital element of their innovation strategy. Moreover, regarding the CVC and R&D interaction effect, our findings suggest that R&D and CVC are complementary for value creation; hence, firms in certain business sectors can be better off supporting both R&D and CVC simultaneously to increase the probability of creating value.

III. MCS and Organizational Structures for Radical Innovation

Incremental innovation is necessary for continuous improvement, but it does not provide a sustainable permanent source of competitiveness (Cooper, 2003). On the other hand, radical innovation pursuing new technologies and new market frontiers can generate new platforms for growth, providing firms with competitive advantages and high economic rents (Duchesneau et al., 1979; Markides & Geroski, 2005; O'Connor & DeMartino, 2006; Utterback, 1994). Interestingly, not all companies distinguish between incremental and radical innovation, and more importantly, firms that manage innovation through a one-size-fits-all process can almost guarantee a sub-optimization of certain systems and resources (Davila et al., 2006). Moreover, we conducted research on the utilization of management control systems (MCS) along with radical innovation and flexible organizational structures, as these have been associated with firm growth (Cooper, 2003; Davila & Foster, 2005, 2007; Markides & Geroski, 2005; O'Connor & DeMartino, 2006). Davila et al. (2009) identified research opportunities for innovation management and provided a list of pending issues: How do companies manage the process of radical and incremental innovation? What are the performance measures companies use to manage radical ideas, and how do they select them? The fundamental objective of this paper is to address the following research question: What are the processes, MCS, and organizational structures for generating radical innovation? Moreover, in recent years, research on innovation management has been conducted mainly at either the firm level (Birkinshaw, Hamel, & Mol, 2008a) or at the project level, examining appropriate management techniques associated with high levels of uncertainty (Burgelman & Sayles, 1988; Dougherty & Heller, 1994; Jelinek & Schoonhoven, 1993; Kanter, North, Bernstein, & Williamson, 1990; Leifer et al., 2000).
Therefore, we embarked on a novel process-related research framework to observe the process stages, MCS, and organizational structures that can generate radical innovation. This article is based on a case study at Alcan Engineered Products, a division of a multinational provider of lightweight material solutions. Our observations suggest that incremental and radical innovation should be managed through different processes, MCS, and organizational structures, which ought to be activated and adapted contingent on the type of innovation being pursued (i.e., incremental or radical). More importantly, we conclude that radical innovation can be generated in a systematic way through enablers such as processes, MCS, and organizational structures. This is in line with the findings of Jelinek and Schoonhoven (1993) and Davila et al. (2006; 2007), who show that innovative firms have institutionalized mechanisms, arguing that radical innovation cannot occur in an organic environment where flexibility and consensus are the main managerial mechanisms. Rather, they argue that radical innovation requires a clear organizational structure and formal MCS.
Abstract:
Secondary accident statistics can be useful for studying the impact of traffic incident management strategies. An easy-to-implement methodology is presented for classifying secondary accidents using data fusion of a police accident database with intranet incident reports. A current method for classifying secondary accidents uses a static threshold that represents the spatial and temporal region of influence of the primary accident, such as two miles and one hour. An accident is considered secondary if it occurs upstream from the primary accident and is within the duration and queue of the primary accident. However, using the static threshold may result in both false positives and negatives because accident queues are constantly varying. The methodology presented in this report seeks to improve upon this existing method by making the threshold dynamic. An incident progression curve is used to mark the end of the queue throughout the entire incident. Four steps in the development of incident progression curves are described. Step one is the processing of intranet incident reports. Step two is the filling in of incomplete incident reports. Step three is the nonlinear regression of incident progression curves. Step four is the merging of individual incident progression curves into one master curve. To illustrate this methodology, 5,514 accidents from Missouri freeways were analyzed. The results show that secondary accidents identified by dynamic versus static thresholds can differ by more than 30%.
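The dynamic-threshold classification described above can be sketched in code. Everything below is illustrative: the piecewise-linear incident progression curve, the 3-mile peak, and the timing constants are hypothetical stand-ins for the fitted nonlinear regression curves, not values from the study.

```python
from dataclasses import dataclass

@dataclass
class Accident:
    time_min: float        # minutes after the primary accident began
    miles_upstream: float  # distance upstream of the primary accident

def is_secondary_static(a, max_miles=2.0, max_minutes=60.0):
    """Static threshold: fixed 2-mile / 1-hour region of influence."""
    return 0 <= a.time_min <= max_minutes and 0 < a.miles_upstream <= max_miles

def queue_extent(t_min, peak_miles=3.0, growth_end=30.0, clear_end=90.0):
    """Hypothetical incident progression curve: the queue grows linearly
    to a peak, then dissipates linearly until the incident clears."""
    if t_min < 0 or t_min > clear_end:
        return 0.0
    if t_min <= growth_end:
        return peak_miles * t_min / growth_end
    return peak_miles * (clear_end - t_min) / (clear_end - growth_end)

def is_secondary_dynamic(a):
    """Dynamic threshold: secondary if the accident occurs within the
    queue extent at the moment it happens."""
    return 0 < a.miles_upstream <= queue_extent(a.time_min)
```

With these illustrative numbers, an accident 2.5 miles upstream at minute 25 is missed by the static 2-mile/1-hour rule (a false negative) but captured by the dynamic curve, while an accident 1.95 miles upstream at minute 55, behind an already-shrinking queue, shows the opposite (a static false positive).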
Abstract:
We address the problem of scheduling a multiclass $M/M/m$ queue with Bernoulli feedback on $m$ parallel servers to minimize time-average linear holding costs. We analyze the performance of a heuristic priority-index rule, which extends Klimov's optimal solution to the single-server case: servers preemptively select customers with larger Klimov indices. We present closed-form suboptimality bounds (approximate optimality) for Klimov's rule, which imply that its suboptimality gap is uniformly bounded above with respect to (i) external arrival rates, as long as they stay within system capacity; and (ii) the number of servers. It follows that its relative suboptimality gap vanishes in a heavy-traffic limit, as external arrival rates approach system capacity (heavy-traffic optimality). We obtain simpler expressions for the special no-feedback case, where the heuristic reduces to the classical $c \mu$ rule. Our analysis is based on comparing the expected cost of Klimov's rule to the value of a strong linear programming (LP) relaxation of the system's region of achievable performance of mean queue lengths. In order to obtain this relaxation, we derive and exploit a new set of work decomposition laws for the parallel-server system. We further report on the results of a computational study on the quality of the $c \mu$ rule for parallel scheduling.
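In the no-feedback case mentioned above, the heuristic reduces to the $c \mu$ rule: a free server picks, among non-empty classes, the one with the largest product of holding cost and service rate. A minimal sketch (the class counts and the cost/rate numbers in the comment are illustrative, not from the paper):

```python
def cmu_indices(costs, rates):
    """Priority index of each class under the c-mu rule: c_k * mu_k."""
    return [c * m for c, m in zip(costs, rates)]

def next_class_to_serve(queue_lengths, costs, rates):
    """Class a free server should (preemptively) pick: the non-empty
    class with the largest c-mu index, or None if all queues are empty."""
    indices = cmu_indices(costs, rates)
    waiting = [k for k, n in enumerate(queue_lengths) if n > 0]
    if not waiting:
        return None
    return max(waiting, key=lambda k: indices[k])

# Example: holding costs c = (1, 2, 3) and service rates mu = (3, 1, 2)
# give indices (3, 2, 6), so class 2 is served whenever it has customers
# waiting, even though class 1 has the higher holding cost per customer.
```

The rule trades off cost against service speed: a cheap-to-hold but fast-to-serve class can outrank an expensive, slow one.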
Abstract:
Following the introduction of single-metal deposition (SMD), a simplified fingermark detection technique based on multi-metal deposition, optimization studies were conducted. The different parameters of the original formula were tested, and the results were evaluated based on the contrast and overall aspect of the enhanced fingermarks. A new SMD formula was established based on the optimal parameters. Interestingly, it was found that substantial variations from the base parameters did not significantly affect the outcome of the enhancement, demonstrating that SMD is a very robust technique. Finally, a comparison of the optimized SMD with multi-metal deposition (MMD) was carried out on different surfaces. SMD was shown to produce results comparable to MMD, thus validating the technique.
Abstract:
The origins of electoral systems have received scant attention in the literature. Looking at the history of electoral rules in the advanced world over the last century, this paper shows that the existing wide variation in electoral rules across nations can be traced to the strategic decisions that the current ruling parties make, anticipating the coordinating consequences of different electoral regimes, to maximize their representation under the following conditions. On the one hand, as long as the electoral arena does not change substantially and the current electoral regime serves the ruling parties well, the latter have no incentive to modify the electoral regime. On the other hand, as soon as the electoral arena changes (due to the entry of new voters or a change in their preferences), the ruling parties will entertain changing the electoral system, depending on two main conditions: the emergence of new parties and the coordinating capacities of the old ruling parties. Accordingly, if the new parties are strong, the old parties shift from plurality/majority rules to proportional representation (PR) only if they are locked into a 'non-Duvergerian' equilibrium, i.e. if no old party enjoys a dominant position (the case of most small European states); conversely, they do not if a Duvergerian equilibrium exists (the case of Great Britain). Similarly, whenever the new entrants are weak, a non-PR system is maintained, regardless of the structure of the old party system (the case of the USA). The paper also discusses the role of trade and of ethnic and religious heterogeneity in the adoption of PR rules.
Abstract:
The old, understudied electoral system composed of multi-member districts, open ballot, and plurality rule is presented as the earliest setting for the origin of both political parties and new electoral systems. A survey of the uses of this set of electoral rules in different parts of the world, during both remote and recent periods, shows its widespread use. A model of voting under this electoral system demonstrates that, while it can produce varied and pluralistic representation, it also provides incentives to form factional or partisan candidacies. Famous negative reactions to the emergence of factions and political parties during the 18th and 19th centuries are reinterpreted in this context. Many electoral rules and procedures invented since the second half of the 19th century, including the Australian ballot, single-member districts, limited and cumulative ballots, and proportional representation rules, derived from the search to reduce the originating multi-member district system's tendency to favor a single-party sweep. The general relations between political parties and electoral systems are restated to account for the foundational stage discussed here.
Abstract:
What are the best voting systems in terms of utilitarianism? Or in terms of maximin, or maximax? We study these questions for the case of three alternatives and a class of structurally equivalent voting rules. We show that plurality, arguably the most widely used voting system, performs very poorly in terms of prominent ideals of justice, such as utilitarianism or maximin, and yet is optimal in terms of maximax. Utilitarianism is best approached by a voting system converging to the Borda count, while the best way to achieve maximin is by means of a voting system converging to negative voting. We study the robustness of our results across different social cultures, measures of performance, and population sizes.
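The three rules compared above are all scoring rules for three alternatives: plurality scores a ballot (1, 0, 0), the Borda count (1, 1/2, 0) after normalization, and negative voting (1, 1, 0). A small Monte Carlo sketch under an impartial-culture assumption (i.i.d. uniform utilities, an assumption of this illustration rather than the paper's exact setup) measures each rule's average utilitarian welfare:

```python
import random

def simulate(n_voters=100, n_trials=2000, seed=0):
    """Average per-voter utility of the winner under three scoring rules
    for three alternatives, with i.i.d. uniform(0, 1) voter utilities."""
    rng = random.Random(seed)
    # Scoring vectors applied to each voter's ranking (best, middle, worst).
    rules = {"plurality": (1.0, 0.0, 0.0),
             "borda": (1.0, 0.5, 0.0),
             "negative": (1.0, 1.0, 0.0)}
    welfare = {rule: 0.0 for rule in rules}
    for _ in range(n_trials):
        utils = [[rng.random() for _ in range(3)] for _ in range(n_voters)]
        total = [sum(u[a] for u in utils) for a in range(3)]  # utilitarian sums
        for rule, vec in rules.items():
            scores = [0.0, 0.0, 0.0]
            for u in utils:
                ranking = sorted(range(3), key=lambda a: -u[a])
                for pos, alt in enumerate(ranking):
                    scores[alt] += vec[pos]
            winner = max(range(3), key=lambda a: scores[a])
            welfare[rule] += total[winner] / n_voters
    return {rule: w / n_trials for rule, w in welfare.items()}
```

Under this setup one typically observes the Borda count attaining the highest average welfare and plurality the lowest, consistent with the abstract's conclusion about utilitarianism; swapping the welfare measure for the minimum or maximum voter utility would probe the maximin and maximax claims the same way.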
Abstract:
We address the problem of scheduling a multi-station multiclass queueing network (MQNET) with server changeover times to minimize steady-state mean job holding costs. We present new lower bounds on the best achievable cost that emerge as the values of mathematical programming problems (linear, semidefinite, and convex) over relaxed formulations of the system's achievable performance region. The constraints on achievable performance defining these formulations are obtained by formulating the system's equilibrium relations. Our contributions include: (1) a flow conservation interpretation and closed formulae for the constraints previously derived by the potential function method; (2) new work decomposition laws for MQNETs; (3) new constraints (linear, convex, and semidefinite) on the performance region of first and second moments of queue lengths for MQNETs; (4) a fast bound for an MQNET with N customer classes computed in N steps; (5) two heuristic scheduling policies: a priority-index policy, and a policy extracted from the solution of a linear programming relaxation.
Abstract:
Crushed seeds of the Moringa oleifera tree have been used traditionally as natural flocculants to clarify drinking water. We previously showed that one of the seed peptides mediates both the sedimentation of suspended particles, such as bacterial cells, and a direct bactericidal activity, raising the possibility that the two activities might be related. In this study, conformational modeling of the peptide was coupled to a functional analysis of synthetic derivatives. This indicated that partly overlapping structural determinants mediate the sedimentation and antibacterial activities. Sedimentation requires a positively charged, glutamine-rich portion of the peptide that aggregates bacterial cells. The bactericidal activity was localized to a sequence prone to form a helix-loop-helix structural motif. Amino acid substitution showed that the bactericidal activity requires hydrophobic proline residues within the protruding loop. Vital dye staining indicated that treatment with peptides containing this motif results in bacterial membrane damage. Assembly of multiple copies of this structural motif into a branched peptide enhanced antibacterial activity: low concentrations effectively killed bacteria such as Pseudomonas aeruginosa and Streptococcus pyogenes without displaying a toxic effect on human red blood cells. This study thus identifies a synthetic peptide with potent antibacterial activity against specific human pathogens. It also suggests partly distinct molecular mechanisms for each activity: sedimentation may result from coupled flocculation and coagulation effects, while the bactericidal activity would require bacterial membrane destabilization by a hydrophobic loop.