877 results for Case Base Reasoning
Abstract:
Silicon Carbide Bipolar Junction Transistors require a continuous base current in the on-state. This base current is usually kept constant, sized for the maximum collector current and maximum junction temperature foreseen in a given application. In this paper, a discretized proportional base driver is proposed which, for the right application, reduces the steady-state power consumption of the base driver. The operation of the proposed base driver has been verified experimentally, driving a 1200-V/40-A SiC BJT in a dc-dc boost converter. In order to determine the potential reduction of the power consumption of the base driver, a case with a dc-dc converter in an ideal electric vehicle driving the new European drive cycle has been investigated. It is found that the steady-state power consumption of the base driver can be reduced by approximately 63%. The total reduction of the driver consumption is 2816 J during the drive cycle, which is slightly more than the total on-state losses for the SiC BJTs used in the converter. © 2013 IEEE.
Abstract:
Silicon carbide (SiC) bipolar junction transistors (BJTs) require a continuous base current in the on-state. This base current is usually kept constant, sized for the maximum collector current and maximum junction temperature foreseen in a given application. In this paper, a discretized proportional base driver is proposed which, for the right application, reduces the steady-state power consumption of the base driver. The operation of the proposed base driver has been verified experimentally, driving a 1200-V/40-A SiC BJT in a dc-dc boost converter. In order to determine the potential reduction of the power consumption of the base driver, a case with a dc-dc converter in an ideal electric vehicle driving the new European drive cycle has been investigated. It is found that the steady-state power consumption of the base driver can be reduced by approximately 60%. The total reduction of the driver consumption is 3459 J during the drive cycle, which is slightly more than the total on-state losses for the SiC BJTs used in the converter. © 2013 IEEE.
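The saving described above comes from scaling the base current with the load instead of sizing it for the worst case; a hedged numerical sketch (the current gain and base-emitter voltage below are illustrative assumptions, not the paper's data):

```python
# Illustrative sketch of proportional base driving (assumed numbers, not
# the paper's data): the base drive power scales with the actual collector
# current instead of the 40 A worst case.
BETA_FORCED = 10   # assumed forced current gain I_C / I_B
V_BE = 3.0         # assumed on-state base-emitter voltage, volts

def base_drive_power(i_collector):
    """Steady-state power spent driving the base for a given collector current."""
    i_base = i_collector / BETA_FORCED
    return V_BE * i_base

p_worst_case = base_drive_power(40.0)  # constant drive sized for 40 A
p_tracking = base_drive_power(15.0)    # stepped drive at a 15 A operating point
print(p_worst_case, p_tracking)        # 12.0 4.5
```

Over a drive cycle spent mostly at partial load, this per-operating-point difference is what accumulates into the reported energy saving.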
Abstract:
The behavior of double proton transfer (DPT) in a representative glycinamide-formamidine complex has been investigated at the B3LYP/6-311++G** level of theory. Computational results suggest that the participation of a formamidine molecule facilitates the proton transfer (PT) of glycinamide compared with the unassisted case. The DPT process proceeds by a concerted mechanism rather than a stepwise one, since no zwitterionic complexes were located along the DPT pathway. The barrier heights are 14.4 and 3.9 kcal/mol for the forward and reverse directions, respectively; with the inclusion of zero-point vibrational energy (ZPVE) corrections, they are reduced by 3.1 and 2.9 kcal/mol to 11.3 and 1.0 kcal/mol, where the low reverse barrier implies that the reverse reaction should proceed easily at any temperature of biological importance. Additionally, the one-electron oxidation of the doubly H-bonded glycinamide-formamidine complex has also been investigated. The oxidized product is characterized as a distonic radical cation: one-electron oxidation takes place on the glycinamide fragment, and a proton is transferred spontaneously from the glycinamide to the formamidine fragment. As a result, the vertical and adiabatic ionization potentials of the neutral doubly H-bonded complex are determined to be about 8.46 and 7.73 eV, respectively, both reduced by about 0.79 and 0.87 eV relative to those of isolated glycinamide owing to the formation of the intermolecular H-bond with formamidine. Finally, the differences between the model system and the adenine-thymine base pair are discussed briefly.
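The corrected barriers quoted above follow by simple subtraction of the ZPVE terms; as a quick check (values in kcal/mol, taken from the abstract):

```python
# ZPVE corrections lower the electronic barrier heights by subtraction
# (all values in kcal/mol, from the abstract above).
forward, reverse = 14.4, 3.9    # uncorrected forward/reverse barriers
zpve_fwd, zpve_rev = 3.1, 2.9   # zero-point vibrational energy corrections

corrected_fwd = round(forward - zpve_fwd, 1)
corrected_rev = round(reverse - zpve_rev, 1)
print(corrected_fwd, corrected_rev)  # 11.3 1.0
```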
Abstract:
A 2.5-D and 3-D multi-fold GPR survey was carried out in the Archaeological Park of Aquileia (northern Italy). The primary objective of the study was the identification of targets of potential archaeological interest in an area designated by the local archaeological authorities. The second, geophysical objective was to test 2-D and 3-D multi-fold methods and to study localised targets of unknown shape and dimensions in hostile soil conditions. Several portions of the acquisition grid were processed in common-offset (CO), common-shot (CSG) and common-midpoint (CMP) geometry. An 8×8 m area was studied with orthogonal CMPs, thus achieving 3-D subsurface coverage with an azimuthal range limited to two normal components. Coherent noise components were identified in the pre-stack domain and removed by means of FK filtering of the CMP records. Stacking velocities were obtained from conventional velocity analysis and azimuthal velocity analysis of the 3-D pre-stack gathers. Two major discontinuities were identified in the study area. The deeper one most probably coincides with the paleosol at the base of the layer associated with human activity in the area over the last 2500 years. This interpretation is in agreement with the results obtained from nearby cores and excavations. The shallower discontinuity is observed in part of the investigated area and shows local interruptions with a linear distribution on the grid. Such interruptions may correspond to buried targets of archaeological interest. The prominent enhancement of the subsurface images obtained by means of multi-fold techniques, compared with the relatively poor quality of the conventional single-fold georadar sections, indicates that multi-fold methods are well suited to high-resolution studies in archaeology.
Abstract:
How can we ensure that knowledge embedded in a program is applied effectively? Traditionally the answer to this question has been sought in different problem-solving paradigms and in different approaches to encoding and indexing knowledge. Each of these is useful for a certain variety of problem, but they all share a common weakness: they become ineffective in the face of a sufficiently large knowledge base. How, then, can we make it possible for a system to continue to function in the face of a very large number of plausibly useful chunks of knowledge? In response to this question we propose a framework for viewing issues of knowledge indexing and retrieval, a framework that includes what appears to be a useful perspective on the concept of a strategy. We view strategies as a means of controlling invocation in situations where traditional selection mechanisms become ineffective. We examine ways to effect such control, and describe meta-rules, a means of specifying strategies which offers a number of advantages. We consider at some length how and when it is useful to reason about control, and explore the advantages meta-rules offer for doing this.
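The idea of meta-rules controlling invocation can be sketched compactly; a minimal Python illustration (an invented toy example, not the paper's system), in which object-level rules are selected by ordinary matching and a meta-rule then reorders the candidates to encode a strategy:

```python
# A toy sketch of meta-rules: object-level rules are gathered by matching,
# then meta-rules prune or reorder the agenda before invocation.

# Object-level rules: (name, set of required facts, conclusion).
rules = [
    ("r1", {"fever"}, "suspect infection"),
    ("r2", {"rash"}, "suspect allergy"),
    ("r3", {"fever", "rash"}, "suspect measles"),
]

def prefer_specific(facts, candidates):
    """Meta-rule encoding one strategy: invoke rules that mention more
    of the current facts (more specific rules) first."""
    return sorted(candidates, key=lambda r: len(r[1]), reverse=True)

def select(facts, rules, meta_rules):
    """Collect applicable rules, then let each meta-rule reshape the agenda."""
    candidates = [r for r in rules if r[1] <= facts]  # subset test = rule applies
    for meta in meta_rules:
        candidates = meta(facts, candidates)
    return [name for name, _, _ in candidates]

print(select({"fever", "rash"}, rules, [prefer_specific]))
# the most specific rule, r3, is tried before r1 and r2
```

The point of the separation is that the strategy (here `prefer_specific`) is itself declarative knowledge that can be inspected and reasoned about, rather than being buried in the interpreter's selection mechanism.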
Abstract:
This report investigates some techniques appropriate to representing the knowledge necessary for understanding a class of electronic machines -- radio receivers. A computational performance model -- WATSON -- is presented. WATSON's task is to isolate failures in radio receivers whose principles of operation have been appropriately described in his knowledge base. The thesis of the report is that hierarchically organized representational structures are essential to the understanding of complex mechanisms. Such structures lead not only to descriptions of machine operation at many levels of detail, but also offer a powerful means of organizing "specialist" knowledge for the repair of machines when they are broken.
Abstract:
Garrett S.M. and Lee M.H., A Case-Based Approach to Black-Box Control Learning, Proc. Int. Conf. on Computational Intelligence for Modelling, Control and Automation (CIMCA'99), 17-19 Feb. 1999, Vienna.
Abstract:
M.H. Lee, On Models, Modelling and the Distinctive Nature of Model-Based Reasoning, AI Communications, 12 (3), pp. 127-137, 1999.
Abstract:
Doctoral thesis presented to Universidade Fernando Pessoa in partial fulfilment of the requirements for the degree of Doctor in Earth Sciences.
Abstract:
A notable feature of the surveillance case law of the European Court of Human Rights has been the tendency of the Court to focus on the “in accordance with the law” aspect of the Article 8 ECHR inquiry. This focus has been the subject of some criticism, but the impact of this approach on the manner in which domestic surveillance legislation has been formulated in the Party States has received little scholarly attention. This thesis addresses that gap in the literature through its consideration of the Interception of Postal Packets and Telecommunications Messages (Regulation) Act, 1993 and the Criminal Justice (Surveillance) Act, 2009. While both Acts provide several of the safeguards endorsed by the European Court of Human Rights, this thesis finds that they suffer from a number of crucial weaknesses that undermine the protection of privacy. This thesis demonstrates how the focus of the European Court of Human Rights on the “in accordance with the law” test has resulted in some positive legislative change. Notwithstanding this fact, it is maintained that the legality approach has gained prominence at the expense of a full consideration of the “necessary in a democratic society” inquiry. This has resulted in superficial legislative responses at the domestic level, including from the Irish government. Notably, through the examination of a number of more recent cases, this project discerns a significant alteration in the interpretive approach adopted by the European Court of Human Rights regarding the application of the necessity test. The implications of this development are considered and the outlook for Irish surveillance legislation is assessed.
Abstract:
In decision-making problems where we need to choose a particular decision or alternative from a set of possible choices, we often have preferences that determine whether we prefer one decision over another. When these preferences give us a complete ordering on the decisions, it is easy to choose the best decision, or one of the best. However, it often happens that the preference relation is only a partial order, and there is no single best decision. In this thesis, we look at what happens when we have such a partial order over a set of decisions, in particular when we have multiple orderings on a set of decisions, and we present a framework for qualitative decision making. We look at the different natural notions of optimal decision that arise in this framework, which give us different optimality classes, and we examine the relationships between these classes. We then look in particular at a qualitative preference relation called Sorted-Pareto dominance, an extension of Pareto dominance, and we give a semantics for this relation as one that is compatible with any order-preserving mapping of an ordinal preference scale to a numerical one. We apply Sorted-Pareto dominance in a soft constraints setting, where we solve problems in which the soft constraints associate qualitative preferences with decisions in a decision problem. We also examine the Sorted-Pareto dominance relation in the context of our qualitative decision-making framework, looking at the relevant optimality classes for the Sorted-Pareto case; this gives us classes of decisions that are necessarily optimal, and classes that are optimal for some choice of mapping from the ordinal scale to a quantitative one. We provide some empirical analysis of Sorted-Pareto constraint problems and examine the optimality classes that result.
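Weak Sorted-Pareto dominance as described above can be sketched directly: sort both vectors and compare componentwise. A minimal Python illustration for cost vectors on an ordinal scale where smaller is better (the example vectors are invented):

```python
# A minimal sketch of (weak) Sorted-Pareto dominance for cost vectors
# (smaller is better): sort both vectors, then apply ordinary Pareto
# dominance componentwise to the sorted copies.
def sorted_pareto_dominates(u, v):
    """True if u weakly Sorted-Pareto dominates v."""
    return all(a <= b for a, b in zip(sorted(u), sorted(v)))

# Vectors that are Pareto-incomparable can become comparable after
# sorting, since only the multiset of grades matters:
u, v = (1, 3), (3, 2)
print(sorted_pareto_dominates(u, v))  # True: [1, 3] <= [2, 3] componentwise
print(sorted_pareto_dominates(v, u))  # False
```

This matches the intended semantics in the sense that a sum of costs under any order-preserving remapping of the scale cannot favour the dominated vector.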
Abstract:
Antifungal compounds produced by lactic acid bacteria (LAB) can be a natural and reliable alternative for reducing pre- and post-harvest fungal infections, with a multitude of additional advantages for cereal-based products. Toxigenic and spoilage fungi are responsible for numerous diseases and economic losses. This thesis includes an overview of the impact fungi have on aspects of the cereal food chain. The applicability of LAB in plant protection and the cereal industry is discussed in detail. Specific case studies include Fusarium head blight and the impact of fungi on the malting and baking industries. The impact of Fusarium culmorum-infected raw barley on final malt quality was part of the investigation. In vitro infected barley grains were fully characterized. The study showed that the germinative energy of infected barley grains decreased by 45% and that the grains accumulated 199 μg/kg of deoxynivalenol (DON). The barley grains were subsequently malted and fully characterized. Fungal biomass increased during all stages of malting. Infected malt accumulated 8 times its DON concentration during malting. Infected malt grains revealed extreme structural changes due to the proteolytic, (hemi-)cellulolytic and starch-degrading activity of the fungi, which led to increased friability and fragmentation. Infected grains also had higher protease and β-glucanase activities, lower amylase activity, a greater proportion of free amino and soluble nitrogen, and a lower β-glucan content. Malt loss was over 27% higher in infected malt compared to the control. The protein compositional changes and the respective enzymatic activity of infected barley and malt were characterized using a wide range of methods. F. culmorum-infected barley grains showed an increase in proteolytic activity and protein extractability. Several metabolic proteins decreased and increased at different rates during infection and malting, revealing a complex interdependence with F. culmorum infection.
In vitro F. culmorum-infected malt was used to produce lager beer in order to investigate the changes caused by the fungi during the brewing process and their effect on beer quality attributes. It was found that the wort containing infected malt had a lower pH, higher FAN, higher β-glucan and a 45% increase in the purging rate, and led to premature yeast flocculation. The beer produced with infected malt (IB) also had a significantly different amino acid profile. Flavour characterization of IB revealed higher concentrations of esters, fusel alcohols, fatty acids, ketones, dimethylsulfide and, in particular, acetaldehyde when compared to the control. IB had a greater proportion of Strecker aldehydes and Maillard products, contributing to an increased beer-staling character. IB had a 67% darker colour, with a trend towards better foam stability. It was also found that 78% of the deoxynivalenol accumulated in the malt was transferred into the beer. A LAB cell-free supernatant (cfs), produced in a wort-based substrate, was investigated for its ability to inhibit Fusarium growth during malting. Wort proved a suitable substrate for LAB exhibiting antifungal activity. Lactobacillus amylovorus DSM19280 inhibited 10^4 spores/mL for 7 days after 120 h of fermentation, while Lactobacillus reuteri R29 inhibited 10^5 spores/mL for 7 days after 48 h of fermentation. The two LAB cfs had significantly different organic acid profiles. Acid-based antifungal compounds were identified, and phenyllactic, hydroxy-phenyllactic and benzoic acids were present in higher concentrations compared to the control. A 3 °P wort substrate inoculated with L. reuteri R29 (cfs) was applied in malting and successfully reduced Fusarium growth by 23% and the mycotoxin DON by 80%. The resulting malt showed highly modified grains, a lower pH, darker colouration and a higher extract yield.
The implementation of selected antifungal-compound-producing LAB in the malting process can thus successfully reduce mould growth and mycotoxin production.
Abstract:
The pervasive use of mobile technologies has provided new opportunities for organisations to achieve competitive advantage by using a value network of partners to create value for multiple users. The delivery of a mobile payment (m-payment) system is an example of a value network, as it requires the collaboration of multiple partners from diverse industries, each bringing their own expertise, motivations and expectations. Consequently, managing partnerships has been identified as a core competence required by organisations to form viable partnerships in an m-payment value network, and as an important factor in determining the sustainability of an m-payment business model. However, there is evidence that organisations lack this competence; in the m-payment domain it has been cited as a contributing factor in a number of failed m-payment initiatives since 2000. In response to this organisational deficiency, this research project leverages design thinking and visualisation tools to enhance communication and understanding between managers who are responsible for managing partnerships within the m-payment domain. Adopting a design science research approach, which is a problem-solving paradigm, the research builds and evaluates a visualisation tool in the form of a Partnership Management Canvas. In doing so, this study demonstrates that when organisations encourage their managers to adopt design thinking, as a way to balance their analytical and intuitive thinking, communication and understanding between the partners increase. This can lead to a shared understanding and a shared commitment between the partners. In addition, the research identifies a number of key business model design issues that need to be considered by researchers and practitioners when designing an m-payment business model. As an applied research project, the study makes valuable contributions to the knowledge base and to the practice of management.
Abstract:
In many real-world situations, we make decisions in the presence of multiple, often conflicting and non-commensurate objectives. The process of optimizing systematically and simultaneously over a set of objective functions is known as multi-objective optimization. In multi-objective optimization, we have a (possibly exponentially large) set of decisions, and each decision has a set of alternatives. Each alternative depends on the state of the world and is evaluated with respect to a number of criteria. In this thesis, we consider decision-making problems in two scenarios. In the first, the current state of the world, under which the decisions are to be made, is known in advance. In the second, the current state of the world is unknown at the time of making decisions. For decision making under certainty, we consider the framework of multi-objective constraint optimization and focus on extending the solution algorithms to the case where there are additional trade-offs. We focus especially on branch-and-bound algorithms that use a mini-buckets algorithm for generating the upper bound at each node of the search tree (in the context of maximizing values of objectives). Since the guiding upper-bound sets can become very large during the search, we introduce efficient methods for reducing these sets while still maintaining the upper-bound property. We define a formalism for imprecise trade-offs, which allows the decision maker, during the elicitation stage, to specify a preference for one multi-objective utility vector over another; such preferences are then used to infer other preferences. The induced preference relation is then used to eliminate dominated utility vectors during the computation. For testing dominance between multi-objective utility vectors, we present three different approaches. The first is based on linear programming; the second uses a distance-based algorithm (which uses a measure of the distance between a point and a convex cone); the third makes use of matrix multiplication, which results in much faster dominance checks with respect to the preference relation induced by the trade-offs. Furthermore, we show that our trade-offs approach, which is based on a preference inference technique, can also be given an alternative semantics based on the well-known multi-attribute utility theory. Our comprehensive experimental results on common multi-objective constraint optimization benchmarks demonstrate that the proposed enhancements allow the algorithms to scale up to much larger problems than before. For decision-making problems under uncertainty, we describe multi-objective influence diagrams, based on a set of p objectives, where utility values are vectors in R^p and are typically only partially ordered. These can be solved by a variable elimination algorithm, leading to a set of maximal values of expected utility. If the Pareto ordering is used, this set can often be prohibitively large. We consider approximate representations of the Pareto set based on ϵ-coverings, allowing much larger problems to be solved. In addition, we define a method for incorporating user trade-offs, which also greatly improves efficiency.
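The Pareto-maximal set referred to above (for maximized utility vectors) can be sketched in a few lines; a brute-force Python illustration with invented vectors, not the thesis algorithms:

```python
# A brute-force sketch of Pareto dominance and maximal-set filtering for
# utility vectors under maximization.
def dominates(u, v):
    """u Pareto-dominates v: at least as good on every objective,
    strictly better on at least one."""
    return all(a >= b for a, b in zip(u, v)) and u != v

def maximal(vectors):
    """Keep only the vectors that no other vector dominates."""
    return [v for v in vectors if not any(dominates(u, v) for u in vectors)]

points = [(3, 1), (2, 2), (1, 3), (2, 1), (1, 1)]
print(maximal(points))  # [(3, 1), (2, 2), (1, 3)]
```

Even in this tiny example three of five vectors are maximal, which illustrates why the Pareto set can become prohibitively large and why ϵ-coverings or elicited trade-offs are needed to thin it out.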
Abstract:
Consumer demand is revolutionizing the way products are produced, distributed and marketed. In relation to the dairy sector in developing countries, aspects of milk quality are receiving more attention from both society and government. However, milk quality management needs to be better addressed in dairy production systems to guarantee the access of stakeholders, mainly smallholders, to dairy markets. The present study analyses the interaction of the upstream part of the dairy supply chain (farmers and dairies) in the Mantaro Valley (Peruvian central Andes), in order to understand the constraints both stakeholders face in implementing milk quality controls and practices, and to evaluate “ex-ante” how different strategies suggested to improve milk quality could affect farmers' and processors' profits. The analysis is based on three complementary field studies conducted between 2012 and 2013. Our work has shown that the presence of a dual supply chain combining formal and informal markets has a direct impact on dairy production at the technical and organizational levels, affecting small formal dairy processors' ability to implement contracts, including agreements on milk quality standards. The analysis of milk quality management from farms to dairy plants highlighted the poor hygiene in the study area, even though average values of milk composition were usually high. Some husbandry practices evaluated at farm level proved cost-effective and had a large impact on hygienic quality; however, regular application of these practices was limited, since small-scale farmers do not receive a bonus for producing hygienic milk. On the basis of these two results, we co-designed with formal small-scale dairy processors a simulation tool to explore prospective scenarios, in which they could select their best product portfolio and also design milk payment systems that reward farmers with high milk-quality performance.
This type of approach allowed dairy processors to realize the importance of including milk quality management in their collection and manufacturing processes, especially in a context of high competition for milk supply. We concluded that the improvement of milk quality in a smallholder farming context requires a more coordinated effort among stakeholders. Successful implementation of strategies will depend on the willingness of small-scale dairy processors to reward farmers producing high-quality milk, but also on support from the State in providing incentives to stakeholders in the formal sector.