948 results for distributed model
Abstract:
A comparative analysis of the theoretical-experimental study developed by Hsu on the hydration of Amsoy 71 soybean grain was performed through several soaking experiments using CD 202 soybean at 10, 20, 30, 40, and 50 °C, measuring moisture content over time. The results showed that the equilibrium moisture content of CD 202 soybean, Xeq, does not depend on temperature and is 21% higher than that found by Hsu, suggesting that the soybean cultivar exerts a strong influence on Xeq. The Hsu model was solved numerically and its parameters were adjusted by the least squares method, with maximum deviations of ±10% relative to the experimental values. The limiting step in the mass transfer process during hydration corresponds to water diffusion inside the grain, leading to radial moisture gradients that decrease over time and with increasing temperature. Regardless of the soybean cultivar, diffusivity increases as temperature or moisture content increases. However, the values of this transport property for Amsoy 71 were higher than those for CD 202: very close at the beginning of hydration at 20 °C and almost three times higher at the end of hydration at 50 °C.
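To make the fitting step concrete, here is a minimal sketch of least-squares parameter adjustment against soaking data. It uses a simplified lumped first-order hydration model as a stand-in for Hsu's full diffusion model, and the time/moisture values are illustrative placeholders, not the measured CD 202 data:

```python
# Hedged sketch: least-squares fit of a hydration model to soaking data.
# NOTE: X(t) = Xeq - (Xeq - X0) * exp(-k * t) is a simplified lumped model,
# not Hsu's diffusion equation; all data values below are invented.
import numpy as np
from scipy.optimize import curve_fit

def hydration(t, xeq, k, x0=0.10):
    """Lumped first-order approach of moisture content X(t) to equilibrium Xeq."""
    return xeq - (xeq - x0) * np.exp(-k * t)

# Hypothetical soaking data at one temperature: time [h] vs. moisture (dry basis)
t_data = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 10.0, 16.0, 24.0])
x_data = np.array([0.10, 0.45, 0.70, 1.00, 1.15, 1.28, 1.33, 1.35])

params, _ = curve_fit(hydration, t_data, x_data, p0=[1.3, 0.3])
xeq_fit, k_fit = params
deviation = 100 * np.max(np.abs(x_data - hydration(t_data, *params)) / x_data)
print(f"Xeq = {xeq_fit:.3f}, k = {k_fit:.3f} 1/h, max deviation = {deviation:.1f}%")
```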
Abstract:
Liberalization of electricity markets has resulted in a competitive Nordic electricity market, in which electricity retailers play a key role as electricity suppliers, market intermediaries, and service providers. Although these roles may remain unchanged in the near future, the retailers' operation may change fundamentally as a result of the emerging smart grid environment. In particular, the increasing amount of distributed energy resources (DER) and the improving opportunities for their control are reshaping the operating environment of the retailers. This requires that the retailers' operation models be developed to match an operating environment in which the active use of DER plays a major role. Electricity retailers have a clientele and operate actively in the electricity markets, which makes them a natural market party to offer new services to end-users aiming at an efficient, market-based use of DER. From the retailer's point of view, the active use of DER can provide means to adapt the operation to meet the challenges posed by the smart grid environment, and to pursue the retailer's ultimate objective, which is to maximize the profit of operation. This doctoral dissertation introduces a methodology for the comprehensive use of DER in an electricity retailer's short-term profit optimization that covers operation in a variety of marketplaces including day-ahead, intra-day, and reserve markets. The analysis results provide data on the key profit-making opportunities and the risks associated with different types of DER use. Therefore, the methodology may serve as an efficient tool for an experienced operator in planning the optimal market-based use of DER. The key contributions of this doctoral dissertation lie in the analysis and development of the model that allows the retailer to benefit from the profit-making opportunities brought by the use of DER in different marketplaces, and also to manage the major risks involved in the active use of DER. In addition, the dissertation introduces an analysis of the economic potential of DER control actions in different marketplaces, including the day-ahead Elspot market, the balancing power market, and the hourly market of Frequency Containment Reserve for Disturbances (FCR-D).
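As a toy illustration of the market-allocation idea (not the dissertation's full stochastic multi-market model), the sketch below allocates a hypothetical controllable DER capacity between a day-ahead energy market and a reserve capacity market by linear programming; all prices, the capacity value, and the single-period structure are assumptions:

```python
# Hedged sketch: maximize a retailer's profit from allocating controllable
# DER capacity between day-ahead energy and reserve capacity markets.
import numpy as np
from scipy.optimize import linprog

hours = 24
rng = np.random.default_rng(0)
p_spot = 40 + 15 * rng.random(hours)     # hypothetical day-ahead prices [EUR/MWh]
p_reserve = 5 + 3 * rng.random(hours)    # hypothetical reserve prices [EUR/MW]
der_cap = 2.0                            # assumed controllable DER capacity [MW]

# Variables per hour: x_spot (energy sold), x_res (capacity reserved).
# Maximize sum(p_spot*x_spot + p_reserve*x_res) s.t. x_spot + x_res <= der_cap.
c = -np.concatenate([p_spot, p_reserve])      # linprog minimizes, so negate
A_ub = np.hstack([np.eye(hours), np.eye(hours)])
b_ub = np.full(hours, der_cap)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, der_cap))
print(f"optimal daily profit: {-res.fun:.1f} EUR")
```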
Abstract:
Fifty-six percent of Canadians, 20 years of age and older, are inactive (Canadian Community Health Survey, 2000/2001). Research has indicated that one of the most dramatic declines in population physical activity occurs between adolescence and young adulthood (Malina, 2001; Stephens, Jacobs, & White, 1985), a time when individuals this age are entering or attending college or university. Colleges and universities have generally been seen as environments where physical activity and sport can be promoted and accommodated as a result of the available resources and facilities (Archer, Probert, & Gagne, 1987; Suminski, Petosa, Utter, & Zhang, 2002). Intramural sports, one of the most common campus recreational sports options available for post-secondary students, enable students to participate in activities that are suited for different levels of ability and interest (Lewis, Jones, Lamke, & Dunn, 1998). While intramural sports can positively affect the physical activity levels and sport participation rates of post-secondary students, their true value lies in their ability to encourage sport participation after school ends and during the post-school lives of graduates (Forrester, Ross, Geary, & Hall, 2007). This study used the Sport Commitment Model (Scanlan et al., 1993a) and the Theory of Planned Behaviour (Ajzen, 1991) with post-secondary intramural volleyball participants in an effort to examine students' commitment to intramural sport and intentions to participate in intramural sports. More specifically, the research objectives of this study were to: (1) test the Sport Commitment Model with a sample of post-secondary intramural sport participants; (2) determine the utility of the sixth construct, social support, in explaining the sport commitment of post-secondary intramural sport participants; (3) determine if there are any significant differences in the six constructs of the SCM and sport commitment between gender, level of competition (competitive A vs. B), and number of different intramural sports played; (4) determine if there are any significant differences between sport commitment levels and constructs from the Theory of Planned Behaviour (attitudes, subjective norms, perceived behavioural control, and intentions); (5) determine the relationship between sport commitment and intention to continue participating in intramural volleyball, to continue participating in intramurals, and to continue participating in sport and physical activity after graduation; and (6) determine if the level of sport commitment changes the relationship between the constructs from the Theory of Planned Behaviour. Of the 318 surveys distributed, 302 participants from the sample of post-secondary intramural sport participants completed a usable survey. There was a fairly even split of males and females; the average age of the students was twenty-one; 90% were undergraduate students; for approximately 25% of the students, volleyball was the only intramural sport they participated in at Brock, and most were part of the volleyball competitive B division. Based on the post-secondary students' responses, there are indications of intent to continue participation in sport and physical activity. The participation of the students is predominantly influenced by subjective norms, high sport commitment, and high sport enjoyment. This implies that students expect, intend, and want to participate in intramurals in the future; that they are very dedicated to playing on an intramural team and would be willing to do a lot to keep playing; and that students want to participate when they perceive their pursuits as enjoyable and fun, and it makes them happy. These are key areas that should be targeted and pursued by sport practitioners.
Abstract:
The objective of this thesis is to present various applications of the distributed conditional computation research program. We hope that these applications, together with the theory presented here, will lead to a general solution of the artificial intelligence problem, in particular with regard to the need for efficiency. The vision of distributed conditional computation is to accelerate the evaluation and training of deep models, which is very different from the usual objective of improving their generalization and optimization ability. The work presented here has close ties to mixture-of-experts models. In Chapter 2, we present a new deep learning algorithm that uses a simple form of reinforcement learning on a neural-network-based decision tree model. We demonstrate the necessity of a balance constraint to keep the distribution of examples across experts uniform and to prevent monopolies. To make computation efficient, training and evaluation are constrained to be sparse through a router that samples experts from a multinomial distribution given an example. In Chapter 3, we present a new deep model consisting of a sparse representation divided into expert segments. A neural-network-based language model is built from the sparse transformations between these segments. The block-sparse operation is implemented for use on graphics cards. Its speed is compared against two dense operations of the same caliber to demonstrate the real computational gain that can be obtained. A deep model using sparse operations controlled by a router distinct from the experts is trained on a one-billion-word dataset. A new data partitioning algorithm is applied to a set of words to build a hierarchy over the output layer of a language model, making it much more efficient. The work presented in this thesis is central to the vision of distributed conditional computation put forward by Yoshua Bengio. It attempts to apply research from the field of mixtures of experts to deep models in order to improve their speed as well as their optimization ability. We believe the theory and experiments of this thesis are an important step on the road to distributed conditional computation, as it frames the problem well, especially with regard to the competitiveness of expert systems.
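A minimal sketch of the routing mechanism described for Chapter 2 might look as follows; this is an assumed illustration in plain NumPy, not the thesis code, and the dimensions, weights, and penalty form are arbitrary:

```python
# Hedged sketch: a sparse mixture-of-experts layer where a router samples one
# expert per example from a multinomial (softmax) distribution, plus a
# load-balancing penalty that discourages expert monopolies.
import numpy as np

rng = np.random.default_rng(0)
n_experts, d_in, d_out, batch = 4, 8, 8, 32

W_router = rng.normal(scale=0.1, size=(d_in, n_experts))
W_experts = rng.normal(scale=0.1, size=(n_experts, d_in, d_out))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

x = rng.normal(size=(batch, d_in))
probs = softmax(x @ W_router)                                   # routing distribution
choice = np.array([rng.choice(n_experts, p=p) for p in probs])  # sample one expert

# Sparse forward pass: each example is processed only by its sampled expert.
y = np.einsum("bi,bio->bo", x, W_experts[choice])

# Balance constraint (as a penalty): mean routing probability per expert
# should stay near uniform 1/n_experts, otherwise a monopoly is forming.
usage = probs.mean(axis=0)
balance_penalty = ((usage - 1.0 / n_experts) ** 2).sum()
print("expert usage:", np.round(usage, 3), "penalty:", balance_penalty)
```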
Abstract:
Sharing of information with those in need of it has always been an idealistic goal of networked environments. With the proliferation of computer networks, information is so widely distributed among systems that it is imperative to have well-organized schemes for retrieval and also discovery. This thesis attempts to investigate the problems associated with such schemes and suggests a software architecture aimed at achieving meaningful discovery. The usage of information elements as a modelling base for efficient information discovery in distributed systems is demonstrated with the aid of a novel conceptual entity called the infotron. The investigations are focused on distributed systems and their associated problems. The study was directed towards identifying a suitable software architecture and incorporating it in an environment where information growth is phenomenal and a proper mechanism for carrying out information discovery becomes feasible. An empirical study undertaken with the aid of an election database of geographically distributed constituencies provided the insights required. This is manifested in the Election Counting and Reporting Software (ECRS) System. The ECRS system is a software system, essentially distributed in nature, designed to prepare reports for district administrators about the election counting process and to generate other miscellaneous statutory reports. Most distributed systems of the nature of ECRS normally possess a "fragile architecture" that makes them amenable to collapse with the occurrence of minor faults. This is resolved with the help of the proposed penta-tier architecture, which contains five different technologies at different tiers of the architecture. The results of the experiment conducted and its analysis show that such an architecture helps to keep the different components of the software intact and impermeable to any internal or external faults. The architecture thus evolved needed a mechanism to support information processing and discovery. This necessitated the introduction of the novel concept of infotrons. Further, when a computing machine has to perform any meaningful extraction of information, it is guided by what is termed an infotron dictionary. The other empirical study was to find out which of the two prominent markup languages, namely HTML and XML, is best suited for the incorporation of infotrons. A comparative study of 200 documents in HTML and XML was undertaken. The result was in favor of XML. The concepts of the infotron and the infotron dictionary that were developed were applied to implement an Information Discovery System (IDS). IDS is essentially a system that starts with the infotron(s) supplied as clue(s) and results in brewing the information required to satisfy the need of the information discoverer, utilizing the documents available at its disposal (as information space). The various components of the system and their interaction follow the penta-tier architectural model and can therefore be considered fault-tolerant. IDS is generic in nature, and its characteristics and specifications were drawn up accordingly. Many subsystems interacted with multiple infotron dictionaries maintained in the system. In order to demonstrate the working of the IDS and to discover information without modification of a typical Library Information System (LIS), an Information Discovery in Library Information System (IDLIS) application was developed. IDLIS is essentially a wrapper for the LIS, which maintains all the databases of the library. The purpose was to demonstrate that the functionality of a legacy system could be enhanced with the augmentation of IDS, leading to an information discovery service. IDLIS demonstrates IDS in action and proves that any legacy system can be augmented with IDS effectively to provide the additional functionality of an information discovery service. Possible applications of IDS and the scope for further research in the field are covered.
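To illustrate the infotron-dictionary idea, here is a hedged sketch in which a dictionary maps infotron names to XML element paths and is used to discover matching fragments in an election-style document; the dictionary entries, element names, and data are invented for illustration, not taken from the thesis:

```python
# Hedged sketch: an "infotron dictionary" mapping conceptual infotrons to XML
# element paths, used to discover matching fragments in a document.
import xml.etree.ElementTree as ET

# Hypothetical infotron dictionary: infotron name -> element path
infotron_dict = {
    "constituency": ".//constituency/name",
    "votes_counted": ".//constituency/votes",
}

doc = ET.fromstring(
    "<results>"
    "<constituency><name>Central</name><votes>45210</votes></constituency>"
    "<constituency><name>North</name><votes>39872</votes></constituency>"
    "</results>"
)

def discover(root, infotron):
    """Resolve an infotron clue against a document via the dictionary."""
    path = infotron_dict.get(infotron)
    return [el.text for el in root.findall(path)] if path else []

print(discover(doc, "constituency"))   # ['Central', 'North']
print(discover(doc, "votes_counted"))  # ['45210', '39872']
```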
Abstract:
Genetic programming is known to provide good solutions for many problems, such as the evolution of network protocols and distributed algorithms. In such cases it is most likely a hardwired module of a design framework that assists the engineer in optimizing specific aspects of the system to be developed. It provides its results in a fixed format through an internal interface. In this paper we show how the utility of genetic programming can be increased remarkably by isolating it as a component and integrating it into the model-driven software development process. Our genetic programming framework produces XMI-encoded UML models that can easily be loaded into widely available modeling tools, which in turn possess code generation as well as additional analysis and test capabilities. We use the evolution of a distributed election algorithm as an example to illustrate how genetic programming can be combined with model-driven development. This example clearly illustrates the advantages of our approach: the generation of source code in different programming languages.
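As a self-contained illustration of genetic programming used as an isolated component, the following toy sketch evolves arithmetic expression trees against a known target function; it is an assumed example, since the paper's framework instead emits XMI-encoded UML models:

```python
# Hedged sketch: a minimal genetic-programming loop over expression trees.
# The representation, operators, and target function are illustrative only.
import random, operator

OPS = [(operator.add, "+"), (operator.sub, "-"), (operator.mul, "*")]
TERMS = ["x", 1.0, 2.0]

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    return (random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    (fn, _), left, right = tree
    return fn(evaluate(left, x), evaluate(right, x))

def fitness(tree):
    # Error against the target x^2 + x over sample points (lower is better).
    return sum(abs(evaluate(tree, x) - (x * x + x)) for x in range(-5, 6))

def mutate(tree):
    if random.random() < 0.2:
        return random_tree(2)          # replace this subtree with a fresh one
    if isinstance(tree, tuple):
        return (tree[0], mutate(tree[1]), mutate(tree[2]))
    return tree                        # keep leaf unchanged

random.seed(1)
population = [random_tree() for _ in range(200)]
for _ in range(30):
    population.sort(key=fitness)
    parents = population[:20]          # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(180)]
print("best fitness:", fitness(min(population, key=fitness)))
```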
Abstract:
Context awareness, dynamic reconfiguration at runtime, and heterogeneity are key characteristics of future distributed systems, particularly in ubiquitous and mobile computing scenarios. The main contributions of this dissertation are theoretical as well as architectural concepts facilitating information exchange and fusion in heterogeneous and dynamic distributed environments. Our main focus is on bridging heterogeneity issues while, at the same time, considering uncertain, imprecise, and unreliable sensor information in information fusion and reasoning approaches. A domain ontology is used to establish a common vocabulary for the exchanged information. We thereby explicitly support different representations for the same kind of information and provide Inter-Representation Operations that convert between them. Special account is taken of the conversion of associated meta-data that express uncertainty and imprecision. The Unscented Transformation, for example, is applied to propagate Gaussian normal distributions across highly non-linear Inter-Representation Operations. Uncertain sensor information is fused using the Dempster-Shafer Theory of Evidence, as it allows explicit modelling of partial and complete ignorance. We also show how to incorporate the Dempster-Shafer Theory of Evidence into probabilistic reasoning schemes such as Hidden Markov Models in order to be able to consider the uncertainty of sensor information when deriving high-level information from low-level data. For all these concepts we provide architectural support as a guideline for developers of innovative information exchange and fusion infrastructures that are particularly targeted at heterogeneous dynamic environments. Two case studies serve as proof of concept. The first case study focuses on heterogeneous autonomous robots that have to spontaneously form a cooperative team in order to achieve a common goal. The second case study is concerned with an approach for user activity recognition which serves as a baseline for a context-aware adaptive application. Both case studies demonstrate the viability and strengths of the proposed solution and emphasize that the Dempster-Shafer Theory of Evidence should be preferred to pure probability theory in applications involving non-linear Inter-Representation Operations.
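Dempster's rule of combination, the fusion step named above, can be sketched in a few lines; the frame of discernment, sensor mass assignments, and values below are illustrative assumptions, not the dissertation's data:

```python
# Hedged sketch: Dempster's rule of combination for two mass functions,
# with explicit partial ignorance as mass on the full frame of discernment.
from itertools import product

def combine(m1, m2):
    """Combine two mass functions given as {frozenset: mass} dictionaries."""
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2        # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two hypothetical sensors reporting on a user's activity.
frame = frozenset({"walk", "sit", "stand"})
sensor1 = {frozenset({"walk"}): 0.6, frame: 0.4}           # 0.4 = ignorance
sensor2 = {frozenset({"walk", "stand"}): 0.7, frame: 0.3}
print(combine(sensor1, sensor2))
```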
Abstract:
In recent decades, macro-scale hydrological models have become established as important tools for comprehensively assessing the state of global renewable freshwater resources. Today they are used to answer a wide range of scientific questions, in particular regarding the impacts of anthropogenic influences on the natural flow regime and the impacts of global change and climate change on water resources. These impacts can be estimated through a variety of water-related indicators, such as renewable (ground)water resources, flood risk, droughts, water stress, and water scarcity. The further development of macro-scale hydrological models has been favored in particular by steadily increasing computing capacity, but also by the growing availability of remote sensing data and derived data products that can be used to drive and improve the models. Like all macro- to global-scale modelling approaches, macro-scale hydrological simulations are subject to considerable uncertainties, which are attributable to (i) spatial input datasets, such as meteorological variables or land surface parameters, and (ii) in particular the (often) simplified representation of physical processes in the model. Given these uncertainties, it is essential to verify the actual applicability and predictive capability of the models under diverse climatic and physiographic conditions. So far, however, most evaluation studies have been carried out in only a few large river basins or have focused on continental water fluxes. This contrasts with many application studies whose analyses and conclusions are based on simulated state variables and fluxes at a much finer spatial resolution (grid cell). The core of this dissertation is a comprehensive evaluation of the general applicability of the global hydrological model WaterGAP3 for simulating monthly flow regimes and low and high flows, based on more than 2400 discharge time series for the period 1958-2010. The river basins considered represent a broad spectrum of climatic and physiographic conditions; basin size ranges from 3000 to several million square kilometers. The model evaluation has two objectives: first, the model performance achieved is to serve as a benchmark against which any further model improvements can be compared. Second, a method for diagnostic model evaluation is to be developed and tested that identifies clear starting points for model improvement where model performance is insufficient. To this end, complementary performance metrics are linked with nine basin parameters that quantify the climatic and physiographic conditions as well as the degree of anthropogenic influence in the individual basins. WaterGAP3 achieves medium to high model performance for the simulation of both monthly flow regimes and low and high flows, but clear spatial patterns are evident for all performance metrics considered. Of the nine basin characteristics considered, the degree of aridity and the mean basin slope in particular exert a strong influence on model performance. The model tends to overestimate annual discharge volume with increasing aridity. This behavior is characteristic of macro-scale hydrological models and is attributable to the inadequate representation of runoff generation and concentration processes in water-limited regions. In steep basins, low model performance is found with regard to the representation of monthly flow variability and temporal dynamics, which is also reflected in the quality of the low- and high-flow simulations. This observation points to necessary model improvements regarding (i) the partitioning of total runoff into fast and delayed runoff components and (ii) the calculation of flow velocity in the river channel. The method for diagnostic model evaluation developed in this dissertation, which links complementary performance metrics with basin characteristics, was tested using the WaterGAP3 model as an example. The method has proven to be an efficient tool for explaining spatial patterns in model performance and for identifying deficits in the model structure. The method developed is generally applicable to any hydrological model. However, it is particularly relevant for macro-scale models and multi-basin studies, as it can partially compensate for the lack of field-specific knowledge and targeted measurement campaigns that are usually relied on in catchment modelling.
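The diagnostic-evaluation idea of linking a performance metric to basin attributes can be sketched as follows; the discharge series and aridity values are synthetic placeholders rather than WaterGAP3 output or observed data:

```python
# Hedged sketch: compute a per-basin performance metric (Nash-Sutcliffe
# efficiency) and relate it to a basin attribute such as aridity.
import numpy as np
from scipy.stats import spearmanr

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 = perfect, < 0 = worse than the mean."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(42)
n_basins, n_months = 50, 120
aridity = rng.uniform(0.2, 5.0, n_basins)       # hypothetical aridity index

scores = []
for a in aridity:
    obs = rng.gamma(2.0, 10.0, n_months)        # synthetic monthly discharge
    bias = 1.0 + 0.1 * a                        # mimic overestimation in arid basins
    sim = bias * obs + rng.normal(0.0, 2.0, n_months)
    scores.append(nse(obs, sim))

rho, pval = spearmanr(aridity, scores)
print(f"Spearman rho(aridity, NSE) = {rho:.2f} (p = {pval:.3g})")
```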
Abstract:
Linear graph reduction is a simple computational model in which the cost of naming things is explicitly represented. The key idea is the notion of "linearity". A name is linear if it is only used once, so with linear naming you cannot create more than one outstanding reference to an entity. As a result, linear naming is cheap to support and easy to reason about. Programs can be translated into the linear graph reduction model such that linear names in the program are implemented directly as linear names in the model. Nonlinear names are supported by constructing them out of linear names. The translation thus exposes those places where the program uses names in expensive, nonlinear ways. Two applications demonstrate the utility of using linear graph reduction. First, in the area of distributed computing, linear naming makes it easy to support cheap cross-network references and highly portable data structures. Linear naming also facilitates demand-driven migration of tasks and data around the network without requiring explicit guidance from the programmer. Second, linear graph reduction reveals a new characterization of the phenomenon of state. Systems in which state appears are those which depend on certain global system properties. State is not a localizable phenomenon, which suggests that our usual object-oriented metaphor for state is flawed.
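A minimal sketch (an assumed illustration, not the thesis's graph-reduction machinery) of the use-once discipline behind linear naming:

```python
# Hedged sketch: a reference that can be consumed exactly once, so no second
# outstanding reference to the entity can ever be created.
class LinearRef:
    def __init__(self, value):
        self._value = value
        self._consumed = False

    def consume(self):
        """Return the referent and invalidate the reference (use-once)."""
        if self._consumed:
            raise RuntimeError("linear reference already used")
        self._consumed = True
        value, self._value = self._value, None
        return value

ref = LinearRef([1, 2, 3])
data = ref.consume()        # first use: fine
try:
    ref.consume()           # second use: rejected, linearity preserved
except RuntimeError as e:
    print("error:", e)
```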
Abstract:
Research on autonomous intelligent systems has focused on how robots can robustly carry out missions in uncertain and harsh environments with very little or no human intervention. Robotic execution languages such as RAPs, ESL, and TDL improve robustness by managing functionally redundant procedures for achieving goals. The model-based programming approach extends this by guaranteeing correctness of execution through pre-planning of non-deterministic timed threads of activities. Executing model-based programs effectively on distributed autonomous platforms requires distributing this pre-planning process. This thesis presents a distributed planner for model-based programs whose planning and execution is distributed among agents with widely varying levels of processor power and memory resources. We make two key contributions. First, we reformulate a model-based program, which describes cooperative activities, into a hierarchical dynamic simple temporal network. This enables efficient distributed coordination of robots and supports deployment on heterogeneous robots. Second, we introduce a distributed temporal planner, called DTP, which solves hierarchical dynamic simple temporal networks with the assistance of the distributed Bellman-Ford shortest path algorithm. The implementation of DTP has been demonstrated successfully on a wide range of randomly generated examples and on a pursuer-evader challenge problem in simulation.
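Since a simple temporal network is consistent exactly when its distance graph contains no negative cycle, the Bellman-Ford routine that DTP builds on can be sketched as a consistency check; here it runs on one machine, whereas DTP distributes the relaxation among agents, and the example events and bounds are invented:

```python
# Hedged sketch: checking consistency of a simple temporal network (STN)
# via Bellman-Ford negative-cycle detection on the distance graph.
def bellman_ford_consistent(n, edges, source=0):
    """edges: (u, v, w) meaning t_v - t_u <= w. Returns (consistent, dist)."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0.0
    for _ in range(n - 1):                 # relax all edges n-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                  # a further improvement implies a
        if dist[u] + w < dist[v]:          # negative cycle: inconsistent STN
            return False, dist
    return True, dist

# Events: 0 = reference, 1 = robot departs, 2 = robot arrives.
# Depart within [0, 5] of the reference; travel takes between 2 and 4 units.
edges = [(0, 1, 5), (1, 0, 0), (1, 2, 4), (2, 1, -2)]
ok, dist = bellman_ford_consistent(3, edges)
print("consistent:", ok)
print("latest feasible times t_v - t_0:", dist)
```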
Abstract:
This work extends previously developed research concerning the use of local model predictive control in differential-drive mobile robots. Experimental results are presented as a way to improve the methodology by considering aspects such as trajectory accuracy and time performance. In this sense, the cost function and the prediction horizon are important aspects to be considered. The aim of the present work is to test the control method by measuring trajectory tracking accuracy and time performance. Moreover, strategies for integration with the perception system and path planning are briefly introduced. In this sense, monocular image data can be used to plan safe trajectories by using goal-attraction potential fields.
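A minimal receding-horizon sketch of the control idea, with a unicycle (differential-drive) prediction model and a quadratic tracking cost; the horizon length, weights, input bounds, and reference path are illustrative assumptions, not the paper's tuned setup:

```python
# Hedged sketch: local model predictive control for a differential-drive robot.
# At each step, optimize (v, w) inputs over a short horizon against a
# quadratic trajectory-tracking cost, then apply only the first input.
import numpy as np
from scipy.optimize import minimize

DT, H = 0.1, 5                           # time step [s] and prediction horizon

def rollout(state, inputs):
    """Predict the unicycle trajectory for a flat (v0, w0, v1, w1, ...) vector."""
    x, y, th = state
    traj = []
    for v, w in inputs.reshape(H, 2):
        x += DT * v * np.cos(th)
        y += DT * v * np.sin(th)
        th += DT * w
        traj.append((x, y))
    return np.array(traj)

def cost(inputs, state, ref):
    tracking = np.sum((rollout(state, inputs) - ref) ** 2)  # tracking error
    effort = 1e-3 * np.sum(inputs ** 2)                     # small input penalty
    return tracking + effort

state = np.array([0.0, 0.0, 0.0])                    # x, y, heading
ref = np.column_stack([np.linspace(0.1, 0.5, H),     # reference: straight line
                       np.linspace(0.1, 0.5, H)])    # at 45 degrees
bounds = [(-1.0, 1.0), (-2.0, 2.0)] * H              # v and w limits per step
res = minimize(cost, np.zeros(2 * H), args=(state, ref), bounds=bounds)
v_cmd, w_cmd = res.x[:2]                             # receding horizon: first input only
print(f"first command: v = {v_cmd:.2f} m/s, w = {w_cmd:.2f} rad/s")
```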
Abstract:
Wednesday 2nd April 2014. Speaker: Stefan Decker. Time: 11:00-11:50. Location: B2/1083. Abstract: Ontologies have been promoted and used for knowledge sharing. Several models for representing ontologies have been developed in the Knowledge Representation field, in particular in association with the Semantic Web. In my talk I will summarise developments so far, and will argue that the currently advocated approaches miss certain basic properties of current distributed information sharing infrastructures (read: the Web and the Internet). I will sketch an alternative model aiming to support knowledge sharing and re-use on a global basis.