813 results for Constraint based modelling
Abstract:
The energy consumption of IT equipment is becoming an issue of increasing importance. In particular, network equipment such as routers and switches is a major contributor to the energy consumption of the Internet. It is therefore important to understand the relationship between input parameters such as bandwidth, number of active ports, traffic load and hibernation mode and their impact on the energy consumption of a switch. In this paper, the energy consumption of a switch is analyzed in extensive experiments. A fuzzy rule-based model of the energy consumption of a switch is proposed based on the results of these experiments. The model can be used to predict the energy saving when deploying new switches, by controlling the parameters to achieve the desired energy consumption and subsequent performance. Furthermore, the model can also be used in further research on energy-saving techniques such as energy-efficient routing protocols and dynamic link shutdown.
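A fuzzy rule-based power model of the kind proposed can be sketched as follows; the membership functions, rule set and consequent wattages below are illustrative assumptions rather than the values fitted from the paper's experiments.

```python
# Minimal sketch of a fuzzy rule-based power model for a switch.
# Membership functions, rules and the consequent wattages are
# illustrative assumptions, not the values fitted in the paper.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def memberships(load, ports):
    """Fuzzify traffic load (0-1) and active-port count (0-48)."""
    return {
        ("load", "low"):   tri(load, -0.5, 0.0, 0.5),
        ("load", "high"):  tri(load, 0.5, 1.0, 1.5),
        ("ports", "few"):  tri(ports, -24, 0, 24),
        ("ports", "many"): tri(ports, 24, 48, 72),
    }

def predict_power(load, ports):
    """Zero-order Sugeno inference: weighted average of rule outputs."""
    m = memberships(load, ports)
    rules = [  # (antecedents, consequent power in watts - assumed)
        ([("load", "low"), ("ports", "few")], 60.0),
        ([("load", "low"), ("ports", "many")], 75.0),
        ([("load", "high"), ("ports", "few")], 70.0),
        ([("load", "high"), ("ports", "many")], 95.0),
    ]
    num = den = 0.0
    for antecedents, watts in rules:
        strength = min(m[a] for a in antecedents)  # AND = min
        num += strength * watts
        den += strength
    return num / den if den else 0.0

print(predict_power(load=0.8, ports=40))  # a high-load, many-port estimate
```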
Abstract:
In today's complex and volatile business environment, companies that are able to turn the operational data they produce into data warehouses can gain a significant competitive advantage. Using predictive analytics to anticipate future trends enables companies to identify the key factors that allow them to stand out from their competitors. Incorporating predictive analytics into the decision-making process enables more agile, real-time decision making. The purpose of this master's thesis is to assemble a theoretical framework for analytics modelling from the perspective of a business end user and to apply this modelling process to the thesis's case company. The theoretical model was used to model customer relationships and to identify predictive factors for sales forecasting. The work was carried out for a Finnish wholesaler of industrial filters with operations in Finland, Russia and the Baltic countries. The study is a quantitative case study in which the case company's transaction data served as the most important data collection method. The data for the work was obtained from the company's enterprise resource planning (ERP) system.
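As a hedged illustration of the modelling process described (not the thesis's actual model), predictive factors derived from transaction data could feed a simple regression-based sales forecast:

```python
# Illustrative sketch only: a simple sales forecast from features
# derived from transaction data, standing in for the modelling process
# described above. The features and the choice of linear regression
# are assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy monthly features: previous month's sales, number of active
# customers, and average order size.
X = np.array([
    [120.0, 35, 3.4],
    [135.0, 38, 3.6],
    [128.0, 36, 3.5],
    [150.0, 41, 3.9],
])
y = np.array([135.0, 128.0, 150.0, 162.0])  # next month's sales

model = LinearRegression().fit(X, y)
print(model.predict([[162.0, 43, 4.0]]))  # forecast for the coming month
```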
Abstract:
Human beings have always strived to preserve their memories and spread their ideas. In the beginning this was always done through human interpretations, such as telling stories and creating sculptures. Later, technological progress made it possible to create a recording of a phenomenon; first as an analogue recording onto a physical object, and later digitally, as a sequence of bits to be interpreted by a computer. By the end of the 20th century technological advances had made it feasible to distribute media content over a computer network instead of on physical objects, thus enabling the concept of digital media distribution. Many digital media distribution systems already exist, and their continued, and in many cases increasing, use indicates strong interest in their future enhancement and enrichment. By looking at these digital media distribution systems, we have identified three main areas of possible improvement: network structure and coordination, transport of content over the network, and the encoding used for the content. In this thesis, our aim is to show that improvements in performance, efficiency and availability can be made in conjunction with improvements in software quality and reliability through the use of formal methods: mathematical approaches to reasoning about software that allow us to prove its correctness together with other desirable properties. We envision a complete media distribution system based on a distributed architecture, such as peer-to-peer networking, in which the different parts of the system have been formally modelled and verified. Starting with the network itself, we show how it can be formally constructed and modularised in the Event-B formalism, such that the modelling of one node is separated from the modelling of the network itself. We also show how the piece selection algorithm in the BitTorrent peer-to-peer transfer protocol can be adapted for on-demand media streaming, and how this can be modelled in Event-B. Furthermore, we show how modelling one peer in Event-B can give results similar to simulating an entire network of peers. Going further, we introduce a formal specification language for content transfer algorithms, and show that having such a language can make these algorithms easier to understand. We also show how generating Event-B code from this language can result in less complexity compared to creating the models from written specifications. We also consider the decoding part of a media distribution system by showing how video decoding can be done in parallel. This is based on formally defined dependencies between frames and blocks in a video sequence; we have shown that this step, too, can be performed in a way that is mathematically proven correct. Most of the modelling and proving in this thesis is tool-based. This demonstrates the advance of formal methods as well as their increased reliability, and thus advocates for their more widespread use in the future.
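The adaptation of BitTorrent's piece selection to on-demand streaming mentioned above typically balances playback deadlines against piece rarity. The sketch below illustrates that idea only; the window size and scoring rule are our assumptions, not the formally verified Event-B model:

```python
# Sketch of deadline-aware piece selection for streaming, in the
# spirit of adapting BitTorrent's rarest-first policy. The window
# size and tie-breaking rule are illustrative assumptions.
def next_piece(have, availability, playback_pos, window=8):
    """Pick the next piece to request.

    have: set of piece indices already downloaded
    availability: list mapping piece index -> number of peers having it
    playback_pos: index of the piece the player needs next
    """
    # Pieces inside the playback window are urgent: take the earliest.
    for i in range(playback_pos, min(playback_pos + window, len(availability))):
        if i not in have:
            return i
    # Outside the window, fall back to rarest-first.
    candidates = [i for i in range(len(availability)) if i not in have]
    return min(candidates, key=lambda i: availability[i], default=None)

# Example: piece 3 is the first one missing inside the window.
print(next_piece(have={0, 1, 2}, availability=[5, 4, 3, 2, 6, 1], playback_pos=1))
```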
Abstract:
IoT consists essentially of thousands of tiny sensor nodes interconnected to the Internet, each of which executes its programmed functions under memory and power limitations. The sensor nodes are distributed mainly for gathering data in various situations. IoT envisions future technologies such as e-health, smart cities, automobile automation, construction-site automation, and smart homes. Secure communication of data under memory and energy constraints is a major challenge in IoT. Authentication is the first and an important phase of secure communication. This study presents a protocol for authenticating resource-constrained devices in physical proximity solely by using their shared wireless communication interfaces. This model of authentication relies only on the abundance of ambient radio signals and authenticates in less than a second. To evaluate the designed protocol, Sky motes are emulated in a network environment simulated by Contiki/COOJA. The results presented in this study show that the approach is immune to passive and active attacks. An adversary located as near as two metres can be identified in less than a second with minimal expense of energy. Since only the radio device is required as hardware for authentication, the technique is scalable and interoperable with the heterogeneous nature of IoT.
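Proximity authentication from ambient radio signals generally rests on the observation that co-located devices see strongly correlated signal fluctuations while distant ones do not. A minimal sketch of that idea follows; the correlation threshold and trace length are assumptions, not the protocol's actual parameters:

```python
# Minimal sketch of ambient-radio proximity checking: two devices in
# physical proximity observe highly correlated RSSI fluctuations,
# while a distant adversary does not. Threshold and window length
# are illustrative assumptions, not the protocol's parameters.
import numpy as np

def proximity_score(rssi_a, rssi_b):
    """Pearson correlation between two RSSI traces (dBm)."""
    a = np.asarray(rssi_a, dtype=float)
    b = np.asarray(rssi_b, dtype=float)
    return float(np.corrcoef(a, b)[0, 1])

def authenticate(rssi_a, rssi_b, threshold=0.9):
    """Accept the peer only if its radio environment matches ours."""
    return proximity_score(rssi_a, rssi_b) >= threshold

rng = np.random.default_rng(0)
ambient = rng.normal(-70, 5, 100)          # shared ambient signal
near = ambient + rng.normal(0, 0.5, 100)   # co-located device
far = rng.normal(-70, 5, 100)              # adversary some metres away
print(authenticate(ambient, near), authenticate(ambient, far))
```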
Abstract:
The electricity distribution sector will face significant changes in the future. Increasing reliability demands will call for major network investments. At the same time, electricity end-use is undergoing profound changes. The changes include future energy technologies and other advances in the field. New technologies such as microgeneration and electric vehicles will have different kinds of impacts on electricity distribution network loads. In addition, smart metering provides more accurate electricity consumption data and opportunities to develop sophisticated load modelling and forecasting approaches. Thus, there are both demands and opportunities to develop a new type of long-term forecasting methodology for electricity distribution. The work concentrates on the technical and economic perspectives of electricity distribution. The doctoral dissertation proposes a methodology to forecast electricity consumption in the distribution networks. The forecasting process consists of a spatial analysis, clustering, end-use modelling, scenarios and simulation methods, and the load forecasts are based on the application of automatic meter reading (AMR) data. The developed long-term forecasting process produces power-based load forecasts. By applying these results, it is possible to forecast the impacts of changes on electrical energy in the network, and further, on the distribution system operator’s revenue. These results are applicable to distribution network and business planning. This doctoral dissertation includes a case study, which tests the forecasting process in practice. For the case study, the most prominent future energy technologies are chosen, and their impacts on the electrical energy and power on the network are analysed. The most relevant topics related to changes in the operating environment, namely energy efficiency, microgeneration, electric vehicles, energy storages and demand response, are discussed in more detail. The study shows that changes in electricity end-use may have radical impacts both on electrical energy and power in the distribution networks and on the distribution revenue. These changes will probably pose challenges for distribution system operators. The study suggests solutions for the distribution system operators on how they can prepare for the changing conditions. It is concluded that a new type of load forecasting methodology is needed, because the previous methods are no longer able to produce adequate forecasts.
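As a hedged illustration of the clustering step in such a forecasting process, AMR load profiles could be grouped by consumption pattern before per-cluster load models are built; the number of clusters and the synthetic profiles below are assumptions:

```python
# Sketch of one step in an AMR-based forecasting process: clustering
# customers by their normalised hourly load profiles. The number of
# clusters and the toy data are assumptions for illustration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# 40 customers x 24 hourly readings: two synthetic behaviour types.
day_peak = np.exp(-((np.arange(24) - 12) ** 2) / 20)  # midday peak
night_heat = np.roll(day_peak, 12)                    # night-time peak
profiles = np.vstack([
    day_peak + rng.normal(0, 0.05, (20, 24)),
    night_heat + rng.normal(0, 0.05, (20, 24)),
])
profiles /= profiles.sum(axis=1, keepdims=True)  # normalise total energy

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)
print(labels)  # customer groups used later for per-cluster load models
```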
Abstract:
The aim of this study was to contribute to current knowledge-based theory by focusing on a research gap that exists in the empirically proven determination of the simultaneous but differentiable effects of intellectual capital (IC) assets and knowledge management (KM) practices on organisational performance (OP). The analysis was built on past research and theoreticised interactions between the latent constructs, specified using survey-based items measured from a sample of Finnish companies for IC and KM, and the dependent construct for OP, determined using information available from financial databases. Two measures widely used and commonly recommended in the management science literature, the return on total assets (ROA) and the return on equity (ROE), were calculated for OP. Thus, the relationship between IC and KM impacting OP could be investigated, in relation to the hypotheses formulated, using objectively derived performance indicators. Using financial OP measures also strengthened the dynamic features of the data needed for analysing simultaneous and causal dependencies between the modelled constructs specified in structural path models. The parameters of the structural path models were estimated using a partial least squares-based regression estimator. Results showed that the path dependencies between IC and OP, or KM and OP, were always insignificant when analysed separately from any other interactions or indirect effects caused by simultaneous modelling, regardless of whether ROA or ROE was used as the OP measure. The dependency between the constructs for KM and IC appeared to be very strong and was always significant when modelled simultaneously with the other possible interactions between the constructs, using either ROA or ROE to define OP. This study, however, did not find statistically unambiguous evidence to prove the hypothesised causal mediation effects suggesting, for instance, that the effects of KM practices on OP are mediated by IC assets. Since some indication of fluctuating causal effects was observed, it was concluded that further studies are needed to verify the fundamental and likely hidden causal effects between the constructs of interest. It was therefore also recommended that complementary modelling and data processing be conducted to elucidate whether mediation effects occur between IC, KM and OP; verifying this requires further investigation of the measured items and can be built on the findings of this study.
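As a rough illustration only: a partial least squares regression (a simpler stand-in for the PLS path modelling used in the study) can relate survey-based IC and KM items to a financial OP measure such as ROA. The toy data and single-equation form are assumptions:

```python
# Hedged sketch: relating survey-based IC and KM items to a financial
# performance indicator with PLS regression, standing in for the
# partial least squares path modelling used in the study. The toy
# data and single-equation form are assumptions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
n = 120
ic = rng.normal(size=(n, 4))                            # IC survey items
km = 0.7 * ic[:, :3] + 0.5 * rng.normal(size=(n, 3))    # KM items, tied to IC
roa = 0.4 * km.mean(axis=1) + 0.3 * rng.normal(size=n)  # ROA as OP measure

X = np.hstack([ic, km])
pls = PLSRegression(n_components=2).fit(X, roa)
print(pls.score(X, roa))  # share of OP variance explained by IC + KM items
```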
Abstract:
Increasing amounts of renewable-energy-based electricity production have set high load-control requirements for power grid balancing markets. The essential grid balance between electricity consumption and generation is currently hard to achieve economically with new-generation solutions. Therefore, conventional combustion power generation is examined in this thesis as a solution to the foregoing issue. Circulating fluidized bed (CFB) technology is known to have sufficient scale to act as a large grid-balancing unit. Although the load change rate of a CFB unit is known to be moderately high, a supplementary repowering solution is evaluated in this thesis for load change maximization. The repowering heat duty is delivered to the CFB feedwater preheating section by a smaller gas turbine (GT) unit. Consequently, steam extraction preheating may be decreased, and a large amount of the gas turbine exhaust heat may be utilized in the CFB process to reach maximum plant electrical efficiency. Earlier studies of repowering have focused on efficiency improvements and retrofitting to maximize plant electrical output. This study, however, presents the CFB load change improvement possibilities achieved with supplementary GT heat. The repowering study is prefaced with a literature and theory review of both processes to maximize the accuracy of the research. Both dynamic and steady-state simulations performed with the APROS simulation tool are used to evaluate the effects of repowering on CFB unit operation. Finally, a conceptual-level analysis is completed to compare repowered plant performance to state-of-the-art CFB performance. Based on the performed simulations, considerable improvements to the CFB process parameters are achieved with repowering. Consequently, the results show the possibility of higher ramp rates with repowered CFB technology. This enables better plant suitability for the grid balancing markets.
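A hedged back-of-the-envelope calculation illustrates the repowering idea: gas turbine exhaust heat replaces steam-extraction feedwater preheating, freeing extraction steam for power production. All figures below are assumptions, not APROS results:

```python
# Hedged sketch of the repowering idea: replace steam-extraction
# feedwater preheating with gas turbine exhaust heat and see how the
# combined electrical output changes. All numbers are illustrative
# assumptions, not APROS simulation results.
fuel_cfb = 500.0        # MW fuel input to the CFB boiler (assumed)
eta_cfb = 0.42          # CFB plant electrical efficiency (assumed)
extraction_gain = 10.0  # MW extra steam-turbine output when extraction
                        # preheating is replaced by GT exhaust heat (assumed)

fuel_gt = 60.0          # MW fuel input to the gas turbine (assumed)
eta_gt = 0.35           # GT electrical efficiency (assumed)

p_base = fuel_cfb * eta_cfb
p_repowered = p_base + extraction_gain + fuel_gt * eta_gt

eta_total = p_repowered / (fuel_cfb + fuel_gt)
print(f"base {p_base:.0f} MW -> repowered {p_repowered:.0f} MW, "
      f"overall efficiency {eta_total:.3f}")
```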
Abstract:
Power line modelling has become an interesting research area in recent years as a result of advances in power line distribution network systems. Extensive knowledge of power line cable characteristics can be implemented in a software algorithm in a modern broadband power-line communication modem. In this study, a novel approach to modelling power line cables (AMCMK), based on the broadband impedance spectroscopy (BIS) and transmission line matrix (TLM) techniques, is recommended for characterizing a healthy cable and the various faults associated with low-voltage cables, for both open- and short-circuit situations. Models for different cable conditions are developed and tuned, comprising six models covering both healthy and faulty cable situations. The models are based on impedance response analysis of the cable. The resulting spectra from the simulations are also cross-correlated to determine the degree of similarity between the healthy cable spectra and their respective faulty spectra.
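The impedance-spectroscopy principle behind such models follows from standard transmission line theory: the input impedance of a line of a given length differs measurably between open- and short-circuit terminations, and shifts when a fault changes the effective length or line parameters. A sketch with assumed RLGC parameters:

```python
# Sketch of the impedance-spectroscopy idea: the input impedance of a
# transmission line of length L differs measurably between open- and
# short-circuit terminations (and shifts when a fault changes L or
# the line parameters). The RLGC values below are illustrative
# assumptions, not measured AMCMK cable parameters.
import numpy as np

R, L_, G, C = 0.1, 2.5e-7, 1e-9, 1e-10  # per-metre RLGC (assumed)
length = 100.0                           # cable length in metres
f = np.logspace(4, 8, 5)                 # 10 kHz .. 100 MHz
w = 2 * np.pi * f

gamma = np.sqrt((R + 1j * w * L_) * (G + 1j * w * C))  # propagation constant
z0 = np.sqrt((R + 1j * w * L_) / (G + 1j * w * C))     # characteristic Z

z_open = z0 / np.tanh(gamma * length)   # Zin with open-circuit end
z_short = z0 * np.tanh(gamma * length)  # Zin with short-circuit end
print(np.abs(z_open).round(1))
print(np.abs(z_short).round(1))
```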
Abstract:
Our surrounding landscape is in a constantly dynamic state, but recently the rate of changes and their effects on the environment have increased considerably. In terms of the impact on nature, this development has not been entirely positive, but has rather caused a decline in valuable species, habitats, and general biodiversity. Despite recognition of the problem and its high importance, plans and actions for stopping the detrimental development are largely lacking. This partly originates from a lack of genuine will, but is also due to difficulties in detecting many valuable landscape components, and their consequent neglect. To support knowledge extraction, various digital environmental data sources may be of substantial help, but only if all the relevant background factors are known and the data is processed in a suitable way. This dissertation concentrates on detecting ecologically valuable landscape components by using geospatial data sources, and applies this knowledge to support spatial planning and management activities. In other words, the focus is on observing regionally valuable species, habitats, and biotopes with GIS and remote sensing data, using suitable methods for their analysis. Primary emphasis is given to the hemiboreal vegetation zone and the drastic decline of its semi-natural grasslands, which were created by a long trajectory of traditional grazing and management activities. However, the applied perspective is largely methodological, and allows the obtained results to be applied in various contexts. The dissertation emphasises models based on statistical dependencies and correlations of multiple variables, which are able to extract desired properties from a large mass of initial data. In addition, the included papers combine several data sets from different sources and dates, with the aim of detecting a wider range of environmental characteristics and pointing out their temporal dynamics. The results of the dissertation emphasise the multidimensionality and dynamics of landscapes, which need to be understood in order to recognise their ecologically valuable components. This not only requires knowledge about the emergence of these components and an understanding of the data used, but also requires focusing the observations on minute details that can indicate the existence of fragmented and partly overlapping landscape targets. It also pinpoints the fact that most existing classifications are, as such, too generalised to provide all the required details, but they can be utilized at various steps along a longer processing chain. The dissertation also emphasises landscape history as an important factor, which both creates and preserves ecological values, and which sets an essential standpoint for understanding present landscape characteristics. The obtained results are significant both for preserving semi-natural grasslands and for general methodological development, supporting a science-based framework for evaluating ecological values and guiding spatial planning.
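As a hedged sketch of the kind of statistical model described, the presence of a valuable habitat could be predicted from multi-source geospatial variables; the features, toy data and classifier choice below are illustrative assumptions, not the dissertation's models:

```python
# Hedged sketch: predicting the presence of a valuable habitat (e.g.
# a semi-natural grassland) from multi-source geospatial variables.
# Features, data and model choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
n = 200
# Toy predictors per map cell: NDVI, slope, distance to farmstead,
# and years since last known grazing (from historical maps).
X = np.column_stack([
    rng.uniform(0.1, 0.9, n),     # NDVI
    rng.uniform(0, 25, n),        # slope (degrees)
    rng.uniform(0, 2000, n),      # distance to farmstead (m)
    rng.uniform(0, 80, n),        # years since grazing
])
# Toy rule: recently grazed, moderately vegetated cells are "valuable".
y = ((X[:, 3] < 20) & (X[:, 0] > 0.4)).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.feature_importances_.round(2))  # which variables drive the model
```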
Abstract:
The aim of this research project is to draw on accounts of experiences of border crossing and regulation at the Canada/U.S. border at Niagara in order to illuminate the dynamics of differentiation and inequality at this site. The research is informed by claims that the world is turning into a global village due to transnational flows of technology, information, capital and people. Much of the available literature on globalization shows that while the transfer of technology, information, and capital is enhanced, the transnational movement of people is both facilitated and constrained in complex and unequal ways. In this project, the workings of facilitation and constraint were explored through an analysis of ten interviews with people who had spent a substantial portion of their childhood (e.g. 5 years) in a Canadian border community. The interviewees were, at the time of the research, between the ages of 19 and 25. Because most of the respondents were 'white' Canadians of working- to upper-middle-class status, my focus was to explore how 'whiteness' as privilege may translate into enhanced movement across borders and how 'white' people may internalize and enjoy this privilege but may often deny its reality. I was also interested in how inequality is perceived, understood, and legitimated by these relatively privileged people. My analysis of the ten accounts of border crossing and regulation suggests that differentially situated people experience border crossing differently. An important finding is that while relatively privileged border crossers perceived and often problematized differential treatment based on external factors such as physical appearance, and especially race, most did not challenge such treatment but rather saw it as acceptable. These findings are located within newer literature that addresses the increasing securitization of borders and migration in western societies.
Abstract:
Qualitative spatial reasoning (QSR) is an important field of AI that deals with qualitative aspects of spatial entities. Regions and their relationships are described in qualitative terms instead of numerical values. This approach models human reasoning about such entities more closely than other approaches. Relationships between regions that we encounter in daily life are normally formulated in natural language. For example, one can outline one's room plan to an expert by indicating which rooms should be connected to each other. Mereotopology, as an area of QSR, combines mereology, topology and algebraic methods. As mereotopology plays an important role in region-based theories of space, our focus is on one of the most widely referenced formalisms for QSR, the region connection calculus (RCC). RCC is a first-order theory based on a primitive connectedness relation, a binary symmetric relation satisfying some additional properties. Using this relation, we can define a set of basic binary relations which are jointly exhaustive and pairwise disjoint (JEPD), meaning that between any two spatial entities exactly one of the basic relations holds. Basic reasoning can then be done using the composition operation on relations, whose results are stored in a composition table. Relation algebras (RAs) have become a central tool for spatial reasoning in the area of QSR. These algebras are based on equational reasoning, which can be used to derive further relations between regions in a given situation. Each such algebra describes the relations between regions up to a certain degree of detail. In this thesis we use the method of splitting atoms in an RA in order to reproduce known algebras such as RCC15 and RCC25 systematically, and to generate new algebras, and hence a more detailed description of regions, beyond RCC25.
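Composition-table reasoning of the kind described can be illustrated with a tiny fragment; the table below covers only a few RCC8 part-of relations, whereas the full calculus (and the RCC15/RCC25 algebras studied here) is far larger:

```python
# Toy sketch of composition-table reasoning over a fragment of RCC8.
# Only a few entries are included (the standard ones for the proper-
# part relations); the full RCC8 table is 8x8 with disjunctive
# entries. This is an illustration, not the thesis's RCC15/RCC25 work.
COMPOSE = {
    # (R(x,y), S(y,z)) -> set of possible relations between x and z
    ("NTPP", "NTPP"): {"NTPP"},       # deep nesting stays non-tangential
    ("TPP", "NTPP"): {"NTPP"},
    ("NTPP", "TPP"): {"NTPP"},
    ("TPP", "TPP"): {"TPP", "NTPP"},  # may or may not keep boundary contact
}

def infer(r_xy, s_yz):
    """Possible relations between x and z, given x R y and y S z."""
    return COMPOSE.get((r_xy, s_yz), {"unknown"})

# A room non-tangentially inside a flat, the flat tangentially inside
# a building: the room must be non-tangentially inside the building.
print(infer("NTPP", "TPP"))  # {'NTPP'}
```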
Abstract:
Modern societies depend more and more on software systems, and there is thus growing pressure on development teams to produce high-quality software. Many companies use quality models, suites of programs that analyse and evaluate the quality of other programs, but building quality models is difficult because several questions remain unanswered in the literature. We studied quality-modelling practices at a large company and identified three dimensions where additional research is desirable: support for the subjectivity of quality, techniques for tracking quality as software evolves, and the composition of quality across different levels of abstraction. Regarding subjectivity, we proposed the use of Bayesian models because they are able to handle ambiguous data. We applied our models to the problem of design-defect detection. In a study of two open-source systems, we found that our approach outperforms the rule-based techniques described in the state of the art. To support software evolution, we treated the scores produced by a quality model as signals that can be analysed using data-mining techniques to identify patterns of quality evolution. We studied how design defects appear in and disappear from software systems. Software is typically designed as a hierarchy of components, but quality models do not take this organisation into account. In the last part of the dissertation, we present a two-level quality model. These models have three parts: a model at the component level, a model that evaluates the importance of each component, and another that evaluates the quality of a composite by combining the quality of its components. The approach was tested on predicting change-prone classes from the quality of their methods. We found that our two-level models allow better identification of change-prone classes. Finally, we applied our two-level models to evaluating the navigability of web sites from the quality of their pages. Our models were able to distinguish between sites of very high quality and randomly chosen sites. Throughout the dissertation, we present not only theoretical problems and their solutions, but also experiments conducted to demonstrate the advantages and limitations of our solutions. Our results indicate that the state of the art can be improved in the three dimensions presented. In particular, our work on quality composition and importance modelling is the first to target this problem. We believe that our two-level models are an interesting starting point for further research.
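The two-level idea can be sketched as follows: a lower-level model scores each component, an importance model weights it, and an upper-level model combines the weighted scores. The scoring and weighting rules below are illustrative assumptions, not the dissertation's fitted models:

```python
# Hedged sketch of the two-level quality model idea: a composite's
# quality is a combination of its components' quality scores weighted
# by an importance model. The rules and numbers below are illustrative
# assumptions, not the dissertation's fitted models.
def component_quality(metrics):
    """Lower-level model: score one method from its metrics (toy rule)."""
    loc, complexity = metrics
    return max(0.0, 1.0 - 0.002 * loc - 0.05 * complexity)

def importance(metrics):
    """Importance model: here, longer methods weigh more (assumption)."""
    loc, _ = metrics
    return loc

def class_quality(methods):
    """Upper-level model: importance-weighted mean of method scores."""
    weights = [importance(m) for m in methods]
    scores = [component_quality(m) for m in methods]
    total = sum(weights)
    return sum(w * s for w, s in zip(weights, scores)) / total if total else 0.0

# Three methods as (lines of code, cyclomatic complexity) pairs.
print(class_quality([(30, 2), (120, 10), (15, 1)]))
```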
Abstract:
The thesis deals with some non-linear Gaussian and non-Gaussian time series models, concentrating mainly on the properties and application of a first-order autoregressive process with a Cauchy marginal distribution. Time series relating to prices, consumption, money in circulation, bank deposits and bank clearings, sales and profit in a department store, national income and foreign exchange reserves, and prices and dividends of shares on a stock exchange are examples of economic and business time series. The thesis discusses the application of a threshold autoregressive (TAR) model and attempts to fit this model to time series data. Another important non-linear model considered is the ARCH model, and the third is the TARCH model. The main objective is to identify an appropriate model for a given set of data. The data considered are daily coconut oil prices over a period of three years. Since these are price data, consecutive prices may not be independent, and hence a time-series-based model is more appropriate. The study also examines properties such as ergodicity, mixing and time reversibility, as well as various estimation procedures used to estimate the unknown parameters of the process.
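A Cauchy AR(1) process of this kind can be simulated directly, since the Cauchy family is stable under scaling and independent addition: if X ~ Cauchy(0, s) and the innovation is Cauchy(0, (1 - |a|)s), then aX + e is again Cauchy(0, s). A sketch with assumed parameters:

```python
# Sketch of a first-order autoregressive process with a Cauchy
# marginal: if X ~ Cauchy(0, s) and the innovation is
# Cauchy(0, (1 - |a|) * s), then a*X + e is again Cauchy(0, s),
# by the stability of the Cauchy family. Parameters are assumed.
import numpy as np

def simulate_cauchy_ar1(a=0.6, scale=1.0, n=1000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = scale * rng.standard_cauchy()
    innov_scale = (1 - abs(a)) * scale
    for t in range(1, n):
        x[t] = a * x[t - 1] + innov_scale * rng.standard_cauchy()
    return x

x = simulate_cauchy_ar1()
# The median and interquartile range are meaningful for Cauchy data
# (the mean and variance are not defined).
print(np.median(x), np.percentile(x, 75) - np.percentile(x, 25))
```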
Abstract:
The current study is aimed at the development of a theoretical simulation tool based on the Discrete Element Method (DEM) to interpret the granular dynamics of the solid bed in the cross-section of a horizontal rotating cylinder at the microscopic level, and subsequently to apply this model to establish the transition behaviour, mixing and segregation. The simulation of granular motion developed in this work is based on solving Newton's equations of motion for each particle in the granular bed subjected to collisional forces, external forces and boundary forces. At every instant of time the forces are tracked, and the positions, velocities and accelerations of each particle are updated. The software code for this simulation is written in Visual Fortran 90. After checking the validity of the code with special tests, it is used to investigate the transition behaviour of granular solids motion in the cross-section of a rotating cylinder for various rotational speeds and fill fractions. This work is hence directed towards a theoretical investigation, based on the Discrete Element Method (DEM), of the motion of granular solids in the radial direction of the horizontal cylinder, to elucidate the relationship between the operating parameters of the rotating cylinder geometry and the physical properties of the granular solid. The operating parameters of the rotating cylinder include the various rotational velocities of the cylinder and the volumetric fill. The physical properties of the granular solids include particle sizes, densities, stiffness coefficients, and coefficients of friction. Further, the work highlights the fundamental basis for the important phenomena of the system, namely: (i) the different modes of solids motion observed in a transverse cross-section of the rotating cylinder for various rotational speeds; (ii) the radial mixing of the granular solid in terms of active layer depth; (iii) the rate coefficient of mixing, as well as the transition behaviour in terms of the bed turnover time and rotational speed; and (iv) the segregation mechanisms resulting from differences in the size and density of particles. The transition behaviour, involving six different modes of motion of the granular solid bed, is quantified in terms of the Froude number, and the results obtained are validated against experimental and theoretical results reported in the literature. The transition from slumping to rolling mode is quantified using the bed turnover time, and a linear relationship is established between the bed turnover time and the inverse of the rotational speed of the cylinder, as predicted by Davidson et al. [2000]. The effects of the rotational speed, fill fraction and coefficient of friction on the dynamic angle of repose are presented and discussed. The variation of active layer depth with respect to fill fraction and rotational speed has been investigated. The results obtained through simulation are compared with the experimental results reported by Van Puyvelde et al. [2000] and Ding et al. [2002]. The theoretical model has been further extended to study the mixing and segregation in the transverse direction for different particle sizes and their size ratios. The effect of fill fraction and rotational speed on the transverse mixing behaviour is presented in the form of a mixing index and mixing kinetics curve. The segregation pattern obtained by the simulation of the granular solid bed with respect to the rotational speed of the cylinder is presented in both graphical and numerical forms.
The segregation behaviour of the granular solid bed with respect to particle size, density and volume fraction of particle size has been investigated. Several important macro parameters characterising segregation such as mixing index, percolation index and segregation index have been derived from the simulation tool based on first principles developed in this work.
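The core DEM loop described, Newton's equations integrated per particle with contact forces tracked at every instant, can be sketched minimally as follows; the 1-D setting, linear spring-dashpot contact law and parameter values are illustrative assumptions (the thesis's own code is written in Visual Fortran 90):

```python
# Minimal 1-D sketch of the DEM time-stepping described above:
# Newton's equation of motion integrated for each particle, with a
# linear spring-dashpot contact force when particles overlap. The
# stiffness, damping and explicit integration scheme are illustrative
# assumptions.
import numpy as np

K, C_DAMP, RADIUS, MASS = 1e4, 5.0, 0.01, 0.05

def step(pos, vel, dt=1e-5):
    """Advance all particle positions and velocities by one time step."""
    force = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):                 # track pairwise contact forces
        for j in range(i + 1, n):
            gap = abs(pos[i] - pos[j]) - 2 * RADIUS
            if gap < 0:                # overlap -> particles in contact
                normal = np.sign(pos[i] - pos[j])
                f = -K * gap * normal - C_DAMP * (vel[i] - vel[j])
                force[i] += f
                force[j] -= f
    vel = vel + force / MASS * dt      # Newton's second law
    pos = pos + vel * dt
    return pos, vel

pos, vel = np.array([0.0, 0.019]), np.array([0.0, 0.0])
for _ in range(200):                   # an overlapping pair pushes apart
    pos, vel = step(pos, vel)
print(pos, vel)
```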
Abstract:
The hazards associated with major accident hazard (MAH) industries are fire, explosion and toxic gas releases. Of these, toxic gas release is the worst, as it has the potential to cause extensive fatalities. Qualitative and quantitative hazard analyses are essential for the identification and quantification of the hazards associated with chemical industries. This research work presents the results of a consequence analysis carried out to assess the damage potential of the hazardous material storages in an industrial area of central Kerala, India. A survey carried out in the major accident hazard (MAH) units in the industrial belt revealed that the major hazardous chemicals stored by the various industrial units are ammonia, chlorine, benzene, naphtha, cyclohexane, cyclohexanone and LPG. The damage potential of the above chemicals is assessed using consequence modelling. Modelling of pool fires for naphtha, cyclohexane, cyclohexanone, benzene and ammonia is carried out using the TNO model. Vapour cloud explosion (VCE) modelling of LPG, cyclohexane and benzene is carried out using the TNT-equivalent model. Boiling liquid expanding vapour explosion (BLEVE) modelling of LPG is also carried out. Dispersion modelling of toxic chemicals such as chlorine, ammonia and benzene is carried out using the ALOHA air quality model. Threat zones for the different hazardous storages are estimated based on the consequence modelling. The distance covered by the threat zone was found to be maximum for chlorine release from a chlor-alkali industry located in the area. The results of the consequence modelling are useful for the estimation of individual risk and societal risk in the above industrial area. Vulnerability assessment is carried out using probit functions for toxic, thermal and pressure loads. Individual and societal risks are also estimated at different locations. Mapping of threat zones due to the different incident outcome cases from the different MAH industries is done with the help of ArcGIS. Fault Tree Analysis (FTA) is an established technique for hazard evaluation. This technique has the advantage of being both qualitative and quantitative, if the probabilities and frequencies of the basic events are known. However, it is often difficult to estimate precisely the failure probability of the components, due to insufficient data or the vague characteristics of the basic event. It has been reported that the availability of failure probability data pertaining to local conditions is surprisingly limited in India. This thesis outlines the generation of failure probability values for the basic events that lead to the release of chlorine from the storage and filling facility of a major chlor-alkali industry located in the area, using expert elicitation and proven fuzzy logic. Sensitivity analysis has been done to evaluate the percentage contribution of each basic event that could lead to chlorine release. Two-dimensional fuzzy fault tree analysis (TDFFTA) has been proposed for balancing the hesitation factor involved in expert elicitation.
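The TNT-equivalency step used in the VCE modelling can be sketched as follows; the yield factor, heats of combustion and example release are illustrative assumptions, and the overpressure corresponding to a given scaled distance would be read from published blast charts:

```python
# Hedged sketch of the TNT-equivalency step in VCE consequence
# modelling: convert a flammable release to an equivalent TNT mass,
# then use the cube-root scaling law for the distance at which a
# given scaled overpressure occurs. The yield factor, heats of
# combustion and the example release are illustrative assumptions.
def tnt_equivalent_mass(mass_fuel_kg, dh_fuel_mj=46.0, dh_tnt_mj=4.68,
                        yield_factor=0.03):
    """Equivalent TNT mass (kg) for a vapour cloud explosion."""
    return yield_factor * mass_fuel_kg * dh_fuel_mj / dh_tnt_mj

def distance_for_scaled_z(w_tnt_kg, z_scaled):
    """Distance (m) at which the Hopkinson scaled distance equals
    z_scaled (m/kg^(1/3)); the overpressure at a given z comes from
    published blast charts, not reproduced here."""
    return z_scaled * w_tnt_kg ** (1.0 / 3.0)

w = tnt_equivalent_mass(10_000)          # a 10 t LPG release (assumed)
print(w, distance_for_scaled_z(w, z_scaled=10.0))
```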