866 results for switched dynamic systems
Abstract:
The Science of Network Service Composition has clearly emerged as one of the grand themes driving many of our research questions in the networking field today [NeXtworking 2003]. This driving force stems from the rise of sophisticated applications and new networking paradigms. By "service composition" we mean that the performance and correctness properties local to the various constituent components of a service can be readily composed into global (end-to-end) properties without re-analyzing any of the constituent components in isolation, or as part of the whole composite service. The set of laws that would govern such composition is what will constitute that new science of composition. The combined heterogeneity and dynamic open nature of network systems makes composition quite challenging, and thus programming network services has been largely inaccessible to the average user. We identify (and outline) a research agenda in which we aim to develop a specification language that is expressive enough to describe different components of a network service, and that will include type hierarchies inspired by type systems in general programming languages that enable the safe composition of software components. We envision this new science of composition to be built upon several theories (e.g., control theory, game theory, network calculus, percolation theory, economics, queuing theory). In essence, different theories may provide different languages by which certain properties of system components can be expressed and composed into larger systems. We then seek to lift these lower-level specifications to a higher level by abstracting away details that are irrelevant for safe composition at the higher level, thus making theories scalable and useful to the average user. In this paper we focus on services built upon an overlay management architecture, and we use control theory and QoS theory as example theories from which we lift up compositional specifications.
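A small sketch of what type-checked composition of components might look like, using Python's structural typing; the component classes, the single delay-bound property, and the additive composition rule are illustrative assumptions, not the specification language or type hierarchy proposed in the paper.

```python
from typing import Protocol

class Component(Protocol):
    """A service component exposing a locally analyzed performance property."""
    def delay_bound_ms(self) -> float: ...

class Link:
    """A network link with a fixed propagation delay bound (illustrative)."""
    def __init__(self, delay_ms: float) -> None:
        self.delay_ms = delay_ms
    def delay_bound_ms(self) -> float:
        return self.delay_ms

class Queue:
    """A queueing stage whose delay bound follows from service time and backlog."""
    def __init__(self, service_ms: float, max_backlog: int) -> None:
        self.service_ms = service_ms
        self.max_backlog = max_backlog
    def delay_bound_ms(self) -> float:
        return self.service_ms * (self.max_backlog + 1)

def end_to_end_delay_bound(*components: Component) -> float:
    """Compose local bounds into a global bound without re-analyzing any part."""
    return sum(c.delay_bound_ms() for c in components)

print(end_to_end_delay_bound(Link(5.0), Queue(2.0, 10), Link(7.5)))  # 34.5 ms
```

Any object satisfying the Component protocol composes; the type check is what guarantees that only components exposing the required property enter the composition.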
Abstract:
The quality of available network connections can often have a large impact on the performance of distributed applications. For example, document transfer applications such as FTP, Gopher and the World Wide Web suffer increased response times as a result of network congestion. For these applications, the document transfer time is directly related to the available bandwidth of the connection. Available bandwidth depends on two things: 1) the underlying capacity of the path from client to server, which is limited by the bottleneck link; and 2) the amount of other traffic competing for links on the path. If measurements of these quantities were available to the application, the current utilization of connections could be calculated. Network utilization could then be used as a basis for selection from a set of alternative connections or servers, thus providing reduced response time. Such a dynamic server selection scheme would be especially important in a mobile computing environment in which the set of available servers is frequently changing. In order to provide these measurements at the application level, we introduce two tools: bprobe, which provides an estimate of the uncongested bandwidth of a path; and cprobe, which gives an estimate of the current congestion along a path. These two measures may be used in combination to provide the application with an estimate of available bandwidth between server and client thereby enabling application-level congestion avoidance. In this paper we discuss the design and implementation of our probe tools, specifically illustrating the techniques used to achieve accuracy and robustness. We present validation studies for both tools which demonstrate their reliability in the face of actual Internet conditions; and we give results of a survey of available bandwidth to a random set of WWW servers as a sample application of our probe technique. We conclude with descriptions of other applications of our measurement tools, several of which are currently under development.
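A minimal sketch of how the two probe estimates combine into an available-bandwidth figure and drive server selection; the function names and numbers are illustrative assumptions, not the bprobe/cprobe implementation.

```python
def available_bandwidth(capacity_mbps: float, competing_mbps: float) -> float:
    """Combine the two probe estimates: the bottleneck capacity (bprobe)
    minus the competing traffic along the path (cprobe), floored at zero."""
    return max(capacity_mbps - competing_mbps, 0.0)

def pick_server(estimates):
    """Select the server whose path currently offers the most available bandwidth.
    `estimates` maps server name -> (bprobe capacity, cprobe competing traffic)."""
    return max(estimates, key=lambda s: available_bandwidth(*estimates[s]))

servers = {"mirror-a": (10.0, 7.5),   # fast link, heavily used: 2.5 Mb/s free
           "mirror-b": (1.5, 0.2)}    # slow link, mostly idle:  1.3 Mb/s free
print(pick_server(servers))           # -> mirror-a
```

Note that the raw capacity alone would also pick mirror-a here, but under heavier congestion the competing-traffic term can reverse the choice, which is the point of measuring both quantities.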
Abstract:
Load balancing is often used to ensure that nodes in a distributed system are equally loaded. In this paper, we show that for real-time systems, load balancing is not desirable. In particular, we propose a new load-profiling strategy that allows the nodes of a distributed system to be unequally loaded. Using load profiling, the system attempts to distribute the load amongst its nodes so as to maximize the chances of finding a node that would satisfy the computational needs of incoming real-time tasks. To that end, we describe and evaluate a distributed load-profiling protocol for dynamically scheduling time-constrained tasks in a loosely-coupled distributed environment. When a task is submitted to a node, the scheduling software tries to schedule the task locally so as to meet its deadline. If that is not feasible, it tries to locate another node where this could be done with a high probability of success, while attempting to maintain an overall load profile for the system. Nodes in the system inform each other about their state using a combination of multicasting and gossiping. The performance of the proposed protocol is evaluated via simulation, and is contrasted to other dynamic scheduling protocols for real-time distributed systems. Based on our findings, we argue that keeping a diverse availability profile and using passive bidding (through gossiping) are both advantageous to distributed scheduling for real-time systems.
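A minimal sketch of the schedule-locally-then-forward behaviour described above, assuming a single capacity dimension and a simple rate-based feasibility test; the tightest-fit choice among gossiped peers is one plausible reading of load profiling, and all names and numbers are illustrative.

```python
import random

class Node:
    """A scheduling node in a loosely coupled real-time cluster (illustrative)."""
    def __init__(self, name: str, capacity: float) -> None:
        self.name = name
        self.capacity = capacity          # work units per second
        self.committed = 0.0              # rate already promised to accepted tasks
        self.view = {}                    # peer name -> last gossiped free capacity

    def free(self) -> float:
        return self.capacity - self.committed

    def can_meet(self, work: float, deadline_s: float) -> bool:
        # Feasible if the task's required rate fits in the remaining capacity.
        return self.free() >= work / deadline_s

    def gossip(self, peers, fanout: int = 2) -> None:
        # Passive bidding: advertise remaining capacity to a random subset of peers.
        for peer in random.sample(peers, min(fanout, len(peers))):
            peer.view[self.name] = self.free()

    def submit(self, work: float, deadline_s: float, peers) -> str:
        if self.can_meet(work, deadline_s):            # try to schedule locally first
            self.committed += work / deadline_s
            return self.name
        # Otherwise pick the peer whose gossiped availability most tightly fits the
        # task, preserving large-slack nodes for demanding future arrivals.
        by_name = {p.name: p for p in peers}
        fitting = [n for n, free in self.view.items() if free >= work / deadline_s]
        if not fitting:
            return "rejected"
        target = by_name[min(fitting, key=lambda n: self.view[n])]
        target.committed += work / deadline_s
        return target.name

nodes = [Node("n1", 5.0), Node("n2", 10.0), Node("n3", 10.0)]
for node in nodes:
    node.gossip([p for p in nodes if p is not node])
print(nodes[0].submit(work=30.0, deadline_s=4.0, peers=nodes[1:]))  # forwarded to a peer
```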
Abstract:
Dynamic service aggregation techniques can exploit skewed access popularity patterns to reduce the costs of building interactive VoD systems. These schemes seek to cluster and merge users into single streams by bridging the temporal skew between them, thus improving server and network utilization. Rate adaptation and secondary content insertion are two such schemes. In this paper, we present and evaluate an optimal scheduling algorithm for inserting secondary content in this scenario. The algorithm runs in polynomial time, and is optimal with respect to the total bandwidth usage over the merging interval. We present constraints on content insertion which make the overall QoS of the delivered stream acceptable, and show how our algorithm can satisfy these constraints. We report simulation results which quantify the excellent gains due to content insertion. We discuss dynamic scenarios with user arrivals and interactions, and show that content insertion reduces the channel bandwidth requirement to almost half. We also discuss differentiated service techniques, such as N-VoD and premium no-advertisement service, and show how our algorithm can support these as well.
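The bandwidth arithmetic behind merging two temporally skewed streams can be sketched as follows, assuming for illustration that the leading stream's playout position stalls while secondary content plays; the paper's optimal algorithm additionally chooses where to insert content so as to minimize total bandwidth over the merging interval subject to QoS constraints, which this sketch does not capture.

```python
def merge_time_rate_adaptation(skew_s: float, speedup: float) -> float:
    """Time for a trailing stream played at `speedup` (> 1.0) times normal
    rate to close a content skew of `skew_s` seconds."""
    return skew_s / (speedup - 1.0)

def secondary_content_needed(skew_s: float) -> float:
    """With content insertion, the leading stream's content position stalls
    while secondary content plays, so the inserted material must total the skew."""
    return skew_s

print(merge_time_rate_adaptation(60.0, 1.05))  # 1200 s to close a 60 s gap at 5% speedup
print(secondary_content_needed(60.0))          # 60 s of secondary content closes it
```

Once the two users are merged, a single stream serves both, which is where the server and network bandwidth savings come from.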
Abstract:
In outsourced database (ODB) systems the database owner publishes its data through a number of remote servers, with the goal of enabling clients at the edge of the network to access and query the data more efficiently. As servers might be untrusted or may be compromised, query authentication becomes an essential component of ODB systems. Existing solutions for this problem concentrate mostly on static scenarios and are based on idealistic properties of certain cryptographic primitives. In this work, we first define a variety of essential and practical cost metrics associated with ODB systems. Then, we analytically evaluate a number of different approaches, in search of a solution that best leverages all metrics. Most importantly, we look at solutions that can handle dynamic scenarios, where owners periodically update the data residing at the servers. Finally, we discuss query freshness, a new dimension in data authentication that has not been explored before. A comprehensive experimental evaluation of the proposed and existing approaches is used to validate the analytical models and verify our claims. Our findings show that the proposed solutions improve performance substantially over existing approaches, for both static and dynamic environments.
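Query authentication in ODB settings is commonly built on authenticated data structures such as Merkle hash trees, where the owner signs the root digest and servers return sibling hashes (a verification object) with each result; the sketch below illustrates that general idea and is not one of the specific constructions evaluated in the paper.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_levels(records):
    """Bottom-up levels of a Merkle tree over hashed records (odd levels padded)."""
    level = [h(r) for r in records]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]                # duplicate last node to pad
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def proof(levels, index):
    """Sibling hashes for one leaf, each tagged with whether it sits on the right."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index ^ 1
        path.append((level[sibling], sibling > index))
        index //= 2
    return path

def verify(record, path, root):
    """Client-side check: recompute the root from the record and its proof."""
    digest = h(record)
    for sibling, sibling_is_right in path:
        digest = h(digest + sibling) if sibling_is_right else h(sibling + digest)
    return digest == root

records = [b"r1", b"r2", b"r3", b"r4", b"r5"]
levels = build_levels(records)
root = levels[-1][0]                                   # the owner would sign this digest
print(verify(b"r3", proof(levels, 2), root))           # True
```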
Abstract:
Lewis proposes "reconceptualization" (p. 1) of how to link the psychology and neurobiology of emotion and cognitive-emotional interactions. His main proposed themes have actually been actively and quantitatively developed in the neural modeling literature for over thirty years. This commentary summarizes some of these themes and points to areas of particularly active research in this area.
Abstract:
Recognition of objects in complex visual scenes is greatly simplified by the ability to segment features belonging to different objects while grouping features belonging to the same object. This feature-binding process can be driven by the local relations between visual contours. The standard method for implementing this process with neural networks uses a temporal code to bind features together. I propose a spatial coding alternative for the dynamic binding of visual contours, and demonstrate the spatial coding method for segmenting an image consisting of three overlapping objects.
Abstract:
This paper demonstrates an optimal control solution to machine set-up change scheduling based on dynamic programming average-cost-per-stage value iteration, as set forth by Caramanis et al. [2] for the 2D case. The difficulty with the optimal approach lies in the explosive computational growth of the resulting solution. A method of reducing the computational complexity is developed using ideas from biology and neural networks. A real-time controller is described that uses a linear-log representation of state space, with neural networks employed to fit cost surfaces.
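A minimal sketch of average-cost-per-stage (relative) value iteration on a toy two-set-up example; the transition and cost numbers are invented for illustration, and the paper's contributions (the 2D case and the neural-network fitting of cost surfaces over a linear-log state space) are not reproduced here.

```python
import numpy as np

def relative_value_iteration(P, c, tol=1e-8, max_iter=10_000):
    """Average-cost-per-stage value iteration (relative VI).
    P[a][s, s'] : transition probabilities under action a
    c[a][s]     : one-stage cost of action a in state s
    Returns (average cost g, differential values h, greedy policy)."""
    n_actions, n_states = len(P), P[0].shape[0]
    h = np.zeros(n_states)
    for _ in range(max_iter):
        q = np.array([c[a] + P[a] @ h for a in range(n_actions)])  # (A, S)
        t = q.min(axis=0)
        g = t[0]                          # reference state pins down the offset
        h_new = t - g
        if np.max(np.abs(h_new - h)) < tol:
            h = h_new
            break
        h = h_new
    policy = np.array([c[a] + P[a] @ h for a in range(n_actions)]).argmin(axis=0)
    return g, h, policy

# Two machine set-ups (states); actions: 0 = keep current set-up, 1 = switch
P = [np.array([[1.0, 0.0], [0.0, 1.0]]),      # keep: stay in the current set-up
     np.array([[0.0, 1.0], [1.0, 0.0]])]      # switch: move to the other set-up
c = [np.array([2.0, 5.0]),                    # per-stage cost of running set-up 0 / 1
     np.array([3.0, 1.0])]                    # cost of switching out of set-up 0 / 1
print(relative_value_iteration(P, c))         # optimal average cost g = 2.0
```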
Abstract:
A dynamic distributed model is presented that reproduces the dynamics of a wide range of varied battle scenarios with a general and abstract representation. The model illustrates the rich dynamic behavior that can be achieved from a simple generic model.
Abstract:
Over the last 30 years, Fuzzy Logic (FL) has emerged as a method that either complements or challenges stochastic methods, the traditional approach to modelling uncertainty. The circumstances under which FL or stochastic methods should be used remain disputed, however, because their areas of application overlap and opinions differ as to when each method is appropriate; practically relevant case studies comparing the two are lacking. This work compares stochastic and FL methods for the assessment of spare capacity, using pharmaceutical high purity water (HPW) utility systems as the example. The goal of this study was to find the most appropriate method for modelling uncertainty in industrial-scale HPW systems. The results provide evidence that stochastic methods are superior to FL methods for simulating uncertainty in chemical plant utilities, including HPW systems, in the typical case where extreme events (for example, peaks in demand) or day-to-day variation, rather than average values, are of interest. Average production output or other statistical measures may, for instance, be of interest in the assessment of workshops. Furthermore, the results indicate that a stochastic model should be used only if a deterministic simulation shows it to be necessary. Consequently, this thesis concludes that either deterministic or stochastic methods should be used to simulate uncertainty in chemical plant utility systems, and by extension some process systems, because extreme events and the modelling of day-to-day variation are important in capacity extension projects. Other reasons why stochastic HPW models are preferred to FL HPW models include:
1. The computer code for stochastic models is typically less complex than that for FL models, reducing code maintenance and validation effort.
2. In many respects FL models resemble deterministic models, so the need for an FL model over a deterministic model is questionable for industrial-scale HPW systems as presented here (and other similar systems), since the latter require simpler models.
3. An FL model may be difficult to "sell" to an end-user, as its results represent "approximate reasoning", a definition of which is, however, lacking.
4. Stochastic models can be applied to other systems with relatively minor modifications, whereas FL models may not: the stochastic HPW model could be used to model municipal drinking water systems, whereas the FL HPW model could not. This is because the FL and stochastic modelling philosophies for an HPW system are fundamentally different. The stochastic model treats schedule and volume uncertainties as random phenomena described by statistical distributions based on estimated or historical data, whereas the FL model simulates schedule uncertainties from estimated operator behaviour (e.g., operator tiredness and working schedules), and in a municipal drinking water distribution system the notion of "operator" breaks down.
5. Stochastic methods can account for uncertainties that are difficult to model with FL: the FL HPW model does not account for dispensed-volume uncertainty, as there appears to be no reasonable way to capture it with FL, whereas the stochastic model includes volume uncertainty.
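The claim that stochastic simulation is the natural tool when peaks in demand, rather than averages, matter can be illustrated with a small Monte Carlo sketch; the demand distribution and capacity figures below are invented for illustration and are not taken from the HPW case study.

```python
import random

def peak_exceedance_probability(capacity_lph: float, n_days: int = 10_000,
                                hours_per_day: int = 24, seed: int = 1) -> float:
    """Monte Carlo estimate of the probability that hourly demand exceeds
    plant capacity (litres per hour) at least once on a given day."""
    rng = random.Random(seed)
    exceed_days = 0
    for _ in range(n_days):
        # Hourly demand: routine draws plus an occasional batch-cleaning peak.
        hourly = [rng.gauss(400, 60) + (rng.random() < 0.05) * rng.gauss(500, 100)
                  for _ in range(hours_per_day)]
        if max(hourly) > capacity_lph:
            exceed_days += 1
    return exceed_days / n_days

# A deterministic model using the mean demand (about 425 L/h) would never flag
# a shortfall against an 800 L/h plant; the stochastic model quantifies it.
print(peak_exceedance_probability(capacity_lph=800.0))
```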
Abstract:
The composition of equine milk differs considerably from that of the milk of the principal dairying species, i.e., the cow, buffalo, goat and sheep. Because equine milk resembles human milk in many respects and is claimed to have special therapeutic properties, it is becoming increasingly popular in Western Europe, where it is produced on large farms in several countries. Equine milk is considered to be highly digestible, rich in essential nutrients and to possess an optimum whey protein:casein ratio, making it very suitable as a substitute for bovine milk in paediatric dietetics. There is some scientific basis for the special nutritional and health-giving properties of equine milk, but this study provides a comprehensive analysis of the composition and physico-chemical properties of equine milk, which is required to fully exploit its potential in human nutrition. Quantification and distribution of the nitrogenous components and principal salts of equine milk are reported. The effects of the high concentration of ionic calcium, large casein micelles (~ 260 nm), low protein, lack of a sulphydryl group in equine β-lactoglobulin and a very low level of κ-casein on the physico-chemical properties of equine milk are reported. This thesis provides an insight into the stability of equine casein micelles to heat, ethanol, high pressure, rennet or acid. Differences in rennet- and acid-induced coagulation between equine and bovine milk are attributed not only to the low casein content of equine milk but also to differences in the mechanism by which the respective micelles are stabilized. It has been reported that β-casein plays a role in the stabilization of equine casein micelles, and proteomic techniques support this view. In this study, equine κ-casein appeared to be resistant to hydrolysis by calf chymosin but equine β-casein was readily hydrolysed. Resolution of equine milk proteins by urea-PAGE showed the multi-phosphorylated isoforms of equine αs- and β-caseins, and capillary zone electrophoresis showed 3 to 7 phosphorylated residues in equine β-casein. In vitro digestion of equine β-casein by pepsin and Corolase PP™ did not produce casomorphins BCM-5 or BCM-7, believed to be harmful to human health. Electron microscopy provided very clear, detailed images of equine casein micelles in their native state and when renneted or acidified. Equine milk formed flocs rather than a gel when renneted or acidified, which is supported by dynamic oscillatory analysis. The results presented in this thesis will assist in the development of new products from equine milk for human consumption, which will retain some of its unique compositional and health-giving properties.
Abstract:
The desire to obtain competitive advantage is a motivator for implementing Enterprise Resource Planning (ERP) Systems (Adam & O’Doherty, 2000). However, while it is accepted that Information Technology (IT) in general may contribute to the improvement of organisational performance (Melville, Kraemer, & Gurbaxani, 2004), the nature and extent of that contribution is poorly understood (Jacobs & Bendoly, 2003; Ravichandran & Lertwongsatien, 2005). Accordingly, Henderson and Venkatraman (1993) assert that it is the application of business and IT capabilities to develop and leverage a firm’s IT resources for organisational transformation, rather than the acquired technological functionality, that secures competitive advantage for firms. Application of the Resource Based View of the firm (Wernerfelt, 1984) and Dynamic Capabilities Theory (DCT) (Teece and Pisano (1998) in particular) may yield insights into whether or not the use of Enterprise Systems enhances organisations’ core capabilities and thereby obtains competitive advantage, sustainable or otherwise (Melville et al., 2004). An operational definition of Core Capabilities that is independent of the construct of Sustained Competitive Advantage is formulated. This Study proposes and utilises an applied Dynamic Capabilities framework to facilitate the investigation of the role of Enterprise Systems. The objective of this research study is to investigate the role of Enterprise Systems in the Core Dynamic Capabilities of Asset Lifecycle Management. The Study explores the activities of Asset Lifecycle Management, the Core Dynamic Capabilities inherent in Asset Lifecycle Management and the footprint of Enterprise Systems on those Dynamic Capabilities. Additionally, the study explains the mechanisms by which Enterprise Systems sustain the Exploitability and the Renewability of those Core Dynamic Capabilities. The study finds that Enterprise Systems contribute directly to the Value, Exploitability and Renewability of Core Dynamic Capabilities and indirectly to their Inimitability and Non-substitutability. The study concludes by presenting an applied Dynamic Capabilities framework, which integrates Alter (1992)’s definition of Information Systems with Teece and Pisano (1998)’s model of Dynamic Capabilities to provide a robust diagnostic for determining the sustained value-generating contributions of Enterprise Systems. These frameworks are used in the conclusions to frame the findings of the study. The conclusions go on to assert that these frameworks are free-standing and analytically generalisable, per Siggelkow (2007) and Yin (2003).
Abstract:
The present study aimed to investigate interactions of components in high-solids systems during storage. The systems included (i) lactose–maltodextrin (MD) with various dextrose equivalent (DE) values at different mixing ratios, (ii) whey protein isolate (WPI)–oil [olive oil (OO) or sunflower oil (SO)] at a 75:25 ratio, and (iii) WPI–oil–{glucose (G)–fructose (F) 1:1 syrup [70% (w/w) total solids]} at a component ratio of 45:15:40. Crystallization of lactose was delayed and increasingly inhibited with increasing MD contents and higher DE values (small molecular size or low molecular weight), although all systems showed similar glass transition temperatures at each water activity (a_w). The water sorption isotherms of non-crystalline lactose and lactose–MD (0.11 to 0.76 a_w) could be derived from the sum of sorbed water contents of individual amorphous components. The GAB equation was fitted to data of all non-crystalline systems. The protein–oil and protein–oil–sugar materials showed maximum protein oxidation and disulfide bonding at 2 weeks of storage at 20 and 40°C. The WPI–OO showed denaturation and preaggregation of proteins during storage at both temperatures. The presence of G–F in WPI–oil increased T_onset and T_peak of protein aggregation, and oxidative damage of the protein during storage, especially in systems with a higher level of unsaturated fatty acids. Lipid oxidation and glycation products in the systems containing sugar promoted oxidation of proteins, increased changes in protein conformation and aggregation of proteins, and resulted in insolubility of solids or increased hydrophobicity concomitantly with hardening of structure, covalent crosslinking of proteins, and formation of stable polymerized solids, especially after storage at 40°C. We found protein hydration transitions preceding denaturation transitions in all high-protein systems, and also the glass transition of confined water in protein systems, using dynamic mechanical analysis.
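For reference, the GAB (Guggenheim–Anderson–de Boer) isotherm fitted to the non-crystalline systems has the standard form below, where m is the equilibrium water content, m_0 the monolayer value, and C and K are fitted constants (symbols follow common usage rather than necessarily the thesis's notation):

```latex
m = \frac{m_0\, C\, K\, a_w}{\left(1 - K a_w\right)\left(1 - K a_w + C K a_w\right)}
```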
Abstract:
High volumes of data traffic, along with bandwidth-hungry applications such as cloud computing and video on demand, are driving core optical communication links ever closer to their maximum capacity. The research community has clearly identified that standard single-mode fibre is approaching its nonlinear Shannon limit [1,2]. It is in this context that the work on modulation formats, contained in Chapter 3 of this thesis, was undertaken. The work investigates proposed energy-efficient four-dimensional modulation formats. It begins by studying a new visualisation technique for four-dimensional modulation formats, akin to constellation diagrams, and then carries out one of the first implementations of one such format, polarisation-switched quadrature phase-shift keying (PS-QPSK). This thesis also studies two potential next-generation fibres: few-mode and hollow-core photonic band-gap fibre. Chapter 4 studies ways to experimentally quantify the nonlinearities in few-mode fibre and to assess the potential benefits and limitations of such fibres. It carries out detailed experiments to measure the effects of stimulated Brillouin scattering, self-phase modulation and four-wave mixing, and compares the results to numerical models, along with capacity-limit calculations. Chapter 5 investigates hollow-core photonic band-gap fibre, which is predicted to have a low-loss minimum at a wavelength of 2 μm. Benefiting from this potential low-loss window requires the development of telecoms-grade subsystems and components, and the chapter outlines the development and characterisation of some of these components. The world's first wavelength-division-multiplexed (WDM) subsystem directly implemented at 2 μm is presented, along with WDM transmission over hollow-core photonic band-gap fibre at 2 μm. References: [1] P. P. Mitra and J. B. Stark, Nature, 411, 1027-1030, 2001; [2] A. D. Ellis et al., JLT, 28, 423-433, 2010.
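For background, the linear Shannon bound on spectral efficiency is recalled below; the nonlinear Shannon limit of [1,2] arises because Kerr nonlinearity in the fibre prevents the effective SNR from being raised indefinitely with launch power, so achievable spectral efficiency saturates rather than growing without bound.

```latex
\mathrm{SE} = \log_2\left(1 + \mathrm{SNR}\right) \quad \text{bit/s/Hz per polarization}
```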
Effectuation and its implications for socio-technical design science research in information systems
Abstract:
We study the implications of the effectuation concept for socio-technical artifact design as part of the design science research (DSR) process in information systems (IS). Effectuation logic is the opposite of causal logic: effectuation does not focus on causes to achieve a particular effect, but on the possibilities that can be achieved with extant means and resources. Viewing socio-technical IS DSR through an effectuation lens highlights the possibility of designing the future even without set goals. We suggest that effectuation may be a useful perspective for design in dynamic social contexts, leading to a more differentiated view on the instantiation of mid-range artifacts for specific local application contexts. Design science researchers can draw on this paper’s conclusions to view their DSR projects through a fresh lens and to reexamine their research design and execution. The paper also offers avenues for future research to develop more concrete application possibilities of effectuation in socio-technical IS DSR and, thus, enrich the discourse.