800 results for Redundancy allocation
Abstract:
As more reliance is placed on computing and networking systems, the need for redundancy increases. The Common Address Redundancy Protocol (CARP) and OpenBSD’s pfsync utility provide a means to implement redundant routers and firewalls. This paper details how CARP and pfsync work together to provide this redundancy and explores the performance one can expect from these open-source tools. Two experiments were run: one showing the relationship between firewall state creation and state-synchronization traffic, and the other showing how TCP sessions are transparently maintained in the event of a router failure. Discussion of these experiments, along with background information, gives an overview of how OpenBSD, CARP, and pfsync can provide redundant routers and firewalls for today’s Internet.
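The failover setup described in this abstract can be sketched with OpenBSD's standard ifconfig commands. The interface names, virtual IP, password and advskew values below are illustrative assumptions, not the paper's actual testbed settings:

```shell
# Primary firewall: join CARP group (vhid 1) and advertise the shared IP.
# Interface names, address and password are illustrative.
ifconfig carp0 create
ifconfig carp0 vhid 1 pass examplepass advskew 0 10.0.0.1 netmask 255.255.255.0

# Synchronize pf firewall states to the backup over a dedicated link,
# so established TCP sessions survive a failover.
ifconfig pfsync0 syncdev em1
ifconfig pfsync0 up

# Backup firewall: identical vhid and password, higher advskew so it
# only takes over the shared IP when the primary stops advertising.
ifconfig carp0 create
ifconfig carp0 vhid 1 pass examplepass advskew 100 10.0.0.1 netmask 255.255.255.0
```

A higher advskew makes a host advertise less aggressively, so it loses the CARP master election while the primary is alive; pfsync keeps its state table warm so takeover is transparent to established connections.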
Abstract:
Introducing nitrogen-fixing tree species in fast-growing eucalypt plantations has the potential to improve soil nitrogen availability compared with eucalypt monocultures. Whether or not the changes in soil nutrient status and stand structure will lead to mixtures that out-yield monocultures depends on the balance between positive interactions and the negative effects of interspecific competition, and on their effect on carbon (C) uptake and partitioning. We used a C budget approach to quantify growth, C uptake and C partitioning in monocultures of Eucalyptus grandis (W. Hill ex Maiden) and Acacia mangium (Willd.) (treatments E100 and A100, respectively), and in a mixture at the same stocking density with the two species in a 1:1 proportion (treatment MS). Allometric relationships established over the whole rotation, and measurements of soil CO2 efflux and aboveground litterfall for ages 4-6 years after planting, were used to estimate aboveground net primary production (ANPP), total belowground carbon flux (TBCF) and gross primary production (GPP). We tested the hypotheses that (i) species differences for wood production between E. grandis and A. mangium monocultures were partly explained by different C partitioning strategies, and (ii) the observed lower wood production in the mixture compared with eucalypt monoculture was mostly explained by a lower partitioning aboveground. At the end of the rotation, total aboveground biomass was lowest in A100 (10.5 kg DM m⁻²), intermediate in MS (12.2 kg DM m⁻²) and highest in E100 (13.9 kg DM m⁻²). The results did not support our first hypothesis of contrasting C partitioning strategies between E. grandis and A. mangium monocultures: the 21% lower growth (ΔB_w) in A100 compared with E100 was almost entirely explained by a 23% lower GPP, with little or no species difference in ratios such as TBCF/GPP, ANPP/TBCF, ΔB_w/ANPP and ΔB_w/GPP. In contrast, the 28% lower ΔB_w in MS than in E100 was explained both by a 15% lower GPP and by a 15% lower fraction of GPP allocated to wood growth, thus partially supporting our second hypothesis: mixing the two species led to shifts in C allocation from above- to belowground, and from growth to litter production, for both species.
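The carbon-budget bookkeeping used in this abstract can be summarized with two mass-balance identities. This is a simplified sketch with assumed function and variable names (storage-change terms default to zero), not the study's actual computation:

```python
def total_belowground_carbon_flux(soil_co2_efflux, litterfall_c, delta_storage=0.0):
    """Mass balance: CO2 leaving the soil must have entered either as
    aboveground litterfall or as belowground C allocation (TBCF),
    corrected for any change in root/litter/soil C storage.
    Units: e.g. g C m^-2 yr^-1 (variable names are assumptions)."""
    return soil_co2_efflux - litterfall_c + delta_storage

def gross_primary_production(anpp, tbcf, aboveground_resp):
    """Simplified identity: GPP = ANPP + TBCF + aboveground
    autotrophic respiration."""
    return anpp + tbcf + aboveground_resp
```

With these identities, ratios such as TBCF/GPP and ΔB_w/GPP follow directly once ANPP and soil CO2 efflux are measured.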
Abstract:
Background: The in vitro production (IVP) of embryos by in vitro fertilization or cloning procedures has been known to cause epigenetic changes in the conceptus that in turn are associated with abnormalities in pre- and postnatal development. Handmade cloning (HMC) procedures and the culture of zona-free embryos in individual microwells provide excellent tools for studies in developmental biology, since embryo development and cell allocation patterns can be evaluated under a wide range of embryo reconstruction arrangements and in in vitro embryo culture conditions. As disturbances in embryonic cell allocation after in vitro embryo manipulations and unusual in vivo conditions during the first third of pregnancy appear to be associated with large offspring, embryo aggregation procedures may allow a compensation for epigenetic defects between aggregated embryos or even may influence more favorable cell allocation in embryonic lineages, favoring subsequent development. Thus, the aim of this study was to evaluate in vitro embryo developmental potential and the pattern of cell allocation in blastocysts developed after the aggregation of handmade cloned embryos produced using syngeneic wild type and/or transgenic somatic cells. Materials, Methods & Results: In vitro-matured bovine cumulus-oocyte complexes (COC) were manually bisected after cumulus and zona pellucida removal; then, two enucleated hemi-oocytes were paired and fused with either a wild type (WT) or a GFP-expressing (GFP) fetal skin cell at the 11th and 19th passages, respectively. 
Following chemical activation, reconstructed cloned embryos and zona-free parthenote embryos were in vitro-cultured in microwells for 7 days, either individually (1 x 100%) or after the aggregation of two structures (2 x 100%) per microwell, as follows: (G1) one WT cloned embryo; (G2) two aggregated WT embryos; (G3) one GFP cloned embryo; (G4) two aggregated GFP embryos; (G5) aggregation of a WT embryo and a GFP embryo; (G6) one parthenote embryo; or (G7) two aggregated parthenote embryos. Fusion (clones), cleavage (Day 2) and blastocyst (Day 7) rates, and embryonic cell allocation, were compared by the χ² or Fisher exact tests. Total cell number (TCN) in blastocysts was analyzed by Student's t-test (P < 0.05). Fusion and cleavage rates, and cell allocation, were similar between groups. On a per-WOW basis, development to the blastocyst stage was similar between groups, except for lower rates of development seen in G3. However, when based on the number of embryos per group (one or two), blastocyst development was higher in G1 than in all other groups, which were similar to one another. Cloned GFP embryos had lower in vitro development to the blastocyst stage than WT embryos, which had more TCN than parthenote or aggregated chimeric WT/GFP embryos. Aggregated GFP embryos had fewer cells than the other embryo groups. Discussion: The in vitro development of GFP cloned embryos was lower than that of WT embryos, with no effects on cell allocation in the resulting blastocysts. Differences in blastocyst rate between groups were likely due to lower GFP-expressing cell viability, as GFP donor cells were at high population cell doublings when used for cloning. On a per-embryo basis, embryo aggregation on Day 1 resulted in blastocyst development similar to non-aggregated embryos on Day 7, with no differences in cell proportion between groups.
The use of GFP-expressing cells proved to be a promising strategy for the study of cell allocation during embryo development, which may assist in elucidating the mechanisms behind abnormalities after in vitro embryo manipulations, leading to improved protocols for the in vitro production (IVP) of bovine embryos.
Abstract:
This paper discusses the power allocation problem with a fixed rate constraint in multi-carrier code division multiple access (MC-CDMA) networks, which is solved from a game-theoretic perspective by the use of an iterative water-filling algorithm (IWFA). The problem is analyzed under various interference density configurations, and its reliability is studied in terms of solution existence and uniqueness. Moreover, numerical results reveal the approach's shortcomings; thus a new method combining swarm intelligence and IWFA is proposed to make game-theoretic approaches practicable in realistic MC-CDMA system scenarios. The contribution of this paper is twofold: (i) providing a complete analysis of the existence and uniqueness of the game solution, from simple to more realistic and complex interference scenarios; and (ii) proposing a hybrid power allocation optimization method combining swarm intelligence, game theory and IWFA. To corroborate the effectiveness of the proposed method, an outage probability analysis in realistic interference scenarios and a complexity comparison with the classical IWFA are presented. (C) 2011 Elsevier B.V. All rights reserved.
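The iterative water-filling idea can be sketched as follows: each user repeatedly solves a single-user water-filling problem, treating the other users' current transmit powers as additional interference. This is a minimal illustration with assumed variable names and a generic cross-gain matrix, not the paper's hybrid swarm-intelligence method:

```python
import numpy as np

def waterfill(gains, noise, p_total):
    """Single-user water-filling: p_i = max(0, mu - noise_i / gain_i),
    with the water level mu found by bisection so that sum(p) = p_total."""
    inv = noise / gains
    lo, hi = 0.0, inv.max() + p_total
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - inv).sum() > p_total:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - inv)

def iwfa(gains, noise, cross, p_total, iters=50):
    """Iterative water-filling: users take turns water-filling against
    noise plus the interference from the other users' current powers.
    gains: (users, carriers); cross[u, v]: gain of user v's signal at u."""
    n_users, n_carriers = gains.shape
    P = np.zeros((n_users, n_carriers))
    for _ in range(iters):
        for u in range(n_users):
            interference = noise + sum(cross[u, v] * P[v]
                                       for v in range(n_users) if v != u)
            P[u] = waterfill(gains[u], interference, p_total[u])
    return P
```

In benign interference scenarios this iteration converges to the Nash equilibrium of the power game; the paper's point is precisely that in denser interference configurations plain IWFA can fail, motivating the hybrid method.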
Abstract:
Understanding the underlying mechanisms that account for the impact of potassium (K) fertilization and its replacement by sodium (Na) on tree growth is key to improving the management of forest plantations that are expanding over weathered tropical soils with low amounts of exchangeable bases. A complete randomized block design was planted with Eucalyptus grandis (W. Hill ex Maiden) to quantify growth, carbon uptake and carbon partitioning using a carbon budget approach. A combination of approaches, including the establishment of allometric relationships over the whole rotation and measurements of soil CO2 efflux and aboveground litterfall at the end of the rotation, was used to estimate aboveground net production (ANPP), total belowground carbon flux and gross primary production (GPP). The stable carbon isotope composition (δ¹³C) of stem wood α-cellulose produced every year was used as a proxy for stomatal limitation of photosynthesis. Potassium fertilization increased GPP and decreased the fraction of carbon allocated belowground. Aboveground net production was strongly enhanced, and because leaf lifespan increased, leaf biomass was enhanced without any change in leaf production, and wood production (P_W) was dramatically increased. Sodium application decreased the fraction of carbon allocated belowground in a similar way, and enhanced GPP, ANPP and P_W, but to a lesser extent compared with K fertilization. Neither K nor Na affected the δ¹³C of stem wood α-cellulose, suggesting that water-use efficiency was the same among the treatments and that the inferred increase in leaf photosynthesis was not only related to a higher stomatal conductance. We concluded that the response of P_W to K fertilization and Na addition resulted from drastic changes in carbon allocation.
Abstract:
A prevalent claim is that we are in a knowledge economy. When we talk about the knowledge economy, we generally mean the concept of a “knowledge-based economy”, indicating the use of knowledge and technologies to produce economic benefits. Hence knowledge is both a tool and a raw material (people’s skill) for producing some kind of product or service. In this kind of environment, economic organization is undergoing several changes. For example, authority relations are less important, legal and ownership-based definitions of the boundaries of the firm are becoming irrelevant, and there are only few constraints on the set of coordination mechanisms. Hence what characterises a knowledge economy is the growing importance of human capital in productive processes (Foss, 2005) and the increasing knowledge intensity of jobs (Hodgson, 1999). Economic processes are also highly intertwined with social processes: they are likely to be informal and reciprocal rather than formal and negotiated. Another important point is the problem of the division of labor: as economic activity becomes mainly intellectual and requires the integration of specific and idiosyncratic skills, the task of dividing the job and assigning it to the most appropriate individuals becomes arduous, a “supervisory problem” (Hodgson, 1999) emerges, and traditional hierarchical control may become increasingly ineffective. Not only does the specificity of know-how make it awkward to monitor the execution of tasks; more importantly, top-down integration of skills may be difficult because ‘the nominal supervisors will not know the best way of doing the job – or even the precise purpose of the specialist job itself – and the worker will know better’ (Hodgson, 1999). We therefore expect that the organization of the economic activity of specialists should be, at least partially, self-organized. The aim of this thesis is to bridge studies from computer science, and in particular from Peer-to-Peer (P2P) Networks, to organization theories.
We think that the P2P paradigm fits well with organization problems related to all those situations in which a central authority is not possible. We believe that P2P networks show a number of characteristics similar to firms working in a knowledge-based economy, and hence that the methodology used for studying P2P networks can be applied to organization studies. There are three main characteristics we think P2P networks have in common with firms involved in the knowledge economy: - Decentralization: in a pure P2P system every peer is an equal participant; there is no central authority governing the actions of the single peers; - Cost of ownership: P2P computing implies shared ownership, reducing the cost of owning the systems and the content, and the cost of maintaining them; - Self-organization: the process in a system leading to the emergence of global order within the system without the presence of another system dictating this order. These characteristics are also present in the kind of firm that we try to address, and that’s why we have shifted the techniques we adopted for studies in computer science (Marcozzi et al., 2005; Hales et al., 2007 [39]) to management science.
Abstract:
This work presents exact, hybrid algorithms for mixed resource Allocation and Scheduling problems; in general terms, those consist in assigning finite-capacity resources over time to a set of precedence-connected activities. The proposed methods have broad applicability, but are mainly motivated by applications in the field of Embedded System Design. In particular, high-performance embedded computing has recently witnessed the shift from single-CPU platforms with application-specific accelerators to programmable Multi-Processor Systems-on-Chip (MPSoCs). Those allow higher flexibility, real-time performance and low energy consumption, but the programmer must be able to effectively exploit the platform parallelism. This raises interest in the development of algorithmic techniques to be embedded in CAD tools; in particular, given a specific application and platform, the objective is to perform optimal allocation of hardware resources and to compute an execution schedule. In this regard, since embedded systems tend to run the same set of applications for their entire lifetime, off-line, exact optimization approaches are particularly appealing. Quite surprisingly, the use of exact algorithms has not been well investigated so far; this is in part motivated by the complexity of integrated allocation and scheduling, which sets tough challenges for "pure" combinatorial methods. The use of hybrid CP/OR approaches presents the opportunity to exploit the mutual advantages of different methods, while compensating for their weaknesses. In this work, we first consider an Allocation and Scheduling problem over the Cell BE processor by Sony, IBM and Toshiba; we propose three different solution methods, leveraging decomposition, cut generation and heuristic-guided search.
Next, we face Allocation and Scheduling of so-called Conditional Task Graphs, explicitly accounting for branches whose outcome is not known at design time; we extend the CP scheduling framework to effectively deal with the introduced stochastic elements. Finally, we address Allocation and Scheduling with uncertain, bounded execution times via conflict-based tree search; we introduce a simple and flexible time model to take duration variability into account and provide an efficient conflict detection method. The proposed approaches achieve good results on practical-size problems, thus demonstrating that the use of exact approaches for system design is feasible. Furthermore, the developed techniques bring significant contributions to combinatorial optimization methods.
Abstract:
Carbon fluxes and allocation patterns, and their relationship with the main environmental and physiological parameters, were studied in an apple orchard for one year (2010). I combined three widely used methods: eddy covariance, soil respiration and biometric measurements, and I applied a measurement protocol allowing a cross-check between C fluxes estimated using different methods. I attributed NPP components to standing biomass increment, detritus cycle and lateral export. The influence of environmental and physiological parameters on NEE, GPP and Reco was analyzed with a multiple regression model approach. I found that both NEP and GPP of the apple orchard were of similar magnitude to those of forests growing in similar climate conditions, while large differences occurred in the allocation pattern and in the fate of produced biomass. Apple production accounted for 49% of annual NPP, organic material (leaves, fine root litter, pruned wood and early fruit drop) contributing to the detritus cycle was 46%, and only 5% went to standing biomass increment. The carbon use efficiency (CUE), with an annual average of 0.68 ± 0.10, was higher than the previously suggested constant values of 0.47-0.50. Light and leaf area index had the strongest influence on both NEE and GPP. On a diurnal basis, NEE and GPP reached their peak approximately at noon, while they appeared to be limited by high values of VPD and air temperature in the afternoon. The proposed models can be used to explain and simulate current relations between carbon fluxes and environmental parameters at daily and yearly time scales. On average, the annual NEP balanced the carbon annually exported with the harvested apples. These data support the hypothesis of a minimal or null impact of the apple orchard ecosystem on net C emission to the atmosphere.
Abstract:
Asset allocation choices are a recurring problem for every investor, who is continually engaged in combining different asset classes to arrive at an investment consistent with his or her preferences. The need to support asset managers in carrying out their duties has fueled, over time, a vast literature proposing numerous portfolio-construction strategies and models. This thesis attempts to provide a review of some innovative forecasting models and some strategies in the field of tactical asset allocation, and then to evaluate their practical implications. First, we verify the existence of possible relationships between the dynamics of certain macroeconomic variables and the financial markets. The aim is to identify an econometric model capable of guiding managers' strategies in the construction of their investment portfolios. The analysis considers the American market during a period characterized by rapid economic transformations and high stock price volatility. Second, the validity of momentum and contrarian trading strategies is examined in the futures markets, in particular those of the Eurozone, which lend themselves well to their implementation thanks to the absence of constraints on shorting and to reduced transaction costs. The investigation shows that both anomalies appear with a character of stability. The anomalous returns persist even when traditional asset-pricing models are used, such as the CAPM, the Fama-French model and the Carhart model. Finally, using the EGARCH-M approach, forecasts are formulated for the return volatility of the stocks belonging to the Dow Jones. These are then used as inputs to determine the views to be fed into the Black-Litterman model. 
The results obtained show, for different values of the scalar tau, average excess returns of the new combined vector that exceed the vector of market-equilibrium excess returns, albeit with higher levels of risk.
Abstract:
This work presents exact algorithms for Resource Allocation and Cyclic Scheduling Problems (RA&CSPs). Cyclic Scheduling Problems arise in a number of application areas, such as hoist scheduling, mass production, compiler design (implementing scheduling loops on parallel architectures), software pipelining, and embedded system design. The RA&CS problem concerns time and resource assignment for a set of activities, to be indefinitely repeated, subject to precedence and resource capacity constraints. In this work we present two constraint programming frameworks addressing two different types of cyclic problems. First, we consider the disjunctive RA&CSP, where the allocation problem involves unary resources. Instances are described through the Synchronous Data-flow (SDF) Model of Computation. The key problem of finding a maximum-throughput allocation and scheduling of Synchronous Data-Flow graphs onto a multi-core architecture is NP-hard and has traditionally been solved by means of heuristic (incomplete) algorithms. We propose an exact (complete) algorithm for the computation of a maximum-throughput mapping of applications, specified as SDF graphs, onto multi-core architectures. Results show that the approach can handle realistic instances in terms of size and complexity. Next, we tackle the Cyclic Resource-Constrained Scheduling Problem (CRCSP). We propose a Constraint Programming approach based on modular arithmetic: in particular, we introduce a modular precedence constraint and a global cumulative constraint, along with their filtering algorithms. Many traditional approaches to cyclic scheduling operate by fixing the period value and then solving a linear problem in a generate-and-test fashion. Conversely, our technique is based on a non-linear model and tackles the problem as a whole: the period value is inferred from the scheduling decisions.
The proposed approaches have been tested on a number of non-trivial synthetic instances and on a set of realistic industrial instances, achieving good results on practical-size problems.
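The modular precedence idea mentioned above can be illustrated with a small feasibility check (a sketch under assumed notation, not the thesis' actual filtering algorithm): in a cyclic schedule with period λ, an activity j that depends on i with iteration distance k must satisfy start_j + k·λ ≥ start_i + dur_i, i.e. j in iteration n may not begin before i from iteration n-k has finished.

```python
def modular_precedence_ok(start_i, dur_i, start_j, period, k=0):
    """Cyclic precedence i -> j with iteration distance k:
    start_j + k * period >= start_i + dur_i."""
    return start_j + k * period >= start_i + dur_i

def cyclic_schedule_feasible(starts, durs, edges, period):
    """Check every modular precedence; edges is a list of (i, j, k)
    triples over activity indices."""
    return all(modular_precedence_ok(starts[i], durs[i], starts[j], period, k)
               for i, j, k in edges)
```

Note how the period enters the constraint itself: shrinking λ (raising throughput) can turn a feasible set of start times infeasible, which is why the thesis infers the period from the scheduling decisions instead of fixing it in advance.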
Abstract:
In my doctoral thesis I investigated the evolution of demographic traits within the eusocial Hymenoptera. In the social bees, wasps and ants, eusociality has a unique effect on life span evolution, as female larvae with the same genetic background can develop through phenotypic plasticity into a queen or a worker with vastly diverging life-history traits. Ant queens are among the longest-lived insects, while workers in most species live only a fraction of the queen’s life span. The average colony size of a species is positively correlated with social complexity, division of labor and diverging morphological female phenotypes, all of which also affect life span. Therefore, the demographic traits of interest in this thesis were life span and colony size. To understand the evolution of worker life span I applied a trade-off model that includes both hierarchical levels important in eusocial systems, namely the colony level and the individual level. I showed that worker life span may be an adaptive trait on the colony level, optimizing resource allocation and therefore fitness in response to different levels of extrinsic mortality. A shorter worker life span, as a result of reduced resource investment under high levels of extrinsic mortality, increases colony fitness. In a further study I showed that Lasius niger colonies produce different aging phenotypes throughout colony development. Smaller colonies, which apply a different foraging strategy than larger colonies, produced smaller workers, which in turn live longer than the larger workers produced in larger colonies. With the switch to cooperative foraging in growing colonies, individual workers become less important for the colony because of their increasing redundancy. Alternatively, a trade-off between growth and life span may explain the results found in this study.
A further comparative analysis of the effect of colony size on life span showed a correlation between queen and worker life span when colony size is taken into account. While neither worker nor queen life span was associated with colony size, the differences between queen and worker life span increase with larger average colony sizes across all eusocial Hymenoptera. As colony size affects both queen and worker life span, I aimed to understand which factors lead to the small colony sizes displayed by some ant species. I therefore analyzed per-capita productivity at different colony sizes in eight cavity-dwelling ant species. Most colonies of the study species grew larger than optimal productivity predicted. Larger colony size was shown to increase colony homeostasis, the predictability of future productivity and, in turn, the survival probability of the colony. I also showed that species that deploy an individual foraging mode may circumvent the density-dependent decline in foraging success by splitting the colony across several nest sites.
Abstract:
Classic group recommender systems focus on providing suggestions for a fixed group of people. Our work tries to give an inside look at designing a new recommender system that is capable of making suggestions for a sequence of activities, dividing people into subgroups in order to boost overall group satisfaction. However, this idea increases problem complexity along more dimensions and poses a great challenge to the algorithm’s performance. To understand the effectiveness, given the enhanced complexity and the need for precise problem solving, we implemented an experimental system using data collected from a variety of web services concerning the city of Paris. The system recommends activities to a group of users through two different approaches: Local Search and Constraint Programming. The general results show that the number of subgroups can significantly influence the Constraint Programming approach’s computational time and efficacy. Generally, Local Search can find results much more quickly than Constraint Programming. Over a lengthy period of time, Local Search performs better than Constraint Programming, with similar final results.
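As a rough illustration of the Local Search approach, here is a hill-climbing sketch that reassigns one user at a time between subgroups, each subgroup fixed to one activity. The problem encoding (satisfaction function, fixed activities per subgroup) is an assumption for illustration; the abstract does not describe the actual system's model:

```python
import random

def local_search(users, activities, satisfaction, n_groups, iters=1000, seed=0):
    """Hill-climbing sketch: each subgroup is fixed to one activity, and
    single users are moved between subgroups whenever the move improves
    total satisfaction. satisfaction(user, activity) -> float."""
    rng = random.Random(seed)
    acts = [rng.choice(activities) for _ in range(n_groups)]  # activity per subgroup
    assign = {u: rng.randrange(n_groups) for u in users}      # user -> subgroup

    def total(a):
        return sum(satisfaction(u, acts[a[u]]) for u in users)

    best = total(assign)
    for _ in range(iters):
        u = rng.choice(users)
        g = rng.randrange(n_groups)
        old = assign[u]
        if g == old:
            continue
        assign[u] = g
        score = total(assign)
        if score > best:
            best = score       # keep the improving move
        else:
            assign[u] = old    # revert
    return assign, best
```

A single-user move re-evaluates the objective in O(|users|) here; a Constraint Programming formulation would instead prove optimality, which is where the computational-time trade-off noted in the abstract comes from.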
Abstract:
The thesis addresses the Mathematical Finance problem of strategic asset allocation, which consists in the process of optimally dividing resources among the different financial assets available on a market. Building on Harry Markowitz's theory, a portfolio is constructed through rigorous mathematical steps that meets efficiency requirements in terms of the risk-return trade-off. Application examples developed with the Mathematica software are also provided.
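Markowitz-style efficiency can be illustrated with the closed-form global minimum-variance portfolio, w = Σ⁻¹1 / (1ᵀΣ⁻¹1), the leftmost point of the efficient frontier. This is a sketch in Python rather than the thesis' Mathematica code:

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance portfolio: w = inv(Σ) 1 / (1ᵀ inv(Σ) 1).
    Weights sum to 1; short positions are allowed."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)  # Σ⁻¹ 1 without forming the inverse
    return w / w.sum()

def portfolio_risk_return(w, mu, cov):
    """Expected return and volatility of a weight vector w."""
    return float(w @ mu), float(np.sqrt(w @ cov @ w))
```

For two uncorrelated assets with variances 1 and 4, the formula puts four times as much weight on the low-variance asset (w = [0.8, 0.2]), matching the intuition that risk, not return, drives this corner of the frontier.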
Abstract:
High Performance Computing is a technology used by computational clusters to create processing systems able to deliver much more powerful services than traditional computers. As a consequence, HPC technology has become a decisive factor in industrial competition and in research. HPC systems keep growing in terms of nodes and cores, and forecasts indicate that the number of nodes will soon reach one million. This kind of architecture also entails very high costs in terms of resource consumption, which become unsustainable for the industrial market. A centralized scheduler is not able to manage such a large number of resources while maintaining a reasonable response time. This thesis presents a distributed scheduling model based on constraint programming, which models the scheduling problem through a set of temporal and resource constraints that must be satisfied. The scheduler tries to optimize resource performance and to approach a desired consumption profile, considered optimal. Several different models are analyzed, and each of them is tested in various environments.