955 results for Dynamic Marginal Cost
Abstract:
This paper is concerned with the dynamic analysis of flexible, non-linear multi-body beam systems. The focus is on problems where the strains within each elastic body (beam) remain small. Based on geometrically non-linear elasticity theory, the non-linear 3-D beam problem splits into either a linear or non-linear 2-D analysis of the beam cross-section and a non-linear 1-D analysis along the beam reference line. The splitting of the three-dimensional beam problem into two- and one-dimensional parts, called dimensional reduction, results in tremendous savings of computational effort relative to the cost of three-dimensional finite element analysis, the only alternative for realistic beams. The analysis of beam-like structures made of laminated composite materials requires a much more involved methodology. Hence, the analysis procedure based on the Variational Asymptotic Method (VAM), a tool to carry out the dimensional reduction, is used here. The analysis methodology can be viewed as a 3-step procedure. First, the sectional properties of beams made of composite materials are determined either by an asymptotic procedure that involves a 2-D non-linear finite element analysis of the beam cross-section to capture the trapeze effect, or by a strip-like beam analysis starting from Classical Laminated Shell Theory (CLST). Second, the dynamic response of non-linear, flexible multi-body beam systems is simulated within the framework of energy-preserving and energy-decaying time integration schemes that provide unconditional stability for non-linear beam systems. Finally, local 3-D responses in the beams are recovered, based on the 1-D responses predicted in the second step. Numerical examples are presented and results from this analysis are compared with those available in the literature.
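The full VAM-based procedure relies on a 2-D finite element analysis of the cross-section; as a much simplified, hypothetical illustration of the dimensional-reduction idea (sectional constants computed once, then reused in a 1-D analysis along the reference line), consider the sketch below. The layer moduli, geometry and loading are invented for illustration and are not taken from the paper.

```python
# Illustrative only: sectional stiffnesses of a layered rectangular cross-section,
# followed by a 1-D use of those constants (cantilever tip deflection).
def sectional_stiffness(layers, width):
    """layers: list of (E_modulus [Pa], y_bottom [m], y_top [m]) through the depth."""
    EA = sum(E * width * (yt - yb) for E, yb, yt in layers)
    # Modulus-weighted first moment locates the neutral axis of the section.
    ESy = sum(E * width * (yt**2 - yb**2) / 2.0 for E, yb, yt in layers)
    y_na = ESy / EA
    # Bending stiffness about the neutral axis.
    EI = sum(E * width * ((yt - y_na)**3 - (yb - y_na)**3) / 3.0 for E, yb, yt in layers)
    return EA, EI

# Two-layer section, 30 mm wide, 20 mm deep (made-up material values).
layers = [(70e9, -0.010, 0.000),    # lower, softer layer
          (140e9, 0.000, 0.010)]    # upper, stiffer layer
EA, EI = sectional_stiffness(layers, width=0.030)

# 1-D analysis along the reference line: tip deflection of a cantilever
# of length L under a tip load P, using only the reduced constant EI.
L, P = 1.0, 100.0
tip_deflection = P * L**3 / (3.0 * EI)
print(f"EA = {EA:.3e} N, EI = {EI:.3e} N·m², tip deflection = {tip_deflection:.3e} m")
```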
Abstract:
Earlier studies have exploited statistical multiplexing of flows in the core of the Internet to reduce the buffer requirement in routers. Reducing the memory requirement of routers is important as it enables improved performance and, at the same time, reduced cost. In this paper, we observe that the links in the core of the Internet are typically over-provisioned and that this can be exploited to reduce the buffering requirement in routers. The small on-chip memory of a network processor (NP) can be effectively used to buffer packets during most regimes of traffic. We propose a dynamic buffering strategy which buffers packets in the receive and transmit buffers of an NP when the memory requirement is low. When the buffer requirement increases due to bursts in the traffic, memory is allocated to packets in the off-chip DRAM. This scheme effectively mitigates the DRAM access bottleneck, as only a part of the traffic is stored in the DRAM. We build a Petri net model and evaluate the proposed scheme with core-Internet-like traffic. At 77% link utilization, the dynamic buffering scheme has a drop rate of just 0.65%, whereas traditional DRAM buffering has a 4.64% packet drop rate. Even at a high link utilization of 90%, which rarely happens in the core, our dynamic buffering results in a packet drop rate of only 2.17%, while supporting a throughput of 7.39 Gbps. We study the proposed scheme under different conditions to understand the provisioning of processing threads and to determine the queue length at which packets must be buffered in the DRAM. We show that the proposed dynamic buffering strategy drastically reduces the buffering requirement while still maintaining low packet drop rates.
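A minimal, hypothetical sketch of such a threshold-based dynamic buffering policy is given below; the buffer sizes, the spill threshold and the class interface are invented for illustration and are not the paper's NP implementation or its Petri net model.

```python
# Packets stay in the small on-chip buffer while its occupancy is below a
# threshold; bursts spill over into the larger (but slower) off-chip DRAM.
from collections import deque

class DynamicBuffer:
    def __init__(self, spill_threshold=48, dram_capacity=4096):
        self.onchip = deque()
        self.dram = deque()
        self.spill_threshold = spill_threshold
        self.dram_capacity = dram_capacity
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.onchip) < self.spill_threshold:
            self.onchip.append(packet)          # low occupancy: keep packet on chip
        elif len(self.dram) < self.dram_capacity:
            self.dram.append(packet)            # burst regime: fall back to DRAM
        else:
            self.dropped += 1                   # both buffers full: packet drop

    def dequeue(self):
        # Transmit from on-chip first; refill it from DRAM when there is room.
        pkt = self.onchip.popleft() if self.onchip else (self.dram.popleft() if self.dram else None)
        while self.dram and len(self.onchip) < self.spill_threshold:
            self.onchip.append(self.dram.popleft())
        return pkt
```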
Abstract:
The questions that one should answer in engineering computations - deterministic, probabilistic/randomized, as well as heuristic - are (i) how good the computed results/outputs are and (ii) what the cost is, in terms of the amount of computation and the amount of storage used in obtaining the outputs. The absolutely error-free quantities as well as the completely errorless computations done in a natural process can never be captured by any means that we have at our disposal. While the computations, including the input real quantities, in nature/natural processes are exact, all the computations that we do using a digital computer, or that are carried out in an embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error, as a matter of hypothesis and not as a matter of assumption, is not less than 0.005 per cent. Here by error we mean relative error bounds. The fact that the exact error is never known under any circumstances and in any context implies that the term error is nothing but error-bounds. Further, in engineering computations, it is the relative error or, equivalently, the relative error-bounds (and not the absolute error) which is supremely important in providing us with information regarding the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created from nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency due to human error, due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable or more easily solvable in practice. Thus if we discover any inconsistency, or possibly any near-inconsistency, in a mathematical model, it is certainly due to any or all of the three foregoing factors. We do, however, go ahead and solve such inconsistent/near-inconsistent problems and do get results that could be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It highlights the quality of the results/outputs by specifying relative error-bounds along with the associated confidence level, and the cost, viz., the amounts of computation and storage, through complexity. It points out the limitations of error-free computations (wherever possible, i.e., where the number of arithmetic operations is finite and is known a priori) as well as of the use of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
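As a small illustration of relative error-bounds (not part of the talk itself), the sketch below propagates the hypothesised 0.005 per cent measurement bound through a product of inputs, where, to first order, relative error-bounds simply add.

```python
# Illustrative only: first-order propagation of relative error bounds
# through a product of independently measured quantities.
def product_rel_error_bound(rel_bounds):
    """First-order relative error bound for a product of measured quantities."""
    return sum(rel_bounds)

measured_rel_bound = 0.005 / 100                                  # 0.005 % as a fraction
bound = product_rel_error_bound([measured_rel_bound] * 3)         # e.g. x * y * z
print(f"relative error bound of the product ≈ {bound * 100:.4f} %")  # ≈ 0.0150 %
```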
Abstract:
Energy harvesting sensor nodes are gaining popularity due to their ability to improve network lifetime and are becoming a preferred choice for supporting green communication. In this paper, we focus on communicating reliably over an additive white Gaussian noise channel using such an energy harvesting sensor node. An important part of this paper involves appropriate modeling of energy harvesting, as done via various practical architectures. Our main result is the characterization of the Shannon capacity of the communication system. The key technical challenge involves dealing with the dynamic (and stochastic) nature of the (quadratic) cost of the input to the channel. As a corollary, we find close connections between the capacity-achieving energy management policies and the queueing-theoretic throughput-optimal policies.
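As a hedged numerical sketch (not the paper's derivation), a commonly cited form of such a capacity result, for an unlimited energy buffer, is the ordinary AWGN capacity evaluated at the mean harvested power; the harvesting distribution and noise level below are illustrative only.

```python
# Illustrative only: C = 0.5 * log2(1 + E[Y] / sigma^2), i.e. the AWGN capacity
# at the *mean* harvest rate, which suitable energy management policies can approach.
import numpy as np

rng = np.random.default_rng(0)
harvest = rng.exponential(scale=2.0, size=100_000)   # Y_t: harvested energy per slot (made up)
sigma2 = 1.0                                         # noise variance (made up)

capacity_bits = 0.5 * np.log2(1.0 + harvest.mean() / sigma2)
print(f"mean harvest ≈ {harvest.mean():.3f}, capacity ≈ {capacity_bits:.3f} bits/channel use")
```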
Uncooled DBR laser directly modulated at 3.125 Gb/s as athermal transmitter for low-cost WDM systems
Abstract:
An uncooled three-section tunable distributed Bragg reflector laser is demonstrated as an athermal transmitter for low-cost uncooled wavelength-division-multiplexing (WDM) systems with tight channel spacing. A ±0.02-nm thermal wavelength drift is achieved under continuous-wave operation up to 70 °C. A dynamic side-mode suppression ratio of greater than 35 dB is consistently obtained under 3.125-Gb/s direct modulation over a 20 °C-70 °C temperature range, with wavelength variation as low as ±0.2 nm. This indicates that more than an order of magnitude reduction in coarse WDM channel spacing is possible using this source. © 2005 IEEE.
Abstract:
Statistical Process Control (SPC) techniques are well established across a wide range of industries. In particular, plotting key steady-state variables with their statistical limits against time (Shewhart charting) is a common approach to monitoring the normality of production. This paper is concerned with extending Shewhart charting techniques to the quality monitoring of variables driven by uncertain dynamic processes, which has particular application in the process industries, where it is desirable to monitor process variables on-line as well as the final product. The robust approach to dynamic SPC is based on previous work on guaranteed cost filtering for linear systems and is intended both to provide a basis for wider application of SPC monitoring and to motivate unstructured fault detection.
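For orientation, a minimal Shewhart-style monitoring sketch follows; it is the classical static chart (centre line and ±3-sigma limits from an in-control reference sample), not the paper's robust guaranteed-cost filtering approach, and all data are synthetic.

```python
# Illustrative only: estimate chart limits from in-control data, then flag
# later observations that fall outside the limits.
import numpy as np

def shewhart_limits(reference):
    mu, sigma = np.mean(reference), np.std(reference, ddof=1)
    return mu, mu - 3.0 * sigma, mu + 3.0 * sigma   # centre line, LCL, UCL

def out_of_control(observations, limits):
    _, lcl, ucl = limits
    return [i for i, x in enumerate(observations) if x < lcl or x > ucl]

rng = np.random.default_rng(1)
reference = rng.normal(10.0, 0.5, 200)                   # in-control historical data
new_data = np.append(rng.normal(10.0, 0.5, 50), 13.0)    # one injected disturbance

limits = shewhart_limits(reference)
print("alarms at indices:", out_of_control(new_data, limits))
```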
Abstract:
In noncooperative cost sharing games, individually strategic agents choose resources based on how the welfare (cost or revenue) generated at each resource (which depends on the set of agents that choose the resource) is distributed. The focus is on finding distribution rules that lead to stable allocations, formalized by the concept of Nash equilibrium; examples include the Shapley value (budget-balanced) and marginal contribution (not budget-balanced) rules.
Recent work that seeks to characterize the space of all such rules shows that the only budget-balanced distribution rules that guarantee equilibrium existence in all welfare sharing games are generalized weighted Shapley values (GWSVs), by exhibiting a specific 'worst-case' welfare function which requires that GWSV rules be used. Our work provides an exact characterization of the space of distribution rules (not necessarily budget-balanced) that guarantee equilibrium existence for any specific local welfare functions, for a general class of scalable and separable games with well-known applications, e.g., facility location, routing, network formation, and coverage games.
We show that all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to GWSV rules on some 'ground' welfare functions. Therefore, it is neither the existence of some worst-case welfare function, nor the restriction of budget-balance, which limits the design to GWSVs. Also, in order to guarantee equilibrium existence, it is necessary to work within the class of potential games, since GWSVs result in (weighted) potential games.
We also provide an alternative characterization—all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to generalized weighted marginal contribution (GWMC) rules on some 'ground' welfare functions. This result is due to a deeper fundamental connection between Shapley values and marginal contributions that our proofs expose—they are equivalent given a transformation connecting their ground welfare functions. (This connection leads to novel closed-form expressions for the GWSV potential function.) Since GWMCs are more tractable than GWSVs, a designer can tradeoff budget-balance with computational tractability in deciding which rule to implement.
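For orientation, the sketch below computes the two canonical rules named above, the (unweighted) Shapley value and the marginal contribution rule, for a single resource with a made-up local welfare function; it is illustrative only and does not implement the GWSV/GWMC characterizations.

```python
# Illustrative only: two distribution rules for a local welfare function W(S)
# defined over the set S of agents that choose the resource.
from itertools import permutations

def shapley_value(agents, W):
    """Average marginal contribution over all orderings (budget-balanced)."""
    shares = {a: 0.0 for a in agents}
    perms = list(permutations(agents))
    for order in perms:
        coalition = set()
        for a in order:
            shares[a] += W(coalition | {a}) - W(coalition)
            coalition.add(a)
    return {a: v / len(perms) for a, v in shares.items()}

def marginal_contribution(agents, W):
    """Each agent receives its marginal contribution to the full set (not budget-balanced)."""
    full = set(agents)
    return {a: W(full) - W(full - {a}) for a in agents}

agents = ['a', 'b', 'c']
W = lambda S: len(S) ** 0.5          # made-up submodular welfare function, W(empty set) = 0
print("Shapley shares:            ", shapley_value(agents, W))
print("Marginal contribution shares:", marginal_contribution(agents, W))
```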
Abstract:
Pattern recognition is an area of computational intelligence that supports problem solving using computational tools. Examples of such problems include face recognition, fingerprint identification and signature authentication. Automatic signature authentication is relevant because it is tied to the recognition of individuals and their credentials in complex systems, and to financial matters. This work presents a study of the parameters of Dynamic Time Warping, an algorithm used to align two signatures and measure the similarity between them. By varying the algorithm's main parameters over a wide range of values, the average classification error rates were obtained and evaluated. Based on these first evaluations, the need was identified to compute one of these parameters, the gap cost, dynamically, in order to tune it for use in a practical application. A proposal for performing this calculation is presented and evaluated. An alternative representation of the signature attributes is also proposed and evaluated, in which the curvature at each point captured during the acquisition process is taken into account by using normal vectors as the representation. The evaluations carried out throughout the various stages of the study used the Equal Error Rate (EER) as the quality indicator, and the proposed techniques were compared with established techniques, achieving an average EER of 3.47%.
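A hypothetical sketch of Dynamic Time Warping with an additive gap cost, in the spirit of the parameter studied above, is given below; the step penalties, the local distance and the stand-in signature features are invented for illustration and are not the work's implementation.

```python
# Illustrative only: DTW where vertical/horizontal steps (one signature advancing
# while the other stalls) pay an extra penalty `gap_cost` on top of the local distance.
import numpy as np

def dtw_with_gap(x, y, gap_cost=0.1):
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])                  # local dissimilarity
            D[i, j] = min(D[i - 1, j - 1] + d,            # match step
                          D[i - 1, j] + d + gap_cost,     # gap in y
                          D[i, j - 1] + d + gap_cost)     # gap in x
    return D[n, m]

genuine   = np.sin(np.linspace(0, 4, 120))                # stand-ins for signature features
candidate = np.sin(np.linspace(0, 4, 100) + 0.05)
print(f"DTW distance = {dtw_with_gap(genuine, candidate):.3f}")
```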
Abstract:
Measurement of acceleration in dynamic tests is carried out routinely, and in most cases piezoelectric accelerometers are used at present. However, a new class of instruments based on MEMS technology has become available and is gaining use in many applications due to their small size, low mass and low cost. This paper describes a centrifuge lateral spreading experiment in which MEMS and piezoelectric accelerometers were placed at similar depths. Good agreement was obtained when the instruments were located in dense sands, but significant differences were observed in loose, liquefiable soils. It was found that the performance of the piezoelectric accelerometer is poor at low frequency, and that the relative phase difference between the piezoelectric and MEMS accelerometers varies significantly at low frequency. © 2010 Taylor & Francis Group, London.
Abstract:
Board-level optical links are an attractive alternative to their electrical counterparts as they provide higher bandwidth and lower power consumption at high data rates. However, on-board optical technology has to be cost-effective to be commercially deployed. This study presents a chip-to-chip optical interconnect formed on an optoelectronic printed circuit board that uses a simple optical coupling scheme, cost-effective materials and is compatible with well-established manufacturing processes common to the electronics industry. Details of the link architecture, modelling studies of the link's frequency response, characterisation of optical coupling efficiencies and dynamic performance studies of this proof-of-concept chip-to-chip optical interconnect are reported. The fully assembled link exhibits a -3 dBe bandwidth of 9 GHz and -3 dBo tolerances to transverse component misalignments of ±25 and ±37 μm at the input and output waveguide interfaces, respectively. The link has a total insertion loss of 6 dBo and achieves error-free transmission at a 10 Gb/s data rate with a power margin of 11.6 dBo for a bit-error-rate of 10⁻¹². The proposed architecture demonstrates an integration approach for high-speed board-level chip-to-chip optical links that emphasises component simplicity and manufacturability crucial to the migration of such technology into real-world commercial systems. © 2012 The Institution of Engineering and Technology.
Abstract:
Increasing demand for energy and the continuing increase in the environmental as well as financial cost of fossil fuel use drive the need to utilize fuels from sustainable sources for power generation. Development of fuel-flexible combustion systems is vital in enabling the use of sustainable fuels. It is also important that these sustainable combustion systems meet strict governmental emission legislation. Biogas is considered one of the viable sustainable fuels that can be used to power modern gas turbines. However, the change in chemical, thermal and transport properties, as well as the change in Wobbe index due to the variation of the fuel constituents, can have a significant effect on the performance of the combustor. It is known that the fuel properties have a strong influence on the dynamic flame response; however, there is a lack of detailed information regarding the effect of fuel composition on the sensitivity of flames subjected to flow perturbations. In this study, we describe an experimental effort investigating the response of premixed biogas-air turbulent flames with varying proportions of CH4 and CO2 to velocity perturbations. The flame was stabilized using a centrally placed conical bluff body. Acoustic perturbations were imposed on the flow using loudspeakers. The flame dynamics and the local heat release rate of these acoustically excited biogas flames were studied using simultaneous measurements of OH and H2CO planar laser-induced fluorescence. OH* chemiluminescence along with acoustic pressure measurements were also recorded to estimate the total flame heat release modulation and the velocity fluctuations. The measurements were carried out by keeping the theoretical laminar flame speed constant while varying the bulk velocity and the fuel composition. The results indicate that the flame sensitivity to perturbations increased with increased dilution of CH4 by CO2 at low-amplitude forcing, while at high-amplitude forcing conditions the magnitude of the flame response was independent of dilution.
Abstract:
The study of random dynamic systems usually requires the definition of an ensemble of structures and the solution of the eigenproblem for each member of the ensemble. If the process is carried out using a conventional numerical approach, the computational cost becomes prohibitive for complex systems. In this work, an alternative numerical method is proposed. The results for the response statistics are compared with values obtained from a detailed stochastic FE analysis of plates. The proposed method seems to capture the statistical behaviour of the response with a reduced computational cost.
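For orientation, the sketch below shows the conventional approach referred to above: build an ensemble of structures with random properties, solve the eigenproblem for each member, and collect response statistics. The 3-DOF spring-mass chain and its randomness are made up, and the paper's alternative method is not reproduced here.

```python
# Illustrative only: Monte Carlo ensemble of a fixed-free spring-mass chain with
# random stiffnesses and unit masses; collect natural-frequency statistics.
import numpy as np

rng = np.random.default_rng(42)
n_samples, n_dof = 2000, 3
freqs = np.empty((n_samples, n_dof))

for s in range(n_samples):
    k = 1e4 * (1.0 + 0.1 * rng.standard_normal(n_dof))   # random spring stiffnesses [N/m]
    # Stiffness matrix of the chain: K[i,i] = k_i + k_{i+1}, last diagonal term k_n.
    K = (np.diag(np.append(k[:-1] + k[1:], k[-1]))
         - np.diag(k[1:], 1) - np.diag(k[1:], -1))
    eigvals = np.linalg.eigvalsh(K)                       # unit masses, so M^{-1}K = K
    freqs[s] = np.sqrt(eigvals) / (2.0 * np.pi)           # natural frequencies [Hz]

print("mean natural frequencies [Hz]:", freqs.mean(axis=0).round(2))
print("coefficients of variation:    ", (freqs.std(axis=0) / freqs.mean(axis=0)).round(3))
```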
Abstract:
A simple, low-cost, and efficient airlift photobioreactor for microalgal mass culture was designed and developed. The reactor was made of Plexiglas and composed of three major parts: outer tube, draft tube and air duct. The fluid-dynamic characteristics of the airlift reactor were studied. The system proved to be well suited to the mass cultivation of a marine microalga, Chlorella sp. In batch culture, a biomass volumetric output rate of 0.21 g l⁻¹ d⁻¹ was obtained at a superficial gas velocity of 4 mm s⁻¹ in the draft tube.
Abstract:
The Basic Income has been defined as a relatively small income that the public administration unconditionally provides to all its members as a citizenship right. Its principal objective is to guarantee the entire population an income sufficient to satisfy basic living needs, but it could have other positive effects, such as a more equal income redistribution or a reduction in tax fraud, as well as some drawbacks, such as labour-supply disincentives. In this essay we present the arguments for and against this policy and ultimately outline how it could be financed under the current tax and social benefit system in Navarra. The research also examines the main economic implications of the proposal in terms of static income redistribution and discusses other relevant dynamic uncertainties.