955 results for Dynamic Marginal Cost
Abstract:
The increasing diffusion of wireless-enabled portable devices is pushing toward the design of novel service scenarios, promoting temporary and opportunistic interactions in infrastructure-less environments. Mobile Ad Hoc Networks (MANET) are the general model of these highly dynamic networks, which can be specialized, depending on the application case, into more specific and refined models such as Vehicular Ad Hoc Networks and Wireless Sensor Networks. Two interesting deployment cases are of increasing relevance: resource diffusion among users equipped with portable devices, such as laptops, smart phones or PDAs, in crowded areas (termed dense MANET), and dissemination/indexing of monitoring information collected in Vehicular Sensor Networks. The extreme dynamicity of these scenarios calls for novel distributed protocols and services that facilitate application development. To this aim we have designed middleware solutions supporting these challenging tasks. REDMAN manages, retrieves, and disseminates replicas of software resources in dense MANET; it implements novel lightweight protocols to maintain a desired replication degree despite participant mobility, and to perform resource retrieval efficiently. REDMAN exploits the high-density assumption to achieve scalability and limited network overhead. Sensed-data gathering and distributed indexing in Vehicular Networks raise similar issues: we propose a specific middleware support, called MobEyes, that exploits node mobility to opportunistically diffuse data summaries among neighboring vehicles. MobEyes creates a low-cost opportunistic distributed index to query the distributed storage and to determine the location of needed information. Extensive validation and testing of REDMAN and MobEyes prove the effectiveness of our original solutions in limiting communication overhead while maintaining the required accuracy of replication degree and indexing completeness, and demonstrate the feasibility of the middleware approach.
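As a rough illustration of the replica-degree maintenance idea summarized above, the following minimal sketch (names, thresholds and the estimation step are hypothetical assumptions, not REDMAN's actual lightweight protocol) shows the kind of local rule a node in a dense MANET could apply each maintenance round: compare an estimate of the current replica count against the desired degree and create or drop a local copy accordingly.

```python
# Hypothetical replication-degree controller in the spirit of REDMAN's
# maintenance task; this is an illustrative sketch, not the actual protocol.

def maintenance_round(target_degree, estimated_replicas, holds_replica,
                      create_replica, drop_replica):
    """One periodic maintenance round executed locally by a node.

    target_degree      -- desired number of replicas in the dense region
    estimated_replicas -- replica count estimated from neighbour beacons
    holds_replica      -- True if this node currently stores a replica
    create_replica     -- callback that copies the resource onto this node
    drop_replica       -- callback that removes the local copy
    """
    if estimated_replicas < target_degree and not holds_replica:
        create_replica()   # region is under-replicated: add a local copy
    elif estimated_replicas > target_degree and holds_replica:
        drop_replica()     # region is over-replicated: release the local copy
    # otherwise the estimated degree matches the target and nothing is done
```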
Abstract:
In fluid dynamics research, pressure measurements are of great importance for defining the flow field acting on aerodynamic surfaces. The experimental approach is in fact fundamental to avoid the complexity of the mathematical models used to predict fluid phenomena. When in-situ sensors are used to monitor pressure over large domains with highly unsteady flows, the classical techniques run into several problems related to transducer cost, intrusiveness, time response and operating range. An interesting approach for satisfying these sensor requirements is to implement a sensor network capable of acquiring pressure data on an aerodynamic surface, using a wireless communication system able to collect the pressure data with the lowest possible level of environmental invasion. In this thesis a wireless sensor network for fluid-field pressure measurement has been designed, built and tested. To develop the system, a capacitive pressure sensor based on a polymeric membrane and readout circuitry based on a microcontroller have been designed, built and tested. The wireless communication has been implemented using the Zensys Z-WAVE platform, and network and data management have been implemented. Finally, the full embedded system with antenna has been created. As a proof of concept, monitoring the pressure on the top of the mainsail of a sailboat has been chosen as a working example.
Abstract:
In the last decade, the demand for structural health monitoring expertise has increased exponentially in the United States. The aging issues that most transportation structures are experiencing can put the economic system of a region, as well as of a country, in serious jeopardy. At the same time, the monitoring of structures is a central topic of discussion in Europe, where the preservation of historical buildings has been addressed over the last four centuries. More recently, various concerns have arisen about the security performance of civil structures after tragic events such as the 9/11 attacks or the 2011 Japan earthquake: engineers look for designs able to resist exceptional loadings due to earthquakes, hurricanes and terrorist attacks. After events of this kind, the assessment of the remaining life of the structure is at least as important as the initial performance design. Consequently, it is very clear that the introduction of reliable and accessible damage assessment techniques is crucial for the localization of issues and for a correct and immediate rehabilitation. System Identification is a branch of the more general Control Theory. In Civil Engineering, this field addresses the techniques needed to estimate mechanical characteristics, such as stiffness or mass, starting from the signals captured by sensors. The objective of Dynamic Structural Identification (DSI) is to determine, starting from experimental measurements, the fundamental modal parameters of a generic structure in order to characterize its dynamic behavior via a mathematical model. Knowledge of these parameters is helpful in the Model Updating procedure, which allows theoretical models to be corrected through experimental validation. The main aim of this technique is to minimize the differences between the theoretical model results and in-situ measurements of dynamic data. The updated model therefore becomes a very effective control tool when it comes to the rehabilitation of structures or damage assessment. Instrumenting a whole structure is sometimes unfeasible, either because of the high cost involved or because it is not physically possible to reach every point of the structure. Numerous scholars have tried to address this problem, and two main approaches are generally used. In the first, given the limited number of sensors, time histories are gathered at some locations, the instruments are then moved to other locations, and the procedure is repeated. Otherwise, if the number of sensors is sufficient and the structure does not present a complicated geometry, it is usually enough to detect only the first principal modes. These two problems are well presented in the works of Balsamo [1], for the application to a simple system, and Jun [2], for the analysis of a system with a limited number of sensors. Once the system identification has been carried out, it is possible to assess the actual system characteristics. A frequent practice is to create an updated FEM model and assess whether or not the structure fulfills the required functions. The objective of this work is to present a general methodology for analyzing large structures using a limited amount of instrumentation while, at the same time, obtaining as much information as possible about the identified structure, without resorting to methodologies that are difficult to interpret. A general framework for the state-space identification procedure via the OKID/ERA algorithm is developed and implemented in Matlab.
Then, some simple examples are proposed to highlight the principal characteristics and advantages of this methodology. A new algebraic manipulation for the productive use of substructuring results is developed and implemented.
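As a rough illustration of the ERA step mentioned above, the following minimal NumPy sketch (the thesis implementation is in Matlab; this Python version and its parameter names are illustrative only) realizes a discrete-time state-space model from impulse-response Markov parameters, assuming those parameters have already been recovered, e.g. by OKID, for a single-input/single-output system.

```python
import numpy as np

def era(markov, n_states, rows=20, cols=20):
    """Eigensystem Realization Algorithm (minimal SISO sketch).

    markov     -- 1-D array of Markov parameters Y(1), Y(2), ...
    n_states   -- model order retained after the SVD truncation
    rows, cols -- dimensions of the block Hankel matrices
    Returns the discrete-time state-space matrices (A, B, C).
    """
    # Block Hankel matrices built from the Markov parameters
    H0 = np.array([[markov[i + j] for j in range(cols)] for i in range(rows)])
    H1 = np.array([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])

    # Truncated SVD of H0 factors it into observability/controllability parts
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :n_states], s[:n_states], Vt[:n_states, :]
    S_half = np.diag(np.sqrt(s))
    S_half_inv = np.diag(1.0 / np.sqrt(s))

    A = S_half_inv @ U.T @ H1 @ Vt.T @ S_half_inv   # state matrix
    B = (S_half @ Vt)[:, :1]                        # first input column
    C = (U @ S_half)[:1, :]                         # first output row
    return A, B, C
```

The modal parameters then follow from the eigenvalues of A (natural frequencies and damping ratios) and from C applied to the eigenvectors (mode shapes at the sensor locations).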
Abstract:
Mountainous areas are prone to natural hazards such as rockfalls. Among the many countermeasures, rockfall protection barriers represent an effective solution to mitigate the risk. They are metallic structures designed to intercept rocks falling from unstable slopes, thus dissipating the energy deriving from the impact. This study aims at providing a better understanding of the response of several rockfall barrier types, through the development of rather sophisticated three-dimensional numerical finite element models which take into account the highly dynamic and non-linear conditions of such events. The models are built considering the actual geometrical and mechanical properties of real systems. Particular attention is given to the connecting details between the structural components and to their interactions. The importance of the work lies in being able to support a wide experimental activity with appropriate numerical modelling. The data of several full-scale tests carried out on barrier prototypes, as well as on their structural components, are combined with the results of numerical simulations. Although the models are designed with relatively simple solutions in order to keep the computational cost of the simulations low, they are able to reproduce the test results with great accuracy, thus validating the reliability of the numerical strategy proposed for the design of these structures. The developed models have proved to be readily applicable to predicting the barrier performance under different possible scenarios, obtained by varying the initial configuration of the structures and/or the impact conditions. Furthermore, the numerical models make it possible to optimize the design of these structures and to evaluate the benefit of possible solutions. Finally, it is shown that they can also be used as a valuable supporting tool for operators within a rockfall risk assessment procedure, to gain crucial understanding of the performance of existing barriers in working conditions.
Abstract:
In linear mixed models, model selection frequently includes the selection of random effects. Two versions of the Akaike information criterion (AIC) have been used, based either on the marginal or on the conditional distribution. We show that the marginal AIC is no longer an asymptotically unbiased estimator of the Akaike information, and in fact favours smaller models without random effects. For the conditional AIC, we show that ignoring estimation uncertainty in the random effects covariance matrix, as is common practice, induces a bias that leads to the selection of any random effect not predicted to be exactly zero. We derive an analytic representation of a corrected version of the conditional AIC, which avoids the high computational cost and imprecision of available numerical approximations. An implementation in an R package is provided. All theoretical results are illustrated in simulation studies, and their impact in practice is investigated in an analysis of childhood malnutrition in Zambia.
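As a rough illustration of the two criteria being contrasted, the following NumPy sketch (illustrative only; it treats the variance parameters as known or plugged in and does not implement the corrected conditional AIC derived in the paper) computes the marginal AIC and the uncorrected conditional AIC for a simple linear mixed model y = Xb + Zu + e with u ~ N(0, tau2 I) and e ~ N(0, sigma2 I).

```python
import numpy as np

def marginal_and_conditional_aic(y, X, Z, sigma2, tau2):
    """Marginal AIC and uncorrected conditional AIC (toy sketch)."""
    n, p = X.shape
    q = Z.shape[1]

    # Marginal likelihood: y ~ N(X beta, V) with V = tau2 Z Z' + sigma2 I
    V = tau2 * Z @ Z.T + sigma2 * np.eye(n)
    Vinv = np.linalg.inv(V)
    beta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
    r = y - X @ beta
    marg_ll = -0.5 * (n * np.log(2 * np.pi) + np.linalg.slogdet(V)[1]
                      + r @ Vinv @ r)
    m_aic = -2 * marg_ll + 2 * (p + 2)      # p fixed effects + 2 variances

    # Conditional fit via Henderson's mixed model equations
    lam = sigma2 / tau2
    C = np.block([[X.T @ X, X.T @ Z],
                  [Z.T @ X, Z.T @ Z + lam * np.eye(q)]])
    sol = np.linalg.solve(C, np.concatenate([X.T @ y, Z.T @ y]))
    yhat = X @ sol[:p] + Z @ sol[p:]
    # Effective degrees of freedom = trace of the hat matrix mapping y to yhat
    rho = np.trace(np.linalg.solve(C, np.block([[X.T @ X, X.T @ Z],
                                                [Z.T @ X, Z.T @ Z]])))
    cond_ll = -0.5 * (n * np.log(2 * np.pi * sigma2)
                      + np.sum((y - yhat) ** 2) / sigma2)
    c_aic = -2 * cond_ll + 2 * (rho + 1)    # known-variance conditional AIC form
    return m_aic, c_aic
```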
Abstract:
OBJECTIVE: To investigate the cost effectiveness of screening for Chlamydia trachomatis compared with a policy of no organised screening in the United Kingdom. DESIGN: Economic evaluation using a transmission dynamic mathematical model. SETTING: Central and southwest England. PARTICIPANTS: Hypothetical population of 50,000 men and women, in which all those aged 16-24 years were invited to be screened each year. MAIN OUTCOME MEASURES: Cost effectiveness based on major outcomes averted, defined as pelvic inflammatory disease, ectopic pregnancy, infertility, or neonatal complications. RESULTS: The incremental cost per major outcome averted for a programme of screening women only (assuming eight years of screening) was 22,300 pounds (33,000 euros; $45,000) compared with no organised screening. For a programme screening both men and women, the incremental cost effectiveness ratio was approximately 28,900 pounds. Pelvic inflammatory disease leading to hospital admission was the most frequently averted major outcome. The model was highly sensitive to the incidence of major outcomes and to the uptake of screening. When both were increased, the cost effectiveness ratio fell to 6200 pounds per major outcome averted for screening women only. CONCLUSIONS: Proactive register based screening for chlamydia is not cost effective if the uptake of screening and incidence of complications are based on contemporary empirical studies, which show lower rates than commonly assumed. These data are relevant to discussions about the cost effectiveness of the opportunistic model of chlamydia screening being introduced in England.
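The figures reported above are incremental cost-effectiveness ratios; as a minimal illustration of the arithmetic only (the cost and outcome decomposition below is made up, not taken from the study), the ratio is the extra cost of the screening programme divided by the extra major outcomes it averts.

```python
def icer(cost_new, cost_base, outcomes_averted_new, outcomes_averted_base):
    """Incremental cost per major outcome averted (illustrative only)."""
    return (cost_new - cost_base) / (outcomes_averted_new - outcomes_averted_base)

# Hypothetical decomposition: screening adds 1,115,000 pounds of net cost and
# averts 50 additional major outcomes versus no organised screening, which
# would give the reported 22,300 pounds per major outcome averted.
print(icer(1_115_000, 0, 50, 0))  # -> 22300.0
```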
Abstract:
The past decade has seen the energy consumption of servers and Internet Data Centers (IDCs) skyrocket. A recent survey estimated that worldwide spending on servers and cooling has risen to above $30 billion and is likely to exceed spending on new server hardware. The rapid rise in energy consumption has posed a serious threat to both energy resources and the environment, which makes green computing not only worthwhile but also necessary. This dissertation tackles the challenges of reducing both the energy consumption of server systems and the cost for Online Service Providers (OSPs). Two distinct subsystems account for most of an IDC's power: the server system, which accounts for 56% of the total power consumption of an IDC, and the cooling and humidification systems, which account for about 30% of the total power consumption. The server system dominates the energy consumption of an IDC, and its power draw can vary drastically with data center utilization. In this dissertation, we propose three models to achieve energy efficiency in web server clusters: an energy-proportional model, an optimal server allocation and frequency adjustment strategy, and a constrained Markov model. The proposed models combine Dynamic Voltage/Frequency Scaling (DV/FS) and Vary-On/Vary-Off (VOVF) mechanisms that work together for greater energy savings, and corresponding strategies are proposed to deal with the transition overheads. We further extend server energy management to the IDC's cost management, helping OSPs to conserve energy, manage their own electricity costs, and lower carbon emissions. We have developed an optimal energy-aware load dispatching strategy that periodically maps more requests to locations with lower electricity prices. A carbon emission limit is imposed, and the volatility of the carbon offset market is also considered. Two energy-efficient strategies are applied to the server system and the cooling system, respectively. With the rapid development of cloud services, we also carry out research to reduce server energy in cloud computing environments. In this work, we propose a new live virtual machine (VM) placement scheme that can effectively map VMs to Physical Machines (PMs) with substantial energy savings in a heterogeneous server cluster. A VM/PM mapping probability matrix is constructed, in which each VM request is assigned a probability of running on each PM. The VM/PM mapping probability matrix takes into account resource limitations, VM operation overheads, and server reliability as well as energy efficiency. The evolution of Internet Data Centers and the increasing demands of web services raise great challenges for improving the energy efficiency of IDCs. We also identify several potential areas for future research in each chapter.
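As a rough illustration of how the DV/FS and VOVF mechanisms mentioned above can act together, the following minimal sketch (identical servers, capacity proportional to CPU frequency, hypothetical parameter values; not the dissertation's actual models) switches on just enough servers to carry the current request rate and then selects the lowest frequency step that still provides the required capacity.

```python
import math

FREQ_STEPS_GHZ = [1.2, 1.6, 2.0, 2.4]   # assumed available DVFS settings
REQS_PER_GHZ = 500.0                    # assumed requests/s served per GHz

def plan_cluster(request_rate, max_servers):
    """Return (active_servers, frequency_ghz) for the current request rate."""
    max_cap_per_server = REQS_PER_GHZ * FREQ_STEPS_GHZ[-1]
    # VOVF: wake the minimum number of servers able to carry the load
    active = min(max_servers,
                 max(1, math.ceil(request_rate / max_cap_per_server)))
    # DVFS: pick the lowest frequency whose per-server capacity suffices
    per_server_rate = request_rate / active
    for f in FREQ_STEPS_GHZ:
        if REQS_PER_GHZ * f >= per_server_rate:
            return active, f
    return active, FREQ_STEPS_GHZ[-1]   # saturated: run at the top frequency

print(plan_cluster(request_rate=2500.0, max_servers=10))  # -> (3, 2.0)
```

A real controller would additionally account for the VOVF and DVFS transition overheads discussed in the dissertation before switching servers on or off.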
Abstract:
We consider an economic order quantity model in which the supplier offers an all-units quantity discount and customer demand is price sensitive. We compare a decentralized decision framework, where the selling price and the replenishment policy are determined independently, with simultaneous decision making. Constant and dynamic pricing are distinguished. We derive structural properties, develop algorithms that determine the optimal pricing and replenishment policy, and show how quantity discounts influence not only the purchasing strategy but also the pricing policy. A sensitivity analysis indicates the impact of the ratio of fixed to holding costs, the discount policy, and the customers' price sensitivity on the optimal decisions.
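For the replenishment side, the classical all-units discount procedure under standard EOQ assumptions can be sketched as follows (hypothetical parameter names and data; the paper's joint pricing and replenishment algorithms go beyond this): compute the EOQ for each price level, keep only candidates feasible for that level, and choose the one with the lowest total annual cost.

```python
import math

def eoq_all_units(demand, order_cost, holding_rate, breaks):
    """Classical EOQ under an all-units quantity discount.

    demand       -- annual demand (units/year), assumed constant
    order_cost   -- fixed cost per order
    holding_rate -- holding cost per unit value per year (e.g. 0.2)
    breaks       -- list of (min_quantity, unit_price), ascending quantities
    Returns (best_order_quantity, best_total_annual_cost).
    """
    best = None
    for i, (lo, price) in enumerate(breaks):
        hi = breaks[i + 1][0] if i + 1 < len(breaks) else math.inf
        q = math.sqrt(2 * demand * order_cost / (holding_rate * price))
        if q >= hi:
            continue               # a cheaper price level dominates this one
        q = max(q, lo)             # push the EOQ up to the break if infeasible
        total = (demand * price                     # purchasing
                 + demand / q * order_cost          # ordering
                 + holding_rate * price * q / 2)    # holding
        if best is None or total < best[1]:
            best = (q, total)
    return best

# Hypothetical data: 1200 units/year, a $50 order cost, a 20% holding rate,
# $10.00 per unit below 200 units and $9.50 per unit at 200 units or more.
print(eoq_all_units(1200, 50, 0.2, [(0, 10.0), (200, 9.5)]))
# -> roughly (251.3, 11877.5): the discount level is worth taking here
```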
Abstract:
Continuous conveyors with a dynamic merge were developed with adaptable control equipment to differentiate these merges from competing Stop-and-Go merges. With a dynamic merge, the partial flows are manipulated by influencing speeds so that transport units need not stop at the merge. This leads to a more uniform flow of materials, which is qualitatively observable and verifiable in long-term measurements. Yet although this type of merge is visually mesmerizing, does it lead to advantages from the viewpoint of material flow technology? Our study with real data indicates that a dynamic merge yields a 24% increase in performance, but only for symmetric or nearly symmetric flows. This performance advantage decreases as the flows become less symmetric, approaching the throughput of traditional Stop-and-Go merges. With a cost premium of approximately 10% for a continuous merge, due to the additional technical components (belt conveyor, adjustable drives, software, etc.), this restricts their economical use.
Abstract:
Exposure Fusion and other HDR techniques generate well-exposed images from a bracketed image sequence while reproducing a large dynamic range that far exceeds that of a single exposure. Common to all these techniques is the problem that even the smallest movements in the captured images generate artefacts (ghosting) that dramatically affect the quality of the final image. This limits the use of HDR and Exposure Fusion techniques, because common scenes of interest are usually dynamic. We present a method that adapts Exposure Fusion, as well as standard HDR techniques, to allow for dynamic scenes without introducing artefacts. Our method detects clusters of moving pixels within a bracketed exposure sequence using simple binary operations. We show that the proposed technique is able to deal with a large amount of movement in the scene and with different movement configurations. The result is a ghost-free and highly detailed exposure-fused image obtained at a low computational cost.
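As a rough illustration of the kind of binary motion detection described above (illustrative assumptions throughout: global brightness alignment by median ratio and a fixed threshold; the paper's cluster detection and reference selection are not reproduced), the sketch below flags pixels whose values disagree across the brightness-aligned exposures and cleans the resulting mask with simple binary morphology.

```python
import numpy as np
from scipy import ndimage

def motion_mask(images, thresh=0.1):
    """Binary mask of moving pixels in a bracketed sequence (rough sketch).

    images -- list of grayscale exposures as float arrays in [0, 1]
    thresh -- allowed per-pixel spread after brightness alignment
    """
    ref = images[len(images) // 2]
    aligned = []
    for img in images:
        scale = np.median(ref) / max(np.median(img), 1e-6)  # crude exposure alignment
        aligned.append(np.clip(img * scale, 0.0, 1.0))
    stack = np.stack(aligned)
    spread = stack.max(axis=0) - stack.min(axis=0)   # per-pixel disagreement
    mask = spread > thresh                           # candidate moving pixels
    # Binary clean-up: drop isolated pixels, then grow the remaining clusters
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    mask = ndimage.binary_dilation(mask, iterations=2)
    return mask
```

The mask can then be used to restrict the fusion weights in the flagged regions to a single reference exposure, which is the general strategy for avoiding ghosting.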
Abstract:
BACKGROUND Partner notification is essential to the comprehensive case management of sexually transmitted infections. Systematic reviews and mathematical modelling can be used to synthesise information about the effects of new interventions to enhance the outcomes of partner notification. OBJECTIVE To study the effectiveness and cost-effectiveness of traditional and new partner notification technologies for curable sexually transmitted infections (STIs). DESIGN Secondary data analysis of clinical audit data; systematic reviews of randomised controlled trials (MEDLINE, EMBASE and Cochrane Central Register of Controlled Trials) published from 1 January 1966 to 31 August 2012 and of studies of health-related quality of life (HRQL) [MEDLINE, EMBASE, ISI Web of Knowledge, NHS Economic Evaluation Database (NHS EED), Database of Abstracts of Reviews of Effects (DARE) and Health Technology Assessment (HTA)] published from 1 January 1980 to 31 December 2011; static models of clinical effectiveness and cost-effectiveness; and dynamic modelling studies to improve parameter estimation and examine effectiveness. SETTING General population and genitourinary medicine clinic attenders. PARTICIPANTS Heterosexual women and men. INTERVENTIONS Traditional partner notification by patient or provider referral, and new partner notification by expedited partner therapy (EPT) or its UK equivalent, accelerated partner therapy (APT). MAIN OUTCOME MEASURES Population prevalence; index case reinfection; and partners treated per index case. RESULTS Expedited partner therapy reduced reinfection in index cases with curable STIs more than simple patient referral [risk ratio (RR) 0.71; 95% confidence interval (CI) 0.56 to 0.89]. There are no randomised trials of APT. The median number of partners treated for chlamydia per index case in UK clinics was 0.60. The number of partners needed to treat to interrupt transmission of chlamydia was lower for casual than for regular partners. In dynamic model simulations, > 10% of partners are chlamydia positive with look-back periods of up to 18 months. In the presence of a chlamydia screening programme that reduces population prevalence, treatment of current partners achieves most of the additional reduction in prevalence attributable to partner notification. Dynamic model simulations show that cotesting and treatment for chlamydia and gonorrhoea reduce the prevalence of both STIs. APT has a limited additional effect on prevalence but reduces the rate of index case reinfection. Published quality-adjusted life-year (QALY) weights were of insufficient quality to be used in a cost-effectiveness study of partner notification in this project. Using an intermediate outcome of cost per infection diagnosed, doubling the efficacy of partner notification from 0.4 to 0.8 partners treated per index case was more cost-effective than increasing chlamydia screening coverage. CONCLUSIONS There is evidence to support the improved clinical effectiveness of EPT in reducing index case reinfection. In a general heterosexual population, partner notification identifies new infected cases but the impact on chlamydia prevalence is limited. Partner notification of casual partners might have a greater impact than notification of regular partners in genitourinary clinic populations.
Recommendations for future research are (1) to conduct randomised controlled trials using biological outcomes of the effectiveness of APT and of methods to increase testing for human immunodeficiency virus (HIV) and STIs after APT; (2) collection of HRQL data should be a priority to determine QALYs associated with the sequelae of curable STIs; and (3) standardised parameter sets for curable STIs should be developed for mathematical models of STI transmission that are used for policy-making. FUNDING The National Institute for Health Research Health Technology Assessment programme.
Abstract:
Cost-efficient operation while satisfying performance and availability guarantees in Service Level Agreements (SLAs) is a challenge for Cloud Computing, as these are potentially conflicting objectives. We present a framework for SLA management based on multi-objective optimization. The framework features a forecasting model for determining the best virtual machine-to-host allocation given the need to minimize SLA violations, energy consumption and resource wasting. A comprehensive SLA management solution is proposed that uses event processing for monitoring and enables dynamic provisioning of virtual machines onto the physical infrastructure. We validated our implementation against several standard heuristics and were able to show that our approach is significantly better.
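As a rough illustration of a multi-objective placement decision of the kind described above (the weights, the scoring formula and the host attributes are assumptions made for the sketch, not the paper's optimizer), the following greedy rule assigns one virtual machine to the host with the lowest weighted penalty over SLA risk, energy and wasted capacity.

```python
def place_vm(vm_demand, hosts, w_sla=1.0, w_energy=0.5, w_waste=0.2):
    """Greedy single-VM placement minimizing a weighted penalty (sketch).

    vm_demand -- CPU share requested by the VM (fraction of one host)
    hosts     -- list of dicts: {"name": str, "load": float, "idle_power": float}
    """
    best_host, best_score = None, float("inf")
    for h in hosts:
        new_load = h["load"] + vm_demand
        if new_load > 1.0:
            continue                                   # would overload this host
        sla_risk = max(0.0, new_load - 0.8)            # penalize near-saturation
        energy = h["idle_power"] if h["load"] == 0 else 0.0  # cost of waking a host
        waste = 1.0 - new_load                         # unused capacity left behind
        score = w_sla * sla_risk + w_energy * energy + w_waste * waste
        if score < best_score:
            best_host, best_score = h, score
    if best_host is not None:
        best_host["load"] += vm_demand                 # commit the allocation
    return best_host
```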
Abstract:
Received signal strength-based localization systems usually rely on a calibration process that aims at characterizing the propagation channel. However, owing to changing environmental dynamics, the behavior of the channel may change over time, and recalibration is therefore necessary to maintain the positioning accuracy. This paper proposes a dynamic calibration method to initially calibrate and subsequently update the parameters of the propagation channel model using a Least Mean Squares approach. The method assumes that each anchor node in the localization infrastructure is characterized by its own propagation channel model. In practice, a set of sniffers is used to collect RSS samples, which are used to automatically calibrate each channel model by iteratively minimizing the positioning error. The proposed method is validated through numerical simulation, showing that the positioning error of the mobile nodes is effectively reduced. Furthermore, the method has a very low computational cost and can therefore be used in real-time operation on resource-constrained wireless nodes.
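As a rough illustration of the LMS idea, the sketch below (a simplified variant that fits the channel model to RSS residuals rather than driving the update with the positioning error as the paper does; parameter names and the step size are illustrative) updates the two parameters of a standard log-distance path-loss model RSS(d) = P0 - 10*n*log10(d/d0) for one anchor.

```python
import numpy as np

def lms_calibrate(rss_samples, distances, p0=-40.0, n=2.0, mu=0.01, d0=1.0):
    """Iteratively refit (p0, n) of a log-distance path-loss model with LMS.

    rss_samples -- RSS values (dBm) collected by the sniffers for one anchor
    distances   -- known sniffer-to-anchor distances (m), one per sample
    p0, n       -- initial reference power at d0 and path-loss exponent
    mu          -- LMS step size
    """
    for rss, d in zip(rss_samples, distances):
        x = 10.0 * np.log10(d / d0)
        pred = p0 - n * x            # model prediction for this sample
        err = rss - pred             # instantaneous error
        # Stochastic-gradient (LMS) update of both channel parameters
        p0 += mu * err               # d(pred)/d(p0) = 1
        n -= mu * err * x            # d(pred)/d(n)  = -x
    return p0, n
```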
Abstract:
Traditional logic programming languages, such as Prolog, use a fixed left-to-right atom scheduling rule. Recent logic programming languages, however, usually provide more flexible scheduling in which computation generally proceeds left-to-right but in which some calls are dynamically "delayed" until their arguments are sufficiently instantiated to allow the call to run efficiently. Such dynamic scheduling has a significant cost. We give a framework for the global analysis of logic programming languages with dynamic scheduling and show that program analysis based on this framework supports optimizations which remove much of the overhead of dynamic scheduling.
Abstract:
Underpasses are common in modern railway lines. Wildlife corridors and drainage conduits often fall into this category of partially buried structures. Their dynamic behavior has received far less attention than that of other structures such as bridges, but their large number makes their study an interesting challenge from the viewpoint of safety and cost savings. Here, we present a complete study of a culvert, including on-site measurements and numerical modeling. The studied structure belongs to the high-speed railway line linking Segovia and Valladolid in Spain. The line was opened to traffic in 2004. On-site measurements were performed by recording the dynamic response at selected points of the structure during the passage of high-speed trains at speeds ranging between 200 and 300 km/h. The measurements provide not only reference values suitable for model fitting, but also a good insight into the main features of the dynamic behavior of this structure. Finite element techniques were used to model the dynamic behavior of the structure and its key features. Special attention is paid to vertical accelerations, the values of which should be limited to avoid track instability according to Eurocode. This study furthers our understanding of the dynamic response of railway underpasses to train loads.