965 results for Approximate Model (scheme)
Abstract:
The purpose of this study was to design a preventive scheme using directional antennas to improve the performance of mobile ad hoc networks. In this dissertation, a novel Directionality based Preventive Link Maintenance (DPLM) Scheme is proposed to characterize the performance gain [JaY06a, JaY06b, JCY06] obtained by extending the life of a link. In order to maintain the link and take preventive action, the signal strength of data packets is measured. Moreover, location information or angle-of-arrival information is collected during communication and saved in a table. When the measured signal strength falls below an orientation threshold, an orientation warning is generated towards the previous-hop node. Once the orientation warning is received by the previous-hop (adjacent) node, it verifies the correctness of the warning with a few hello pings, initiates a high-quality directional link (a link above the threshold), and immediately switches to it, avoiding a link break altogether. The location information is utilized to create a directional link by orienting neighboring nodes' antennas towards each other. We call this operation an orientation handoff, which is similar to soft handoff in cellular networks.
Signal strength is the indicating factor: it represents the health of the link and helps to predict link failure. In other words, link breakage happens due to node movement, which progressively reduces the signal strength of received packets. The DPLM scheme helps ad hoc networks avoid or postpone the costly operation of route rediscovery in on-demand routing protocols by taking the above-mentioned preventive action.
This dissertation advocates close but simple collaboration between the routing, medium access control and physical layers. In order to extend the link, the Dynamic Source Routing (DSR) and IEEE 802.11 MAC protocols were modified to use the ability of directional antennas to transmit over longer distances. A directional antenna module is implemented in the OPNET simulator with two separate modes of operation: omnidirectional and directional. The antenna module has been incorporated into the wireless node model, and simulations are performed to characterize the performance improvement of mobile ad hoc networks. Extensive simulations have shown that, without noticeably affecting the behavior of the routing protocol, aggregate throughput, packet delivery ratio, end-to-end delay (latency), routing overhead, number of data packets dropped, and number of path breaks are improved considerably. We have analyzed the results in different scenarios and found that the use of directional antennas with the proposed DPLM scheme is promising for improving the performance of mobile ad hoc networks.
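A minimal, hypothetical sketch of the preventive decision described above; the threshold value, message fields and antenna interface are illustrative assumptions, not the thesis design.

```python
# Minimal, hypothetical sketch of the DPLM preventive decision; the threshold,
# message fields, and antenna interface are illustrative, not the thesis design.

ORIENTATION_THRESHOLD_DBM = -85.0   # assumed trigger level


def on_data_packet(packet, neighbor_angles, send_to_previous_hop):
    """Receiver side: store angle-of-arrival info from data packets and raise
    an orientation warning when measured signal strength drops below threshold."""
    neighbor_angles[packet["source"]] = packet["angle_of_arrival"]
    if packet["rssi_dbm"] < ORIENTATION_THRESHOLD_DBM:
        send_to_previous_hop({"type": "ORIENTATION_WARNING", "about": packet["source"]})


def on_orientation_warning(warning, neighbor_angles, hello_ping_rssi, antenna):
    """Previous-hop side: confirm the weakening link with a few hello pings,
    then perform an orientation handoff to a high-quality directional link."""
    replies = [hello_ping_rssi(warning["about"]) for _ in range(3)]
    if all(r >= ORIENTATION_THRESHOLD_DBM for r in replies):
        return                                   # warning not confirmed; keep current link
    antenna.orient_towards(neighbor_angles[warning["about"]])  # use stored angle information
    antenna.set_mode("directional")              # switch before the omnidirectional link breaks
```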
Abstract:
This paper describes the implementation of a novel mitigation approach, and subsequent adaptive management, designed to reduce the transfer of fine sediment in Glaisdale Beck, a small upland catchment in the UK. Hydro-meteorological and suspended sediment datasets were collected over a two-year period spanning the pre- and post-diversion periods in order to assess the impact of the channel reconfiguration scheme on the fluvial suspended sediment dynamics. Analysis of the river response demonstrates that the fluvial sediment system has become more restrictive, with reduced fine sediment transfer. This is characterised by a reduction in flow-weighted mean suspended sediment concentration from 77.93 mg/l prior to mitigation to 74.36 mg/l following the diversion. A Mann-Whitney U test found statistically significant differences (p < 0.001) between the pre- and post-monitoring median SSCs, whilst application of one-way analysis of covariance (ANCOVA) to the coefficients of sediment rating curves developed before and after the diversion also found statistically significant differences (p < 0.001), with both the log a and b coefficients becoming smaller following the diversion. Non-parametric analysis indicates a reduction in residuals through time (p < 0.001), with the developed LOWESS model over-predicting sediment concentrations as the channel stabilises. However, the channel is continuing to adjust to the reconfigured morphology, with evidence of a headward-propagating knickpoint which has migrated 120 m, at an exponentially decreasing rate, over the 7 years since diversion. The study demonstrates that channel reconfiguration can be effective in mitigating fine sediment flux in upland streams, but the full value of this may take many years to be realised whilst the fluvial system slowly readjusts.
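The rating-curve coefficients referred to above follow the conventional power-law form (a standard relation, not reproduced from the paper itself):

```latex
C \;=\; a\,Q^{b}
\qquad\Longleftrightarrow\qquad
\log C \;=\; \log a \;+\; b\,\log Q ,
```

where C is the suspended sediment concentration and Q the discharge; a post-diversion decrease in both log a and b therefore lowers and flattens the curve, i.e. less sediment is transferred for a given flow.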
Abstract:
It was recently shown [Phys. Rev. Lett. 110, 227201 (2013)] that the critical behavior of the random-field Ising model in three dimensions is ruled by a single universality class. This conclusion was reached only after a proper taming of the large scaling corrections of the model by applying a combined approach of various techniques coming from the zero- and positive-temperature toolboxes of statistical physics. In the present contribution we provide a detailed description of this combined scheme, explaining in detail the zero-temperature numerical scheme and developing the generalized fluctuation-dissipation formula that allowed us to compute connected and disconnected correlation functions of the model. We discuss the error evolution of our method and we illustrate the infinite-size extrapolation of several observables within phenomenological renormalization. We present an extension of the quotients method that allows us to obtain estimates of the critical exponent α of the specific heat of the model via the scaling of the bond energy, and we discuss the self-averaging properties of the system and the algorithmic aspects of the maximum-flow algorithm used.
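Schematically, the quotients method compares an observable at two sizes at the point where the correlation lengths cross (a generic statement of the method, not the paper's extended version):

```latex
\left.\frac{O(2L,\beta)}{O(L,\beta)}\right|_{\xi(2L)/\xi(L)=2}
\;=\; 2^{\,x_O/\nu}\,\bigl[1 + \mathcal{O}(L^{-\omega})\bigr],
```

where x_O is the scaling dimension of the observable O and ω controls the leading correction to scaling; applied to the bond energy, the quotient yields the exponent combination from which α is estimated.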
Abstract:
First-order transitions of systems where both lattice site occupancy and lattice spacing fluctuate, such as cluster crystals, cannot be efficiently studied by traditional simulation methods, which necessarily fix one of these two degrees of freedom. The difficulty, however, can be surmounted by the generalized [N]pT ensemble [J. Chem. Phys. 136, 214106 (2012)]. Here we show that histogram reweighting and the [N]pT ensemble can be used to study an isostructural transition between cluster crystals of different occupancy in the generalized exponential model of index 4 (GEM-4). Extending this scheme to finite-size scaling studies also allows us to accurately determine the critical-point parameters and to verify that the critical point belongs to the Ising universality class.
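Single-histogram reweighting in the pressure, written schematically (a textbook relation, not taken from the cited paper), lets data sampled at pressure p be re-used at a nearby pressure p':

```latex
\langle A \rangle_{p'} \;=\;
\frac{\sum_i A_i\, e^{-\beta\,(p'-p)\,V_i}}{\sum_i e^{-\beta\,(p'-p)\,V_i}} ,
```

where the sums run over sampled configurations i with volume V_i.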
Abstract:
The primary objective is to investigate the main factors contributing to GMS expenditure on pharmaceutical prescribing and to project this expenditure to 2026. This study is located in the pharmacoeconomic cost-containment and projections literature. The thesis has five main aims: 1. To determine the main factors contributing to GMS expenditure on pharmaceutical prescribing. 2. To develop a model to project GMS prescribing expenditure in five-year intervals to 2026, using 2006 Central Statistics Office (CSO) Census data and 2007 Health Service Executive-Primary Care Reimbursement Service (HSE-PCRS) sample data. 3. To develop a model to project GMS prescribing expenditure in five-year intervals to 2026, using 2012 HSE-PCRS population data, incorporating cost containment measures, and 2011 CSO Census data. 4. To investigate the impact of demographic factors and the pharmacology of drugs (Anatomical Therapeutic Chemical (ATC)) on GMS expenditure. 5. To explore the consequences of GMS policy changes on prescribing expenditure and behaviour between 2008 and 2014. The thesis is centered around three published articles and spans the period from the end of a booming Irish economy in 2007, through a recession from 2008 to 2013, to the beginning of a recovery in 2014. The literature identified a number of factors influencing pharmaceutical expenditure, including population growth, population aging, changes in drug utilisation and drug therapies, age, gender and location. The literature also identified the methods previously used in predictive modelling and, consequently, a Monte Carlo Simulation (MCS) model was used to simulate projected expenditures to 2026. In addition, the literature guided the use of Ordinary Least Squares (OLS) regression in determining demographic and pharmacology factors influencing prescribing expenditure. The study commences against a backdrop of growing GMS prescribing costs, which had risen from €250 million in 1998 to over €1 billion by 2007. Using a sample of 2007 HSE-PCRS prescribing data (n=192,000) and CSO population data from 2008, Conway et al. (2014) estimated that GMS prescribing expenditure could rise to €2 billion by 2026. The cogency of these findings was impacted by the global economic crisis of 2008, which resulted in a sharp contraction in the Irish economy and mounting fiscal deficits, resulting in Ireland's entry to a bailout programme. The sustainability of funding community drug schemes, such as the GMS, came under the spotlight of the EU, IMF and ECB (Troika), who set stringent targets for reducing drug costs as conditions of the bailout programme. Cost containment measures included: the introduction of income eligibility limits for GP visit cards and medical cards for those aged 70 and over, the introduction of co-payments for prescription items, and reductions in wholesale mark-up and pharmacy dispensing fees. Projections for GMS expenditure were re-evaluated using 2012 HSE-PCRS prescribing population data and CSO population data based on Census 2011. Taking into account both cost containment measures and revised population predictions, GMS expenditure is estimated to increase by 64%, from €1.1 billion in 2016 to €1.8 billion by 2026 (Conway-Lenihan and Woods, 2015). In the final paper, a cross-sectional study was carried out on the HSE-PCRS population prescribing database (n=1.63 million claimants) to investigate the impact of demographic factors and the pharmacology of the drugs on GMS prescribing expenditure.
Those aged over 75 (β = 1.195) and cardiovascular prescribing (β = 1.193) were the greatest contributors to annual GMS prescribing costs. Respiratory drugs (Montelukast) recorded the highest proportion and expenditure for GMS claimants under the age of 15. Drugs prescribed for the nervous system (Escitalopram, Olanzapine and Pregabalin) were highest for those between 16 and 64 years, while cardiovascular drugs (statins) were highest for those aged over 65. Females incur higher costs than males and are prescribed more items across the four ATC groups, except among children under 11 (Conway-Lenihan et al., 2016). This research indicates that growth in the proportion of elderly claimants and the associated levels of cardiovascular prescribing, particularly for statins, will present difficulties for Ireland in terms of cost containment. Whilst policies aimed at cost containment (co-payment charges, generic substitution, reference pricing, adjustments to GMS eligibility) can be used to curtail expenditure, health promotion programmes and educational interventions should be given equal emphasis. Policies intended to affect physicians' prescribing behaviour, including guidelines, information (about price and less expensive alternatives), feedback, and the use of budgetary restrictions, could also yield savings.
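A minimal Monte Carlo sketch of the kind of expenditure projection described above; the age bands, cost distributions and 2026 claimant numbers are placeholders, not the HSE-PCRS or CSO figures used in the thesis.

```python
# Minimal Monte Carlo sketch of a prescribing-expenditure projection.
# Age bands, per-claimant cost distributions, and 2026 claimant numbers are
# placeholders, not the HSE-PCRS or CSO figures used in the thesis.
import numpy as np

rng = np.random.default_rng(42)

age_bands = ["0-15", "16-64", "65-74", "75+"]
mean_cost = {"0-15": 150.0, "16-64": 400.0, "65-74": 1100.0, "75+": 1900.0}  # EUR per claimant per year
cost_sd = {"0-15": 60.0, "16-64": 180.0, "65-74": 450.0, "75+": 700.0}
claimants_2026 = {"0-15": 300_000, "16-64": 900_000, "65-74": 250_000, "75+": 180_000}


def simulate_total(n_draws: int = 10_000) -> np.ndarray:
    """Draw per-claimant costs for each age band and sum to a total expenditure."""
    totals = np.zeros(n_draws)
    for band in age_bands:
        per_capita = rng.normal(mean_cost[band], cost_sd[band], size=n_draws)
        totals += np.clip(per_capita, 0.0, None) * claimants_2026[band]
    return totals


draws = simulate_total()
print(f"Projected 2026 expenditure: median EUR {np.median(draws) / 1e9:.2f}bn, "
      f"95% interval EUR {np.percentile(draws, 2.5) / 1e9:.2f}-"
      f"{np.percentile(draws, 97.5) / 1e9:.2f}bn")
```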
Abstract:
Adjoint methods have proven to be an efficient way of calculating the gradient of an objective function with respect to a shape parameter for optimisation, with a computational cost nearly independent of the number of design variables [1]. The approach in this paper links the adjoint surface sensitivities (gradient of the objective function with respect to the surface movement) with the parametric design velocities (movement of the surface due to a CAD parameter perturbation) in order to compute the gradient of the objective function with respect to CAD variables.
For a successful implementation of shape optimisation strategies in practical industrial cases, the choice of design variables or parameterisation scheme used for the model to be optimised plays a vital role. Where the goal is to base the optimisation on a CAD model, the choices are to use a NURBS geometry generated from CAD modelling software, where the positions of the NURBS control points are the optimisation variables [2], or to use the feature-based CAD model with all of its construction history to preserve the design intent [3]. The main advantage of using the feature-based model is that the optimised model produced can be used directly for downstream applications, including manufacturing and process planning.
This paper presents an approach for optimisation based on the feature-based CAD model, which uses the CAD parameters defining the features in the model geometry as the design variables. In order to capture the CAD surface movement with respect to a change in a design variable, the “Parametric Design Velocity” is calculated, which is defined as the movement of the CAD model boundary in the normal direction due to a change in the parameter value.
The approach presented here for calculating the design velocities represents an advance in capability and robustness over that described by Robinson et al. [3]. The process can be easily integrated into most industrial optimisation workflows and is immune to the topology and labelling issues highlighted by other CAD-based optimisation processes. It considers every continuous (“real value”) parameter type as an optimisation variable, and it can be adapted to work with any CAD modelling software, as long as it has an API which provides access to the values of the parameters which control the model shape and allows the model geometry to be exported. To calculate the movement of the boundary, the methodology employs finite differences on the shape of the 3D CAD models before and after the parameter perturbation. The implementation procedure includes calculating the geometrical movement along a normal direction between two discrete representations of the original and perturbed geometries. Parametric design velocities can then be directly linked with adjoint surface sensitivities to extract the gradients to use in a gradient-based optimisation algorithm.
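In schematic form, the link described above amounts to a surface integral of the adjoint sensitivity weighted by the finite-difference design velocity (notation ours, not the paper's):

```latex
\frac{dJ}{d\alpha_i} \;\approx\; \int_{\Gamma} \phi\, V_{\alpha_i}\,\mathrm{d}\Gamma ,
\qquad
V_{\alpha_i} \;\approx\; \frac{\bigl(\mathbf{x}(\alpha_i+\Delta\alpha_i)-\mathbf{x}(\alpha_i)\bigr)\cdot\mathbf{n}}{\Delta\alpha_i} ,
```

where J is the objective function, Γ the CAD model boundary, φ the adjoint surface sensitivity, and V_{α_i} the design velocity of parameter α_i obtained by comparing the original and perturbed geometries.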
A flow optimisation problem is presented in which the power dissipation of the flow in an automotive air duct is to be reduced by changing the parameters of the CAD geometry created in CATIA V5. The flow sensitivities are computed with the continuous adjoint method for laminar and turbulent flow [4] and are combined with the parametric design velocities to compute the cost function gradients. A line-search algorithm is then used to update the design variables and proceed with the optimisation process.
Abstract:
Background
Increasing physical activity in the workplace can provide employee physical and mental health benefits, and employer economic benefits through reduced absenteeism and increased productivity. The workplace is an opportune setting to encourage habitual activity. However, there is limited evidence on effective behaviour change interventions that lead to maintained physical activity. This study aims to address this gap and help build the necessary evidence base for effective, and cost-effective, workplace interventions.
Methods/design
This cluster randomised control trial will recruit 776 office-based employees from public sector organisations in Belfast and Lisburn city centres, Northern Ireland. Participants will be randomly allocated by cluster to either the Intervention Group or Control Group (waiting list control). The 6-month intervention consists of rewards (retail vouchers, based on similar principles to high street loyalty cards), feedback and other evidence-based behaviour change techniques. Sensors situated in the vicinity of participating workplaces will promote and monitor minutes of physical activity undertaken by participants. Both groups will complete all outcome measures. The primary outcome is steps per day recorded using a pedometer (Yamax Digiwalker CW-701) for 7 consecutive days at baseline, 6, 12 and 18 months. Secondary outcomes include health, mental wellbeing, quality of life, work absenteeism and presenteeism, and use of healthcare resources. Process measures will assess intervention “dose”, website usage, and intervention fidelity. An economic evaluation will be conducted from the National Health Service, employer and retailer perspectives using both a cost-utility and cost-effectiveness framework. The inclusion of a discrete choice experiment will further generate values for a cost-benefit analysis. Participant focus groups will explore for whom the intervention worked and why, and interviews with retailers will elucidate their views on the sustainability of a public health focused loyalty card scheme.
Discussion
The study is designed to maximise the potential for roll-out in similar settings, by engaging the public sector and business community in designing and delivering the intervention. We have developed a sustainable business model using a ‘points’ based loyalty platform, whereby local businesses ‘sponsor’ the incentive (retail vouchers) in return for increased footfall to their business.
Abstract:
Physics-based synthesis of tanpura drones requires accurate simulation of stiff, lossy string vibrations while incorporating sustained contact with the bridge and a cotton thread. Several challenges arise from this when seeking efficient and stable algorithms for real-time sound synthesis. The approach proposed here to address these combines modal expansion of the string dynamics with strategic simplifications regarding the string-bridge and string-thread contact, resulting in an efficient and provably stable time-stepping scheme with exact modal parameters. Attention is given also to the physical characterisation of the system, including string damping behaviour, body radiation characteristics, and determination of appropriate contact parameters. Simulation results are presented exemplifying the key features of the model.
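A generic modal expansion of a stiff, lossy string, shown only to fix ideas (the paper's exact contact treatment and time-stepping scheme are not reproduced here):

```latex
y(x,t) \;=\; \sum_{n=1}^{N} q_n(t)\,\sin\!\Bigl(\tfrac{n\pi x}{L}\Bigr),
\qquad
\ddot{q}_n + 2\sigma_n\,\dot{q}_n + \omega_n^{2}\,q_n \;=\; f_n(t),
```

where ω_n and σ_n are the exact modal frequency and damping parameters and f_n(t) is the projection onto mode n of the excitation and contact (bridge and thread) forces.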
Abstract:
The purpose of the study was to explore how a public IT-services-transferor organization composed of autonomous entities can effectively develop and organize its data center cost recovery mechanisms in a fair manner. The lack of a well-defined model for charges and a cost recovery scheme could cause various problems; for example, one entity may end up subsidizing the costs of another. Transfer pricing is in the best interest of each autonomous entity in a CCA. While transfer pricing plays a pivotal role in the price setting of services and intangible assets, TCE focuses on the arrangement at the boundary between entities. TCE is concerned with the costs, autonomy, and cooperation issues of an organization; the theory considers the factors that influence intra-firm transaction costs and attempts to expose the problems involved in determining the charges or prices of transactions. This study was carried out as a single case study in a public organization. The organization intended to transfer the IT services of its own affiliated public entities and was in the process of establishing a municipal joint data center. Nine semi-structured interviews, including two pilot interviews, were conducted with the experts and managers of the case company and its affiliated entities. The purpose of these interviews was to explore the charging and pricing issues of the intra-firm transactions. In order to process and summarize the findings, this study employed qualitative techniques with multiple methods of data collection. By reviewing TCE theory and a sample of the transfer pricing literature, the study created an IT services pricing framework as a conceptual tool for illustrating the structure of transferring costs. Antecedents and consequences of the transfer price based on TCE were developed. An explanatory fair charging model was eventually developed and suggested. The findings of the study suggested that the chargeback system was an inappropriate scheme for an organization with affiliated autonomous entities. The main contribution of the study was the application of TP methodologies in the public sphere without consideration of tax issues.
Abstract:
As the formative agents of cloud droplets, aerosols play an undeniably important role in the development of clouds and precipitation. Few meteorological models have been developed or adapted to simulate aerosols and their contribution to cloud and precipitation processes. The Weather Research and Forecasting model (WRF) has recently been coupled with an atmospheric chemistry suite and is jointly referred to as WRF-Chem, allowing atmospheric chemistry and meteorology to influence each other’s evolution within a mesoscale modeling framework. Provided that the model physics are robust, this framework allows the feedbacks between aerosol chemistry, cloud physics, and dynamics to be investigated. This study focuses on the effects of aerosols on meteorology, specifically, the interaction of aerosol chemical species with microphysical processes represented within the framework of the WRF-Chem. Aerosols are represented by eight size bins using the Model for Simulating Aerosol Interactions and Chemistry (MOSAIC) sectional parameterization, which is linked to the Purdue Lin bulk microphysics scheme. The aim of this study is to examine the sensitivity of deep convective precipitation modeled by the 2D WRF-Chem to varying aerosol number concentration and aerosol type. A systematic study has been performed regarding the effects of aerosols on parameters such as total precipitation, updraft/downdraft speed, distribution of hydrometeor species, and organizational features, within idealized maritime and continental thermodynamic environments. Initial results were obtained using WRFv3.0.1, and a second series of tests was run using WRFv3.2 after several changes to the activation, autoconversion, and Lin et al. microphysics schemes added by the WRF community, as well as the implementation of prescribed vertical levels by the author. The results of the WRFv3.2 runs contrasted starkly with the WRFv3.0.1 runs. The WRFv3.0.1 runs produced a propagating system resembling a developing squall line, whereas the WRFv3.2 runs did not. The response of total precipitation, updraft/downdraft speeds, and system organization to increasing aerosol concentrations was opposite between runs with different versions of WRF. Results of the WRFv3.2 runs, however, were in better agreement, in timing and magnitude of vertical velocity and hydrometeor content, with a WRFv3.0.1 run using single-moment Lin et al. microphysics than with WRFv3.0.1 runs with chemistry. One result consistent throughout all simulations was an inhibition in warm-rain processes due to enhanced aerosol concentrations, which resulted in a delay of precipitation onset that ranged from 2-3 minutes in WRFv3.2 runs to up to 15 minutes in WRFv3.0.1 runs. This result was not observed in a previous study by Ntelekos et al. (2009) using the WRF-Chem, perhaps due to their use of coarser horizontal and vertical resolution within their experiment. The changes to microphysical processes such as activation and autoconversion from WRFv3.0.1 to WRFv3.2, along with changes in the packing of vertical levels, had more impact than the varying aerosol concentrations, even though the range of aerosol concentrations tested was greater than that observed in field studies. In order to take full advantage of the input of aerosols now offered by the chemistry module in WRF, the author recommends that a fully double-moment microphysics scheme be linked, rather than the limited double-moment Lin et al. scheme that currently exists.
With this modification, the WRF-Chem will be a powerful tool for studying aerosol-cloud interactions and allow comparison of results with other studies using more modern and complex microphysical parameterizations.
Abstract:
The next generation of vehicles will be equipped with automated Accident Warning Systems (AWSs) capable of warning neighbouring vehicles about hazards that might lead to accidents. The key enabling technology for these systems is the Vehicular Ad-hoc Network (VANET), but the dynamics of such networks make the crucial timely delivery of warning messages challenging. While most previously attempted implementations have used broadcast-based data dissemination schemes, these do not cope well as data traffic load or network density increases. This problem of sending warning messages in a timely manner is addressed in this thesis by employing a network coding technique. The proposed NETwork COded DissEmination (NETCODE) is a VANET-based AWS responsible for generating and sending warnings to the vehicles on the road. NETCODE offers an XOR-based data dissemination scheme that sends multiple warnings in a single transmission and therefore reduces the total number of transmissions required to send the same number of warnings that broadcast schemes send. Hence, it reduces contention and collisions in the network, improving the delivery time of the warnings. The first part of this research (Chapters 3 and 4) asserts that in order to build a warning system, it is necessary to ascertain the system requirements, the information to be exchanged, and the protocols best suited for communication between vehicles. Therefore, a study of these factors, along with a review of existing proposals identifying their strengths and weaknesses, is carried out. Then an analysis of existing broadcast-based warning schemes is conducted, which concludes that although this is the most straightforward approach, loading can result in an effective collapse, producing unacceptably long transmission delays. The second part of this research (Chapter 5) proposes the NETCODE design, including the main contribution of this thesis, a pair of encoding and decoding algorithms that make use of an XOR-based technique to reduce transmission overheads and thus allow warnings to be delivered in time. The final part of this research (Chapters 6--8) evaluates the performance of the proposed scheme as to how it reduces the number of transmissions in the network in response to growing data traffic load and network density, and investigates its capacity to detect potential accidents. The evaluations use a custom-built simulator to model real-world scenarios such as city areas, junctions, roundabouts, motorways and so on. The study shows that the reduction in the number of transmissions helps reduce competition in the network significantly and this allows vehicles to deliver warning messages more rapidly to their neighbours. It also examines the relative performance of NETCODE when handling both sudden event-driven and longer-term periodic messages in diverse scenarios under stress caused by increasing numbers of vehicles and transmissions per vehicle. This work confirms the thesis' primary contention that XOR-based network coding provides a potential solution on which a more efficient AWS data dissemination scheme can be built.
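An illustrative XOR coding of two warning messages, which is the basic idea exploited by the scheme (not the NETCODE algorithms themselves; message framing and neighbour bookkeeping are omitted):

```python
# Illustrative XOR coding of two warning messages; this is the underlying idea,
# not the NETCODE protocol itself (framing and neighbour bookkeeping omitted).

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two byte strings, zero-padding the shorter one."""
    n = max(len(a), len(b))
    a, b = a.ljust(n, b"\x00"), b.ljust(n, b"\x00")
    return bytes(x ^ y for x, y in zip(a, b))

# A relay holding warning w1 (wanted by vehicle B) and w2 (wanted by vehicle A)
# broadcasts one coded packet instead of two separate transmissions.
w1 = b"HAZARD: obstacle at junction 12"
w2 = b"HAZARD: braking queue on slip road"
coded = xor_bytes(w1, w2)

# Vehicle A already overheard w1, so it recovers w2 from the single coded packet.
recovered_w2 = xor_bytes(coded, w1)
assert recovered_w2 == w2
```

A relay holding two warnings destined for different neighbours can thus broadcast one coded packet instead of two, and any neighbour that already overheard one of the warnings recovers the other with a single XOR.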
Abstract:
We propose a scheme in which the masses of the heavier leptons obey seesaw-type relations. The light lepton masses, except those of the electron and the electron neutrino, are generated at the one-loop level by radiative corrections. We work in a version of the 3-3-1 electroweak model that predicts singlets (charged and neutral) of heavy leptons beyond the known ones. An extra U(1)_Omega symmetry is introduced in order to prevent the light leptons from acquiring masses at the tree level. The electron mass induces an explicit breaking of the U(1)_Omega symmetry. We also discuss the mixing matrix among the four neutrinos. The new energy scale required is not higher than a few TeV.
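For orientation, a generic seesaw-type relation of the kind referred to above (the actual mass matrices of the 3-3-1 model differ in detail):

```latex
m_{\text{light}} \;\sim\; \frac{m_D^{2}}{M_{\text{heavy}}},
\qquad M_{\text{heavy}} \gg m_D ,
```

so the known leptons stay light precisely because the new singlet leptons are heavy, here at a scale of no more than a few TeV.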
Abstract:
We study a climatologically important interaction of two of the main components of the geophysical system by adding an energy balance model for the averaged atmospheric temperature as a dynamic boundary condition to a diagnostic ocean model having an additional spatial dimension. In this work, we give deeper insight than previous papers in the literature, mainly with respect to the pioneering 1990 model by Watts and Morantine. We take into consideration the latent heat of the two-phase ocean as well as a possible delay term. Non-uniqueness for the initial boundary value problem, uniqueness under a non-degeneracy condition, and the existence of multiple stationary solutions are proved here. These multiplicity results suggest that an S-shaped bifurcation diagram should be expected to occur in this class of models generalizing previous energy balance models. The numerical method applied to the model is based on a finite volume scheme with nonlinear weighted essentially non-oscillatory (WENO) reconstruction and Runge–Kutta total variation diminishing (TVD) time integration.
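As a rough guide to the type of boundary condition involved, a generic Budyko-Sellers-type balance for the averaged atmospheric temperature T reads (schematic only; the paper's coupled system, with latent heat and a possible delay term, is more involved):

```latex
c\,\partial_t T \;=\; Q\,S(x)\bigl(1-\alpha(T)\bigr) \;-\; \bigl(A + B\,T\bigr)
\;+\; \partial_x\!\bigl(k(x)\,\partial_x T\bigr) \;+\; F_{\text{ocean}} ,
```

where the terms represent, respectively, absorbed solar radiation with temperature-dependent albedo, outgoing long-wave radiation, horizontal diffusion, and the coupling flux exchanged with the ocean model.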
Abstract:
Lithium-ion (Li-ion) batteries have received attention in recent decades because of their indisputable advantages over other types of batteries. They are used in many of the devices we need in our daily lives, such as cell phones, laptop computers, cameras, and many other electronic devices. They are also used in smart grid technology, stand-alone wind and solar systems, Hybrid Electric Vehicles (HEVs), and Plug-in Hybrid Electric Vehicles (PHEVs). Despite the rapid increase in the use of Li-ion batteries, the small number of available battery models, and the inadequate or very complex models developed by chemists, make the lack of useful models a significant issue. A battery management system (BMS) aims to optimize the use of the battery, making the whole system more reliable, durable and cost effective. Perhaps the most important function of the BMS is to provide an estimate of the State of Charge (SOC). SOC is the ratio of the available ampere-hours (Ah) in the battery to the total Ah of a fully charged battery. The Open Circuit Voltage (OCV) of a fully relaxed battery has an approximately one-to-one relationship with the SOC. Therefore, if this voltage is known, the SOC can be found. However, the relaxed OCV can only be measured when the battery is relaxed and the internal battery chemistry has reached equilibrium. This thesis focuses on Li-ion battery cell modelling and SOC estimation. In particular, the thesis introduces a simple but comprehensive model for the battery and a novel on-line, accurate and fast SOC estimation algorithm for the primary purpose of use in electric and hybrid-electric vehicles, and in microgrid systems. The thesis aims to (i) form a baseline characterization for dynamic modeling; (ii) provide a tool for use in state-of-charge estimation. The proposed modelling and SOC estimation schemes are validated through comprehensive simulation and experimental results.
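A minimal coulomb-counting sketch of the SOC definition and OCV-based re-initialisation described above; the nominal capacity and OCV-SOC table are placeholders, not the cell characterised in the thesis.

```python
# Minimal coulomb-counting SOC estimator with OCV-based re-initialisation.
# The nominal capacity and OCV-SOC table below are placeholders, not the
# cell characterised in the thesis.
import numpy as np

CAPACITY_AH = 2.5                                                  # assumed nominal capacity
ocv_volts = np.array([3.00, 3.40, 3.60, 3.75, 3.90, 4.05, 4.20])   # illustrative OCV points
soc_frac = np.array([0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0])           # corresponding SOC fractions


def soc_from_ocv(ocv: float) -> float:
    """Approximate one-to-one OCV -> SOC mapping of a fully relaxed cell."""
    return float(np.interp(ocv, ocv_volts, soc_frac))


def coulomb_count(soc0: float, currents_a, dt_s: float) -> float:
    """Integrate current (positive = discharge) to track SOC between rest periods."""
    soc = soc0
    for i_a in currents_a:
        soc -= i_a * dt_s / (CAPACITY_AH * 3600.0)
    return min(max(soc, 0.0), 1.0)


# Example: re-initialise from a relaxed OCV reading, then discharge 1 A for 30 min.
soc = soc_from_ocv(3.90)                      # ~0.7 from the table above
soc = coulomb_count(soc, [1.0] * 1800, 1.0)   # 0.5 Ah removed -> SOC falls by 0.2
print(f"Estimated SOC: {soc:.2f}")
```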
Abstract:
Classification schemes are built at a particular point in time; at inception, they reflect a worldview indicative of that time. This is their strength, but it results in potential weaknesses as worldviews change. For example, if a scheme of mathematics is not updated even though the state of the art has changed, then it is not a very useful scheme to users for the purposes of information retrieval. However, change in schemes is a good thing. Changing allows designers of schemes to update their model and serves as a responsible mediator between resources and users. But change does come at a cost. In the print world, we revise universal classification schemes, sometimes in drastic ways, and this means that over time the power of a classification scheme to collocate is compromised if we do not account for scheme change in the organization of affected physical resources. If we understand this phenomenon in the print world, we can design ameliorations for the digital world.