967 results for Delivery vehicle


Relevance: 20.00%

Abstract:

The vehicle navigation problem studied in Bell (2009) is revisited and a time-dependent reverse Hyperstar algorithm is presented. This minimises the expected time of arrival at the destination and at all intermediate nodes, where the expectation is based on a pessimistic (or risk-averse) view of unknown link delays. It may also be regarded as a hyperpath version of the Chabini and Lan (2002) algorithm, which is itself a time-dependent A* algorithm. Links are assigned undelayed travel times and maximum delays, both of which are potentially functions of the time of arrival at the respective link. The driver seeks probabilities for link use that minimise his or her maximum exposure to delay on the approach to each node, leading to the determination of the pessimistic expected time of arrival. Since the context considered is vehicle navigation, where the driver is not making repeated trips, the probability of link use may be interpreted as a measure of link attractiveness: a link with a zero probability of use is unattractive, while a link with a probability of use equal to one has no attractive alternatives. A solution algorithm is presented and proven to solve the problem provided the node potentials are feasible and a FIFO condition applies to undelayed link travel times. The paper concludes with a numerical example.
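The hyperpath algorithm itself is beyond a short sketch, but its pessimistic single-path special case, in which every link is charged its undelayed travel time plus its maximum delay, reduces to a worst-case Dijkstra search. The network and times below are illustrative, not from the paper.

```python
import heapq

def pessimistic_shortest_time(graph, source, dest):
    """Worst-case Dijkstra: each link cost = undelayed time + max delay.

    graph: {node: [(neighbor, undelayed_time, max_delay), ...]}
    Returns the pessimistic arrival time at dest (trip starts at t=0).
    """
    best = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        t, node = heapq.heappop(heap)
        if node == dest:
            return t
        if t > best.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, travel, delay in graph.get(node, []):
            cand = t + travel + delay  # pessimistic: assume the maximum delay
            if cand < best.get(nbr, float("inf")):
                best[nbr] = cand
                heapq.heappush(heap, (cand, nbr))
    return float("inf")

# Illustrative network: A->B->D is faster undelayed, but its delay
# exposure makes A->C->D the pessimistic (risk-averse) choice.
net = {
    "A": [("B", 5, 10), ("C", 8, 1)],
    "B": [("D", 5, 10)],
    "C": [("D", 8, 1)],
}
print(pessimistic_shortest_time(net, "A", "D"))  # A->C->D: 8+1+8+1 = 18.0
```

The full algorithm instead spreads link-use probabilities over attractive alternatives; this sketch only shows the worst-case cost view that drives it.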

Relevance: 20.00%

Abstract:

This document describes a large set of benchmark problem instances for the Rich Vehicle Routing Problem. All files are supplied as a single compressed (zipped) archive containing the instances in XML format, an object-oriented model supplied as an XSD schema, documentation, and an XML parser written in Java to ease use.
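The archive's actual schema is not reproduced here; the element names below (`node`, `cx`, `cy`) are hypothetical, simply to show how the Java parser's job can be mimicked with Python's standard library against an XSD-defined instance file.

```python
import xml.etree.ElementTree as ET

# Hypothetical instance fragment; real element names come from the XSD.
INSTANCE_XML = """
<instance>
  <network>
    <node id="0" type="depot"><cx>0.0</cx><cy>0.0</cy></node>
    <node id="1" type="customer"><cx>4.0</cx><cy>3.0</cy></node>
  </network>
</instance>
"""

def parse_nodes(xml_text):
    """Return {node_id: (x, y)} for every node in an instance document."""
    root = ET.fromstring(xml_text)
    nodes = {}
    for node in root.iter("node"):
        nodes[int(node.get("id"))] = (
            float(node.findtext("cx")),
            float(node.findtext("cy")),
        )
    return nodes

print(parse_nodes(INSTANCE_XML))  # {0: (0.0, 0.0), 1: (4.0, 3.0)}
```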

Relevance: 20.00%

Abstract:

This paper explores the organisational experiences of governmental policy change and implementation on the third sector. Using a four-year longitudinal study of 13 third sector organisations (TSOs) it provides evidence based on the experiences of, and effects on, third sector organisations involved in the UK’s Work Programme in Scotland. The paper explores third sector experiences of the Work Programme during the preparation and introductory phase, as well as the effects of subsequent Work Programme implementation. By gathering evidence contemporaneously and longitudinally a unique in-depth analysis is provided of the introduction and implementation of a major new policy. The resource cost and challenges to third sector ways of working for the organisations in the Work Programme supply chain, as well as those not in the supply chain, are considered. The paper considers some of the responses adopted by the third sector to manage the opportunities and challenges presented to them through the implementation of the Work Programme. The paper also reflects on the broader context of the employability services landscape and raises questions as to whether, as a result of the manner in which the Work Programme was contracted, there is evidence of a move towards service homogenisation, challenging perceived TSO characteristics of service innovation and personalisation.

Relevance: 20.00%

Abstract:

Background: The loss of working-aged adults to HIV/AIDS has been shown to increase the costs of labor to the private sector in Africa. There is little corresponding evidence for the public sector. This study evaluated the impact of AIDS on the capacity of a government agency, the Zambia Wildlife Authority (ZAWA), to patrol Zambia’s national parks. Methods: Data were collected from ZAWA on workforce characteristics, recent mortality, costs, and the number of days spent on patrol between 2003 and 2005 by a sample of 76 current patrol officers (reference subjects) and 11 patrol officers who died of AIDS or suspected AIDS (index subjects). An estimate was made of the impact of AIDS on service delivery capacity and labor costs and the potential net benefits of providing treatment. Results: Reference subjects spent an average of 197.4 days on patrol per year. After adjusting for age, years of service, and worksite, index subjects spent 62.8 days on patrol in their last year of service (68% decrease, p<0.0001), 96.8 days on patrol in their second to last year of service (51% decrease, p<0.0001), and 123.7 days on patrol in their third to last year of service (37% decrease, p<0.0001). For each employee who died, ZAWA lost an additional 111 person-days for management, funeral attendance, vacancy, and recruitment and training of a replacement, resulting in a total productivity loss per death of 2.0 person-years. Each AIDS-related death also imposed budgetary costs for care, benefits, recruitment, and training equivalent to 3.3 years’ annual compensation. In 2005, AIDS reduced service delivery capacity by 6.2% and increased labor costs by 9.7%. If antiretroviral therapy could be provided for $500/patient/year, net savings to ZAWA would approach $285,000/year. Conclusion: AIDS is constraining ZAWA’s ability to protect Zambia’s wildlife and parks. Impacts on this government agency are substantially larger than have been observed in the private sector. 
Provision of ART would result in net budgetary savings to ZAWA and greatly increase its service delivery capacity.
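The reported percentage decreases follow directly from the patrol-day figures; a quick check against the reference-group mean of 197.4 days/year reproduces the 68%, 51% and 37% decreases:

```python
reference_days = 197.4  # mean patrol days/year, reference subjects

# Adjusted patrol days for index subjects, by year before death (from the study)
index_days = {"last year": 62.8, "second to last": 96.8, "third to last": 123.7}

for year, days in index_days.items():
    decrease = round(100 * (1 - days / reference_days))
    print(f"{year}: {days} days -> {decrease}% decrease")
```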

Relevance: 20.00%

Abstract:

Internet streaming applications are adversely affected by network conditions such as high packet loss rates and long delays. This paper aims at mitigating such effects by leveraging the availability of client-side caching proxies. We present a novel caching architecture (and associated cache management algorithms) that turns edge caches into accelerators of streaming media delivery. A salient feature of our caching algorithms is that they allow partial caching of streaming media objects and joint delivery of content from caches and origin servers. The caching algorithms we propose are both network-aware and stream-aware; they take into account the popularity of streaming media objects, their bit-rate requirements, and the available bandwidth between clients and servers. Using realistic models of Internet bandwidth (derived from proxy cache logs and measured over real Internet paths), we have conducted extensive simulations to evaluate the performance of various cache management alternatives. Our experiments demonstrate that network-aware caching algorithms can significantly reduce service delay and improve overall stream quality. Also, our experiments show that partial caching is particularly effective when bandwidth variability is not very high.
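As a toy illustration of popularity- and bandwidth-aware partial caching (not the paper's actual algorithms): cache a prefix of each stream just long enough to mask the client-server bandwidth shortfall, and give scarce cache space to the most popular objects first. All numbers are made up.

```python
def prefix_seconds(duration, bitrate, bandwidth):
    """Prefix (seconds) to cache so playback never stalls.

    The remaining (duration - s) * bitrate bits must arrive within
    `duration` seconds at `bandwidth`, giving s >= duration * (1 - bw/br).
    """
    if bandwidth >= bitrate:
        return 0.0
    return duration * (1 - bandwidth / bitrate)

def plan_cache(objects, capacity_s):
    """Greedy: the most popular objects claim their needed prefix first.

    objects: list of (name, popularity, duration_s, bitrate, bandwidth)
    Returns {name: cached_prefix_seconds}.
    """
    plan = {}
    for name, pop, dur, br, bw in sorted(objects, key=lambda o: -o[1]):
        take = min(prefix_seconds(dur, br, bw), capacity_s)
        plan[name] = take
        capacity_s -= take
    return plan

videos = [  # (name, popularity, duration s, bitrate, client bandwidth)
    ("news", 0.6, 600, 1.0, 0.5),    # needs a 300 s prefix
    ("movie", 0.3, 600, 1.0, 0.75),  # needs a 150 s prefix
]
print(plan_cache(videos, capacity_s=400))  # {'news': 300.0, 'movie': 100.0}
```

The popular object gets its full prefix; the rest of the cache partially covers the second object, with the remainder fetched jointly from the origin.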

Relevance: 20.00%

Abstract:

To serve asynchronous requests using multicast, two categories of techniques, stream merging and periodic broadcasting, have been proposed. For sequential streaming access, where requests are uninterrupted from the beginning to the end of an object, these techniques are highly scalable: the required server bandwidth for stream merging grows logarithmically with the request arrival rate, and the required server bandwidth for periodic broadcasting varies logarithmically with the inverse of the start-up delay. However, sequential access is inappropriate for modeling partial requests and client interactivity observed in various streaming access workloads. This paper analytically and experimentally studies the scalability of multicast delivery under a non-sequential access model where requests start at random points in the object. We show that the required server bandwidth for any protocol providing immediate service grows at least as the square root of the request arrival rate, and the required server bandwidth for any protocol providing delayed service grows linearly with the inverse of the start-up delay. We also investigate the impact of limited client receiving bandwidth on scalability, and we optimize practical protocols that provide immediate service to non-sequential requests. These protocols utilize limited client receiving bandwidth, and they are near-optimal in that the required server bandwidth is very close to its lower bound.

Relevance: 20.00%

Abstract:

Overlay networks have emerged as a powerful and highly flexible method for delivering content. We study how to optimize throughput of large, multipoint transfers across richly connected overlay networks, focusing on the question of what to put in each transmitted packet. We first make the case for transmitting encoded content in this scenario, arguing for the digital fountain approach, which enables end-hosts to efficiently recover the original content of size n from any n symbols drawn from a large universe of encoded symbols. Such an approach affords reliability and a substantial degree of application-level flexibility, as it seamlessly tolerates packet loss, connection migration, and parallel transfers. However, since the sets of symbols acquired by peers are likely to overlap substantially, care must be taken to enable them to collaborate effectively. We provide a collection of useful algorithmic tools for efficient estimation, summarization, and approximate reconciliation of sets of symbols between pairs of collaborating peers, all of which keep messaging complexity and computation to a minimum. Through simulations and experiments on a prototype implementation, we demonstrate the performance benefits of our informed content delivery mechanisms and how they complement existing overlay network architectures.
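A digital-fountain-style encoder can be sketched as a random linear code over GF(2): each encoded symbol is the XOR of a random subset of source blocks, and a receiver decodes once its collected symbols reach full rank. This is a generic illustration (the LT/Raptor codes used in practice are more refined), with blocks represented as small ints and subsets as bitmasks.

```python
import random

def encode_symbol(blocks, rng):
    """One fountain symbol: XOR of a random nonempty subset of blocks."""
    mask = rng.randrange(1, 1 << len(blocks))  # subset encoded as a bitmask
    value = 0
    for i in range(len(blocks)):
        if mask >> i & 1:
            value ^= blocks[i]
    return mask, value

def decode(symbols, n):
    """Gaussian elimination over GF(2); returns blocks, or None if rank < n."""
    rows = [list(s) for s in symbols]
    pivots = {}
    for row in rows:
        for col in range(n):
            if not (row[0] >> col & 1):
                continue
            if col in pivots:  # eliminate this column, keep scanning
                row[0] ^= pivots[col][0]
                row[1] ^= pivots[col][1]
            else:
                pivots[col] = row
                break
    if len(pivots) < n:
        return None
    out = [0] * n
    for col in range(n - 1, -1, -1):  # back-substitution
        mask, val = pivots[col]
        for c in range(col + 1, n):
            if mask >> c & 1:
                val ^= out[c]
        out[col] = val
    return out

rng = random.Random(7)
source = [0x1A, 0x2B, 0x3C, 0x4D]  # four source "blocks"
collected, recovered = [], None
while recovered is None:           # pull symbols until the system has full rank
    collected.append(encode_symbol(source, rng))
    recovered = decode(collected, len(source))
print(recovered == source)  # True
```

Any full-rank set of symbols suffices, which is exactly the property that makes loss, migration, and parallel downloads from overlapping peers easy to tolerate.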

Relevance: 20.00%

Abstract:

We consider the problem of delivering popular streaming media to a large number of asynchronous clients. We propose and evaluate a cache-and-relay end-system multicast approach, whereby a client joining a multicast session caches the stream, and if needed, relays that stream to neighboring clients which may join the multicast session at some later time. This cache-and-relay approach is fully distributed, scalable, and efficient in terms of network link cost. In this paper we analytically derive bounds on the network link cost of our cache-and-relay approach, and we evaluate its performance under assumptions of limited client bandwidth and limited client cache capacity. When client bandwidth is limited, we show that although finding an optimal solution is NP-hard, a simple greedy algorithm performs surprisingly well in that it incurs network link costs that are very close to a theoretical lower bound. When client cache capacity is limited, we show that our cache-and-relay approach can still significantly reduce network link cost. We have evaluated our cache-and-relay approach using simulations over large, synthetic random networks, power-law degree networks, and small-world networks, as well as over large real router-level Internet maps.
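A toy version of the cache-and-relay idea (not the paper's exact greedy algorithm): clients arrive asynchronously, and a newcomer is served from the most recent earlier client whose cached prefix still covers the arrival gap; otherwise it opens a fresh stream from the origin. The arrival times and window are illustrative.

```python
def origin_streams(arrival_times, cache_window):
    """Count streams the origin must serve under greedy cache-and-relay.

    A client can relay to a later client only if the later arrival falls
    within `cache_window` seconds (the prefix the earlier client caches).
    """
    streams = 0
    prev = None
    for t in sorted(arrival_times):
        if prev is None or t - prev > cache_window:
            streams += 1  # no neighbor's cache reaches back far enough
        # else: served by the previous client's cached stream (relay chain)
        prev = t
    return streams

arrivals = [0, 3, 5, 20, 22, 60]
print(origin_streams(arrivals, cache_window=10))  # 3 origin streams
```

Without relaying, all six clients would hit the origin; with a 10-second cache window, only the three "session leaders" do.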

Relevance: 20.00%

Abstract:

The advent of virtualization and cloud computing technologies necessitates the development of effective mechanisms for the estimation and reservation of resources needed by content providers to deliver large numbers of video-on-demand (VOD) streams through the cloud. Unfortunately, capacity planning for the QoS-constrained delivery of a large number of VOD streams is inherently difficult as VBR encoding schemes exhibit significant bandwidth variability. In this paper, we present a novel resource management scheme to make such allocation decisions using a mixture of per-stream reservations and an aggregate reservation, shared across all streams to accommodate peak demands. The shared reservation provides capacity slack that enables statistical multiplexing of peak rates, while assuring analytically bounded frame-drop probabilities, which can be adjusted by trading off buffer space (and consequently delay) and bandwidth. Our two-tiered bandwidth allocation scheme enables the delivery of any set of streams with less bandwidth (or equivalently with higher link utilization) than state-of-the-art deterministic smoothing approaches. The algorithm underlying our proposed framework uses three per-stream parameters and is linear in the number of servers, making it particularly well suited for use in an on-line setting. We present results from extensive trace-driven simulations, which confirm the efficiency of our scheme especially for small buffer sizes and delay bounds, and which underscore the significant realizable bandwidth savings, typically yielding losses that are an order of magnitude or more below our analytically derived bounds.
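The two-tier idea can be illustrated numerically (illustrative rates, not the paper's scheme or traces): reserve each stream's mean rate individually, plus one shared slack sized for the number of streams likely to peak simultaneously, instead of reserving every stream's peak rate.

```python
import math

def deterministic_reservation(streams):
    """Per-stream peak-rate reservation (deterministic-smoothing worst case)."""
    return sum(peak for _, peak in streams)

def two_tier_reservation(streams, peak_prob, overbook_sigmas=3):
    """Per-stream mean + a shared slack for statistically multiplexed peaks.

    Sizes the shared pool for the binomial mean plus a few standard
    deviations of the number of streams peaking at the same instant.
    """
    n = len(streams)
    mean_total = sum(mean for mean, _ in streams)
    burst = max(peak - mean for mean, peak in streams)
    expected_peaks = n * peak_prob
    std_peaks = math.sqrt(n * peak_prob * (1 - peak_prob))
    shared = (expected_peaks + overbook_sigmas * std_peaks) * burst
    return mean_total + shared

# 100 identical VBR streams: mean 2 Mbps, peak 8 Mbps, peaking 5% of the time
streams = [(2.0, 8.0)] * 100
det = deterministic_reservation(streams)            # 800 Mbps
stat = two_tier_reservation(streams, peak_prob=0.05)
print(det, round(stat, 1))  # 800.0 269.2
```

The aggregate reservation is roughly a third of the deterministic one here; the price is a small, tunable chance that simultaneous peaks exceed the slack, which is where the paper's bounded frame-drop probabilities come in.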

Relevance: 20.00%

Abstract:

We propose a new technique for efficiently delivering popular content from information repositories with bounded file caches. Our strategy relies on the use of fast erasure codes (a.k.a. forward error correcting codes) to generate encodings of popular files, of which only a small sliding window is cached at any time instant, even to satisfy an unbounded number of asynchronous requests for the file. Our approach capitalizes on concurrency to maximize sharing of state across different request threads while minimizing cache memory utilization. Additional reduction in resource requirements arises from providing for a lightweight version of the network stack. In this paper, we describe the design and implementation of our Cyclone server as a Linux kernel subsystem.

Relevance: 20.00%

Abstract:

The objective of this thesis was to improve the dissolution rate of the poorly water-soluble drug fenofibrate by processing it with a high-surface-area carrier, mesoporous silica. The properties of the resulting drug–silica composite were studied in terms of drug distribution within the silica matrix, solid state and release properties. Prior to commencing any experimental work, the properties of unprocessed mesoporous silica and fenofibrate were characterised (chapter 3); this allowed for comparison with the processed samples studied in later chapters. Fenofibrate was a highly stable, crystalline drug that did not adsorb moisture, even under long-term accelerated storage conditions. It maintained its crystallinity even after SC-CO2 processing. Its dissolution rate was limited and dependent on the characteristics of the particular in vitro media studied. Mesoporous silica had a large surface area and mesopore volume and readily picked up moisture when stored under long-term accelerated storage conditions (75% RH, 40 °C). It maintained its mesopore character after SC-CO2 processing. A variety of methods were employed to process fenofibrate with mesoporous silica, including physical mixing, the melt method, solvent impregnation and novel methods such as liquid and supercritical carbon dioxide (SC-CO2) processing (chapter 4). It was found to be important to break down the fenofibrate particulate structure to a molecular state to enable drug molecules to enter the silica mesopores. While all processing methods led to some improvement in fenofibrate release properties, the impregnation, liquid and SC-CO2 methods produced the most rapid release rates. SC-CO2 processing was further studied with a view to optimising the processing parameters to achieve the highest drug-loading efficiency possible (chapter 5). It was found that SC-CO2 processing pressure had a bearing on drug-loading efficiency. However, neither pressure, duration nor depressurisation rate affected drug solid state or release properties. The amount of drug that could be successfully loaded onto the mesoporous silica was also investigated at different ratios of drug mass to silica surface area under constant SC-CO2 conditions; as the drug–silica ratio increased, the drug-loading efficiency decreased, while there was no effect on drug solid state or release properties. The influence of the number of drug-loading steps was investigated (chapter 6) with a view to increasing the drug-loading efficiency. This multiple-step approach did not yield an increase in drug-loading efficiency compared to the single-step approach. It was also an objective in this chapter to understand how much drug could be loaded into the silica mesopores; a method based on the known volume of the mesopores and the true density of the drug was investigated. However, this approach had serious repercussions for the subsequent solid-state nature of the drug and its release performance: there was significant drug crystallinity and reduced release extent. The impact of in vitro release media on fenofibrate release was also studied (chapter 6). Here it was seen that media containing HCl led to reduced drug release over time compared to equivalent media not containing HCl. The key findings of this thesis are discussed in chapter 7 and include: 1. The drug–silica processing method strongly influenced drug distribution within the silica matrix, drug solid state and release. 2. The silica surface area and mesopore volume also influenced how much drug could be loaded. It was shown that SC-CO2 processing variables such as processing pressure (13.79–41.37 MPa), duration (4–24 h) and depressurisation rate (rapid or controlled) did not influence the drug distribution within the SBA-15 matrix, drug solid state form or release.
Possible avenues of research to be considered going forward include the development and application of high resolution imaging techniques to visualise drug molecules within the silica mesopores. Also, the issues surrounding SBA-15 usage in a pharmaceutical manufacturing environment should be addressed.

Relevance: 20.00%

Abstract:

Depression is among the leading causes of disability worldwide. Currently available antidepressant drugs have unsatisfactory efficacy, with up to 60% of depressed patients failing to respond adequately to treatment. Emerging evidence has highlighted a potential role for the efflux transporter P-glycoprotein (P-gp), expressed at the blood-brain barrier (BBB), in the aetiology of treatment-resistant depression. In this thesis, the potential of P-gp inhibition as a strategy to enhance the brain distribution and pharmacodynamic effects of antidepressant drugs was investigated. Pharmacokinetic studies demonstrated that administration of the P-gp inhibitors verapamil or cyclosporin A (CsA) enhanced the BBB transport of the antidepressants imipramine and escitalopram in vivo. Furthermore, both imipramine and escitalopram were identified as transported substrates of human P-gp in vitro. Contrastingly, human P-gp exerted no effect on the transport of four other antidepressants (amitriptyline, duloxetine, fluoxetine and mirtazapine) in vitro. Pharmacodynamic studies revealed that pre-treatment with verapamil augmented the behavioural effects of escitalopram in the tail suspension test (TST) of antidepressant-like activity in mice. Moreover, pre-treatment with CsA exacerbated the behavioural manifestation of an escitalopram-induced mouse model of serotonin syndrome, a serious adverse reaction associated with serotonergic drugs. This finding highlights the potential for unwanted side-effects which may occur due to increasing brain levels of antidepressants by P-gp inhibition, although further studies are needed to fully elucidate the mechanism(s) at play. Taken together, the research outlined in this thesis indicates that P-gp may restrict brain concentrations of escitalopram and imipramine in patients. Moreover, we show that increasing the brain distribution of an antidepressant by P-gp inhibition can result in an augmentation of antidepressant-like activity in vivo. 
These findings raise the possibility that P-gp inhibition may represent a potentially beneficial strategy to augment antidepressant treatment in clinical practice. Further studies are now warranted to evaluate the safety and efficacy of this approach.

Relevance: 20.00%

Abstract:

Huntington’s Disease (HD) is a rare autosomal dominant neurodegenerative disease caused by the expression of a mutant Huntingtin (muHTT) protein. Preventing the expression of muHTT by harnessing the specificity of the RNA interference (RNAi) pathway is therefore a key research avenue for developing novel therapies for HD. However, the biggest obstacle to the RNAi approach is the delivery of short interfering RNAs (siRNAs) to neurons, which are notoriously difficult to transfect. Indeed, despite the great advances in the field of nanotechnology, there remains a great need to develop more effective and less toxic carriers for siRNA delivery to the Central Nervous System (CNS). Thus, the aim of this thesis was to investigate the utility of modified amphiphilic β-cyclodextrins (CDs), oligosaccharide-based molecules, as non-viral vectors for siRNA delivery for HD. Modified CDs were able to bind and complex siRNAs, forming nanoparticles capable of delivering siRNAs to ST14A-HTT120Q cells and to human HD fibroblasts, and reducing the expression of the HTT gene in these in vitro models of HD. Moreover, direct administration of CD.siRNA nanoparticles into the R6/2 mouse brain resulted in significant HTT gene expression knockdown and selective alleviation of rotarod motor deficits in this mouse model of HD. In contrast to widely used transfection reagents, CD.siRNA nanoparticles induced only limited cytotoxic and neuroinflammatory responses in multiple brain-derived cell lines, and also in vivo after single direct injections into the mouse brain. In addition, we describe a PEGylation-based formulation approach to further stabilise CD.siRNA nanoparticles and progress towards a systemic delivery nanosystem. The resulting PEGylated CD.siRNA nanoparticles showed increased stability in physiological salt conditions and, to some extent, reduced protein-induced aggregation.
Taken together, the work outlined in this thesis identifies modified CDs as effective, safe and versatile siRNA delivery systems that hold great potential for the treatment of CNS disorders, such as HD.

Relevance: 20.00%

Abstract:

Structural Health Monitoring (SHM) is an integral part of infrastructure maintenance and management systems for socio-economic, safety and security reasons. The behaviour of a structure under vibration depends on its characteristics, and a change in those characteristics may indicate a change in system behaviour due to the presence of damage within. Consistent, output-signal-guided, system-dependent markers would therefore be a convenient tool for online monitoring, maintenance and rehabilitation strategies, and optimised decision-making policies, as required by engineers, owners, managers and users from both safety and serviceability perspectives. SHM has a very significant advantage over traditional investigations, where tangible and intangible costs of a very high degree are often incurred due to the disruption of service. Additionally, SHM through bridge-vehicle interaction opens up opportunities for continuous tracking of the condition of the structure. Research in this area is still at an initial stage and is extremely promising. This PhD focuses on using the bridge-vehicle interaction response for SHM of damaged or deteriorating bridges, to monitor or assess them under operating conditions. In the present study, a number of damage detection markers have been investigated and proposed in order to identify the existence, location and extent of an open crack in the structure. The theoretical and experimental investigation has been conducted on single-degree-of-freedom linear systems and simply supported beams. The novel Delay Vector Variance (DVV) methodology has been employed for characterisation of structural behaviour by time-domain response analysis. The analysis of responses of actual bridges using the DVV method has also been employed for the first time in this kind of investigation.
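DVV itself compares the target variances of nearest-neighbour delay vectors against surrogate data; the embedding step it builds on, shown here as a hedged sketch with a made-up signal, is plain time-delay reconstruction of a measured response.

```python
def delay_vectors(signal, dim, tau=1):
    """Time-delay embedding: sliding windows of `dim` samples at lag `tau`.

    Each delay vector is paired with a 'target' (the sample that follows
    it), which is what DVV's local-predictability statistics operate on.
    """
    vectors, targets = [], []
    span = (dim - 1) * tau
    for i in range(len(signal) - span - 1):
        vectors.append([signal[i + j * tau] for j in range(dim)])
        targets.append(signal[i + span + 1])
    return vectors, targets

# Toy response samples (illustrative, not bridge data)
x = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0]
vecs, tgts = delay_vectors(x, dim=3)
print(vecs[0], tgts[0])  # [0.0, 0.5, 1.0] 0.5
```

DVV then measures how well neighbourhoods in this embedded space predict their targets, so a crack-induced change in the response alters the resulting variance curves.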

Relevance: 20.00%

Abstract:

This thesis is concerned with inductive charging of electric vehicle batteries. Rectified power from the 50/60 Hz utility feeds a dc-ac converter which delivers high-frequency ac power to the electric vehicle inductive coupling inlet. The inlet configuration has been defined by the Society of Automotive Engineers in Recommended Practice J-1773. This thesis studies converter topologies related to the series resonant converter. When coupled to the vehicle inlet, the frequency-controlled series-resonant converter results in a capacitively-filtered series-parallel LCLC (SP-LCLC) resonant converter topology with zero voltage switching and many other desirable features. A novel time-domain transformation analysis, termed Modal Analysis, is developed, using a state variable transformation, to analyze and characterize this multi-resonant fourth-order converter. Next, Fundamental Mode Approximation (FMA) Analysis, based on a voltage-source model of the load, and its novel extension, Rectifier-Compensated FMA (RCFMA) Analysis, are developed and applied to the SP-LCLC converter. The RCFMA Analysis is a simpler and more intuitive analysis than the Modal Analysis, and provides a relatively accurate closed-form solution for the converter behavior. Phase control of the SP-LCLC converter is investigated as a control option, with FMA and RCFMA Analyses used for detailed characterization. The analyses identify areas of operation, also validated experimentally, where it is advantageous to phase-control the converter. A novel hybrid control scheme is proposed which integrates frequency and phase control and achieves a reduced operating frequency range and improved partial-load efficiency. The phase-controlled SP-LCLC converter can also be configured with a parallel load and is an excellent option for the application. The resulting topology implements soft-switching over the entire load range and has high full-load and partial-load efficiencies.
RCFMA Analysis is used to analyze and characterize the new converter topology, and good correlation is shown with experimental results. Finally, a novel single-stage power-factor-corrected ac-dc converter is introduced, which uses the current-source characteristic of the SP-LCLC topology to provide power factor correction over a wide output power range from zero to full load. This converter exhibits all the advantageous characteristics of its dc-dc counterpart, with a reduced parts count and cost. Simulation and experimental results verify the operation of the new converter.
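For orientation, the characteristic quantities a series-resonant stage is designed around follow from the familiar relations f0 = 1/(2π√(LC)) and Z0 = √(L/C); the component values below are illustrative, not taken from the thesis.

```python
import math

def resonant_frequency_hz(L_henry, C_farad):
    """Undamped series-resonant frequency: f0 = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L_henry * C_farad))

def characteristic_impedance(L_henry, C_farad):
    """Z0 = sqrt(L/C), which sets tank current for a given drive voltage."""
    return math.sqrt(L_henry / C_farad)

# Illustrative tank: 24 uH resonant inductance with a 100 nF capacitor
f0 = resonant_frequency_hz(24e-6, 100e-9)
z0 = characteristic_impedance(24e-6, 100e-9)
print(round(f0 / 1e3, 1), "kHz;", round(z0, 1), "ohms")  # 102.7 kHz; 15.5 ohms
```

Frequency control of such a converter works by moving the switching frequency relative to f0, which is why the hybrid frequency/phase scheme described above can trade operating-frequency range against control authority.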