864 results for stream processing crowdsensing scheduling traffic analysis


Relevance: 40.00%

Abstract:

It has been well documented that avoidable traffic accidents occur when motorists miss or ignore traffic signs. With driver attention increasingly diverted by distractions such as cell phone conversations, missed traffic signs have become more prevalent. Poor weather and other adverse driving conditions also sometimes prevent motorists from staying alert and seeing every traffic sign on the road. Moreover, most cars do not have any form of traffic-sign assistance. Given heavy traffic and the proliferation of traffic signs on the roads, a system is needed that helps the driver avoid missing traffic signs and thereby reduces the probability of an accident. Since visual information is critical for driving, processed video signals from cameras were chosen to assist drivers; such inexpensive cameras can be easily mounted on the automobile. The objective of the present investigation and system development is to recognize traffic signs electronically and alert drivers. For the case study and system development, five important and critical traffic signs were selected: STOP, DO NOT ENTER, NO RIGHT TURN, NO LEFT TURN, and YIELD. The system was evaluated by processing still pictures taken on public roads, and the recognition results were presented in an analysis table indicating correct and false identifications. The system reached an acceptable recognition rate of 80% for all five traffic signs, with a processing time of about three seconds per image. The capabilities of MATLAB, VLSI design platforms, and coding were used to generate a visual warning that complements the visual driver support system with a Field Programmable Gate Array (FPGA) on a XUP Virtex-II Pro Development System.
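
The dissertation's pipeline is MATLAB/FPGA-based; as a rough illustration of the kind of processing involved, here is a minimal Python/OpenCV sketch of color-and-shape candidate detection for a STOP sign. The HSV thresholds, minimum contour area, and test image name are all assumptions, not values from the thesis.

```python
# Minimal sketch (not the thesis's MATLAB/FPGA pipeline): color-threshold
# detection of red, roughly octagonal STOP-sign candidates in a still image.
import cv2
import numpy as np

def find_stop_sign_candidates(bgr_image: np.ndarray) -> list:
    """Return contours whose color and shape resemble a STOP sign."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so two hue ranges are combined.
    mask = cv2.inRange(hsv, (0, 100, 80), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 100, 80), (180, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        if cv2.contourArea(c) < 500:          # ignore small specks
            continue
        poly = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if 7 <= len(poly) <= 9:               # roughly octagonal outline
            candidates.append(poly)
    return candidates

if __name__ == "__main__":
    image = cv2.imread("road_scene.jpg")      # hypothetical test image
    print(f"{len(find_stop_sign_candidates(image))} candidate(s) found")
```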

Relevance: 40.00%

Abstract:

This research is motivated by a practical application observed at a printed circuit board (PCB) manufacturing facility. After assembly, the PCBs (or jobs) are tested in environmental stress screening (ESS) chambers (or batch processing machines) to detect early failures. Several PCBs can be tested simultaneously as long as the total size of all the PCBs in the batch does not violate the chamber capacity. PCBs from different production lines arrive dynamically to a queue in front of a set of identical ESS chambers, where they are grouped into batches for testing. Each line delivers PCBs that vary in size and require different testing (or processing) times. Once a batch is formed, its processing time is the longest processing time among the PCBs in the batch, and its ready time is given by the PCB arriving last to the batch. ESS chambers are expensive and constitute a bottleneck; consequently, the makespan has to be minimized. A mixed-integer formulation is proposed for the problem under study and compared to a recently published formulation. The proposed formulation is better in terms of the number of decision variables, the number of linear constraints, and run time. A procedure to compute a lower bound is also proposed; for sparse problems (i.e., when job ready times are widely dispersed), the lower bounds are close to the optimum. The problem under study is NP-hard. Consequently, five heuristics, two metaheuristics (simulated annealing (SA) and a greedy randomized adaptive search procedure (GRASP)), and a decomposition approach (column generation) are proposed, especially to solve problem instances that require prohibitively long run times when a commercial solver is used. An extensive experimental study was conducted to evaluate the different solution approaches in terms of solution quality and run time. The decomposition approach improved the lower bounds (linear relaxation solutions) of the mixed-integer formulation. At least one of the proposed heuristics outperforms the Modified Delay heuristic from the literature, and for sparse problems almost all the heuristics report a solution close to the optimum. GRASP outperforms SA at a higher computational cost. The proposed approaches are viable to implement, as their run times are very short.
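
To make the batching mechanics concrete, here is a minimal greedy sketch (my own illustration, not one of the five proposed heuristics): jobs are taken in ready-time order, packed first-fit into capacity-feasible batches, and each batch is dispatched to the earliest-free chamber; a batch runs for the longest processing time among its jobs and cannot start before its last job arrives.

```python
# Minimal greedy batching sketch for identical batch machines (illustrative
# only). Each job has a ready time r, a size s, and a processing time p.
import heapq
from dataclasses import dataclass

@dataclass
class Job:
    ready: float
    size: float
    ptime: float

def greedy_makespan(jobs, capacity, machines):
    jobs = sorted(jobs, key=lambda j: j.ready)   # take jobs in ready order
    batches, current, used = [], [], 0.0
    for j in jobs:
        if used + j.size > capacity:             # batch full: close it
            batches.append(current)
            current, used = [], 0.0
        current.append(j)
        used += j.size
    if current:
        batches.append(current)
    # Dispatch each batch to the chamber that frees up earliest.
    free = [0.0] * machines
    heapq.heapify(free)
    makespan = 0.0
    for b in batches:
        start = max(heapq.heappop(free), max(j.ready for j in b))
        finish = start + max(j.ptime for j in b)  # longest job sets the time
        heapq.heappush(free, finish)
        makespan = max(makespan, finish)
    return makespan

jobs = [Job(0, 2, 5), Job(1, 3, 4), Job(2, 4, 6), Job(5, 2, 3)]
print(greedy_makespan(jobs, capacity=6, machines=2))   # -> 11.0
```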

Relevance: 40.00%

Abstract:

Buffered crossbar switches have recently attracted considerable attention as the next generation of high-speed interconnects. They are a special type of crossbar switch with a dedicated buffer at each crosspoint of the crossbar. They demonstrate unique advantages over traditional unbuffered crossbar switches, such as high throughput, low latency, and asynchronous packet scheduling. However, since crosspoint buffers are expensive on-chip memories, it is desirable that each crosspoint have only a small buffer. This dissertation proposes a series of practical algorithms and techniques for efficient packet scheduling in buffered crossbar switches. To reduce the hardware cost of such switches and make them scalable, we considered partially buffered crossbars, whose crosspoint buffers can be of arbitrarily small size. Firstly, we introduced a hybrid scheme called the Packet-mode Asynchronous Scheduling Algorithm (PASA) to schedule best-effort traffic. PASA combines the features of both distributed and centralized scheduling algorithms and can directly handle variable-length packets without Segmentation And Reassembly (SAR). We showed by theoretical analysis that it achieves 100% throughput for any admissible traffic in a crossbar with a speedup of two. Moreover, outputs in PASA have a high probability of avoiding the more time-consuming centralized scheduling process and can thus make fast scheduling decisions. Secondly, we proposed the Fair Asynchronous Segment Scheduling (FASS) algorithm to handle guaranteed-performance traffic with explicit flow rates. FASS reduces the required crosspoint buffer size by dividing packets into shorter segments before transmission. It also provides tight constant performance guarantees by emulating the ideal Generalized Processor Sharing (GPS) model. Furthermore, FASS requires no speedup for the crossbar, lowering the hardware cost and improving the switch capacity. Thirdly, we presented a bandwidth allocation scheme called Queue Length Proportional (QLP) to apply FASS to best-effort traffic. QLP dynamically obtains a feasible bandwidth allocation matrix based on queue length information, and thus helps the crossbar switch be more work-conserving. The feasibility and stability of QLP were proved for both uniform and non-uniform traffic distributions. Hence, based on the bandwidth allocation of QLP, FASS can also achieve 100% throughput for best-effort traffic in a crossbar without speedup.
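
A minimal sketch of the idea behind queue-length-proportional allocation, under my own simplification of what the abstract describes (not the dissertation's actual QLP scheme): rates are set proportional to the virtual output queue lengths, then scaled so that no input (row) or output (column) is allocated more than the full line rate, keeping the rate matrix feasible (doubly substochastic).

```python
# Illustrative queue-length-proportional rate allocation for an NxN switch.
import numpy as np

def qlp_rates(queue_lengths: np.ndarray) -> np.ndarray:
    q = queue_lengths.astype(float)
    # Largest row sum = busiest input; largest column sum = busiest output.
    scale = max(q.sum(axis=1).max(), q.sum(axis=0).max(), 1e-12)
    return q / scale           # proportional and feasible: sums <= 1

q = np.array([[8, 2, 0],
              [1, 5, 3],
              [0, 0, 9]])      # q[i][j] = VOQ length from input i to output j
print(qlp_rates(q).round(3))   # each row and column sums to at most 1
```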

Relevance: 40.00%

Abstract:

Parallel processing is prevalent in many manufacturing and service systems. Many manufactured products are built and assembled from several components fabricated in parallel lines. An example of this manufacturing system configuration is observed at a facility equipped to assemble and test web servers. Characteristics of a typical web server assembly line are multiple products, job circulation, and parallel processing. The primary objective of this research was to develop analytical approximations to predict performance measures of manufacturing systems with job failures and parallel processing. The analytical formulations extend previous queueing models used in assembly manufacturing systems in that they can handle serial and various parallel-processing configurations with multiple product classes and job circulation due to random part failures. In addition, appropriate correction terms, obtained via regression analysis, were added to the approximations in order to minimize the error between the analytical approximations and the simulation models. Markovian and general manufacturing systems with multiple product classes, job circulation due to failures, and fork-join stations modeling parallel processing were studied. In both the Markovian and the general case, the approximations without correction terms performed quite well for one- and two-product problem instances; however, the flow time error increased as the number of products and the net traffic intensity increased. Therefore, correction terms for single and fork-join stations were developed via regression analysis to handle more than two products. The numerical comparisons showed that the approximations perform remarkably well when the correction factors are used. On average, the flow time error was reduced from 38.19% to 5.59% in the Markovian case and from 26.39% to 7.23% in the general case. All the equations in the analytical formulations were implemented as a set of Matlab scripts, with which operations managers of web server assembly lines, or of manufacturing or other service systems with similar characteristics, can estimate different system performance measures and make judicious decisions, especially in setting delivery due dates, capacity planning, and bottleneck mitigation.
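
As a sketch of how such an approximation might look (the coefficients a and b below are hypothetical placeholders, not the dissertation's fitted values), a single station's flow time can be written as an M/M/1 base term multiplied by a regression correction in the traffic intensity and the number of product classes:

```python
# Illustrative station flow-time approximation with a regression correction.
def station_flow_time(arrival_rate, service_rate, k_products, a=0.0, b=0.0):
    """Expected flow time = M/M/1 sojourn time times a fitted correction.

    a, b are placeholder regression coefficients; the dissertation fits
    its own correction terms per station type (single and fork-join).
    """
    rho = arrival_rate / service_rate
    assert rho < 1, "station must be stable (rho < 1)"
    base = 1.0 / (service_rate - arrival_rate)     # M/M/1 sojourn time
    correction = 1.0 + a * rho + b * (k_products - 1)
    return base * correction

# Example: rho = 0.8, three product classes, invented coefficients.
print(station_flow_time(0.8, 1.0, k_products=3, a=0.15, b=0.05))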

Relevance: 40.00%

Abstract:

We examined the impact of permafrost on dissolved organic matter (DOM) composition in the Caribou-Poker Creeks Research Watershed (CPCRW), a watershed underlain by discontinuous permafrost in interior Alaska. We analyzed long-term data from watersheds underlain by varying degrees of permafrost, sampled springs and thermokarsts, used fluorescence spectroscopy, and measured the bioavailability of dissolved organic carbon (DOC). Permafrost-driven patterns in hydrology and vegetation influenced DOM patterns in streams: the stream draining the high-permafrost watershed had higher DOC and dissolved organic nitrogen (DON) concentrations, higher DOC:DON, and greater specific ultraviolet absorbance (SUVA) than the streams draining the low- and medium-permafrost watersheds. Streams, springs, and thermokarsts exhibited a wide range of DOC and DON concentrations (1.5–37.5 mg C/L and 0.14–1.26 mg N/L, respectively), DOC:DON (7.1–42.8), and SUVA (1.5–4.7 L mg C⁻¹ m⁻¹). All sites had a high proportion of humic components, a low proportion of protein components, and a low fluorescence index value (1.3–1.4), generally consistent with terrestrially derived DOM. Principal component analysis revealed distinct groups in our fluorescence data determined by diagenetic processing and DOM source. The proportion of bioavailable DOC ranged from 2 to 35%, with the proportion of tyrosine- and tryptophan-like fluorophores in the DOM being a major predictor of DOC loss (p < 0.05, R² = 0.99). Our results indicate that the degradation of permafrost in CPCRW will result in a decrease in DOC and DON concentrations, a decline in DOC:DON, and a reduction in SUVA, possibly accompanied by …

Relevance: 40.00%

Abstract:

This dissertation presents and evaluates a methodology for scheduling medical application workloads in virtualized computing environments. Such environments are being widely adopted by providers of "cloud computing" services. In the context of provisioning resources for medical applications, they allow users to deploy applications on distributed computing resources while keeping their data secure. Furthermore, higher-level services that abstract away infrastructure-related issues can be built on top of such infrastructures; for example, a medical imaging service can allow medical professionals to process their data in the cloud, freeing them from the burden of deploying and managing these resources themselves. In this work, we focus on issues related to scheduling scientific workloads in virtualized environments. We build upon the knowledge base of traditional parallel job scheduling to address the specific case of medical applications while harnessing the benefits afforded by virtualization technology. To this end, we provide the following contributions: (1) an in-depth analysis of the execution characteristics of the target applications when run in virtualized environments; (2) a performance prediction methodology applicable to the target environment; and (3) a scheduling algorithm that harnesses application knowledge and virtualization-related benefits to provide strong scheduling performance and quality-of-service guarantees. In the process of addressing these issues for our target user base (i.e., medical professionals and researchers), we provide insight that benefits a large community of scientific application users in industry and academia. Our execution time prediction and scheduling methodologies are implemented and evaluated on a real system running popular scientific applications. We find that we are able to predict the execution time of a number of these applications with an average error of 15%. Our scheduling methodology, tested with medical image processing workloads, is compared to two baseline scheduling solutions and outperforms them in terms of both the number of jobs processed and resource utilization by 20–30%, without violating any deadlines. We conclude that our solution is a viable approach to supporting the computational needs of medical users, even if the cloud computing paradigm is not widely adopted in its current form.
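
A minimal sketch of how predicted runtimes can drive deadline-aware dispatch (my own simplification, not the dissertation's algorithm; the job names and numbers are invented): a job is admitted to the earliest-free VM only if its predicted finish time meets its deadline.

```python
# Illustrative deadline-aware dispatch using predicted execution times.
import heapq

def schedule(jobs, n_vms):
    """jobs: list of (name, predicted_runtime, deadline) tuples.
    Returns accepted jobs with (start, finish) times and rejected names."""
    free = [0.0] * n_vms                 # time at which each VM becomes idle
    heapq.heapify(free)
    accepted, rejected = [], []
    # Consider jobs in earliest-deadline-first order.
    for name, runtime, deadline in sorted(jobs, key=lambda j: j[2]):
        start = heapq.heappop(free)
        finish = start + runtime
        if finish <= deadline:
            accepted.append((name, start, finish))
            heapq.heappush(free, finish)
        else:
            rejected.append(name)        # would miss its deadline
            heapq.heappush(free, start)  # the VM stays free
    return accepted, rejected

jobs = [("mri-recon", 30, 60), ("seg-batch", 45, 50), ("dose-calc", 20, 90)]
print(schedule(jobs, n_vms=2))           # all three fit on two VMs here
```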

Relevance: 40.00%

Abstract:

With the growing commercial importance of the Internet and the development of new real-time, connection-oriented services such as IP telephony and electronic commerce, resilience is becoming a key issue in the design of IP-based networks. Two emerging technologies that can accomplish the task of efficient information transfer are Multiprotocol Label Switching (MPLS) and Differentiated Services. A main benefit of MPLS is the ability to introduce traffic-engineering concepts due to its connection-oriented nature; with MPLS it is possible to assign different paths to packets through the network. Differentiated Services divides traffic into different classes and treats them differently, especially when there is a shortage of network resources. In this thesis, a framework integrating these two technologies was proposed, and its performance in providing load balancing and improving QoS was evaluated. Simulation and analysis of this framework demonstrated that the combination of MPLS and Differentiated Services is a powerful tool for QoS provisioning in IP networks.
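
A toy sketch of the two mechanisms the framework combines (the LSP names, capacities, and class numbering are my own assumptions, not the thesis's simulation setup): flows are pinned to the least-utilized label-switched path, and packets are dequeued in strict class priority.

```python
# Illustrative MPLS-style path pinning plus DiffServ-style priority queuing.
import heapq

LSP_CAPACITY = {"lsp-a": 100.0, "lsp-b": 100.0}   # hypothetical paths, Mbps
lsp_load = {name: 0.0 for name in LSP_CAPACITY}

def assign_flow(bandwidth: float) -> str:
    """Traffic engineering: pin the flow to the least-utilized LSP."""
    lsp = min(lsp_load, key=lambda n: lsp_load[n] / LSP_CAPACITY[n])
    lsp_load[lsp] += bandwidth
    return lsp

class DiffServQueue:
    """Strict-priority dequeue: lower class number = higher priority."""
    def __init__(self):
        self._q, self._seq = [], 0
    def enqueue(self, packet, traffic_class: int):
        heapq.heappush(self._q, (traffic_class, self._seq, packet))
        self._seq += 1                    # keeps FIFO order within a class
    def dequeue(self):
        return heapq.heappop(self._q)[2]

q = DiffServQueue()
q.enqueue("best-effort pkt", 2)
q.enqueue("voip pkt", 0)
print(assign_flow(20.0), q.dequeue())     # -> lsp-a voip pkt
```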

Relevance: 40.00%

Abstract:

Pipelines are one of the safest means of transporting crude oil, but they are not spill-free. This is of concern in North America due to the large volumes of crude oil shipped by Canadian producers and the lengthy network of pipelines. Each pipeline crosses many rivers that support a wide variety of human activities and rich aquatic life. However, there is a knowledge gap concerning the risks of contamination of river beds due to oil spills. This thesis addresses that gap by focussing on the mechanisms that transport water (and contaminants) from the free surface flow to the bed sediments, and vice versa. The work focuses on gravel rivers, in which the bed sediments are sufficiently permeable that pressure gradients, caused by the interaction of the flow with topographic elements (gravel bars) or by changes in flow direction, induce exchanges of water between the free surface flow and the bed, known as hyporheic flows. The objectives of the thesis are to present a new method to visualize and quantify hyporheic flows in laboratory experiments, and to conduct a novel series of experiments on the hyporheic flow induced by a gravel bar under different free surface flows. The new method rests on injections of a solution of dye and water; it yielded accurate flow lines and reasonable estimates of the hyporheic flow velocities. The experiments were carried out in an 11 m long, 0.39 m wide, and 0.41 m deep tilting flume, using gravel with a mean particle size of 7.7 mm. Different free surface flows were imposed by changing the flume slope and flow depth. The measured hyporheic flows were turbulent, and smaller free surface flow depths resulted in stronger hyporheic flows (higher velocities and deeper dye penetration into the sediment). A significant finding is that different free surface flows (different velocities, Reynolds numbers, etc.) produce similar hyporheic flows as long as the downstream hydraulic gradients are similar. This suggests that, for a specified bar geometry, the characteristics of the hyporheic flow depend on the downstream hydraulic gradient, and only minimally, if at all, on the internal dynamics of the free surface flow.
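
A worked illustration of the finding, with my own numbers (and the caveat that Darcy's law strictly describes laminar seepage, while the measured hyporheic flows were turbulent, so this is first-order only): the seepage flux through the bed scales with the hydraulic gradient, independently of the free-surface velocity or Reynolds number.

```python
# Back-of-the-envelope Darcy flux through a gravel bed (assumed values).
K = 0.05            # hydraulic conductivity of coarse gravel, m/s (assumed)
dh = -0.02          # head drop across the bar, m (assumed)
dx = 1.0            # streamwise distance, m (assumed)

q = -K * dh / dx    # Darcy flux q = -K * dh/dx, in m/s
print(f"Darcy flux: {q * 1000:.1f} mm/s")
# Two free-surface flows with different depths and Reynolds numbers but the
# same dh/dx give the same q, consistent with the experimental finding.
```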

Relevance: 40.00%

Abstract:

Artisanal mining is a global phenomenon that poses threats to environmental health and safety. Ambiguities in the manner in which ore-processing facilities operate hinder the mining capacity of artisanal miners in Ghana. These problems are reviewed on the basis of current socio-economic conditions, health and safety, environmental impacts, and the use of rudimentary technologies, which limits miners' access to fair-trade deals. This research used an established, data-driven, geographic information system (GIS)-based approach employing spatial analysis to locate a centralized processing facility within the Wassa Amenfi-Prestea Mining Area (WAPMA) in the Western region of Ghana. A spatial analysis technique that utilizes ModelBuilder within the ArcGIS geoprocessing environment systematically and simultaneously analyzes a geographical dataset of selected criteria through suitability modeling. The spatial overlay analysis methodology and the multi-criteria decision analysis approach were selected to identify the most preferred locations for siting a processing facility. For an optimal site selection, seven major criteria were considered: proximity to settlements, water resources, artisanal mining sites, roads, railways, tectonic zones, and slopes. Site characterization and environmental considerations, incorporating identified constraints such as proximity to large-scale mines, forest reserves, and state lands, were used to select an appropriate position. The analysis was limited to criteria selected as relevant to the area under investigation. Saaty's analytic hierarchy process was utilized to derive relative importance weights for the criteria, and a weighted linear combination technique was then applied to combine the factors and determine the degree of potential site suitability. The final map output indicates the potential sites identified for the establishment of a facility centre, providing an intuitive picture of the areas suitable for consideration.
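
As a sketch of the weight-derivation step (using a hypothetical 3x3 comparison of three of the criteria, not the study's actual seven-criterion judgements): AHP weights are the normalized principal eigenvector of the pairwise comparison matrix, and the consistency ratio checks that the judgements are coherent.

```python
# Illustrative AHP weight derivation from a pairwise comparison matrix.
import numpy as np

def ahp_weights(pairwise: np.ndarray):
    vals, vecs = np.linalg.eig(pairwise)
    k = np.argmax(vals.real)                      # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                                  # normalized weights
    n = pairwise.shape[0]
    ci = (vals[k].real - n) / (n - 1)             # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]  # random index
    return w, ci / ri                             # weights, consistency ratio

# Hypothetical judgements: roads vs. water resources vs. settlements.
A = np.array([[1,   3,   5],
              [1/3, 1,   2],
              [1/5, 1/2, 1]])
w, cr = ahp_weights(A)
print(w.round(3), f"CR = {cr:.3f}")               # CR < 0.10 is acceptable
```

The resulting weights would then feed the weighted linear combination, suitability = sum_i (w_i * score_i), over the reclassified criterion layers.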

Relevance: 40.00%

Abstract:

Companies face new challenges almost every day. In order to stay competitive, it is important that they strive for continuous development and improvement. Describing a company through its processes makes it possible to get a clear overview of the entire operation, which can contribute to a well-established overall understanding of the company. This is a case study of Stort AB, a small logistics company specialized in international transportation and logistics solutions. The purpose of this study is to perform value stream mapping in order to create a more efficient production process and to propose improvements that reduce processing time. After the value stream mapping, data envelopment analysis (DEA) is used to calculate how lean Stort AB is today and how lean the company can become by implementing the proposed improvements. The results show that the production process can become more efficient by eliminating the waste caused by a poor workplace layout and by over-processing. The authors' suggested solution is to introduce standardized processes and to invest in technical instruments that automate the process and thereby reduce processing time. According to the data envelopment analysis, the business is 41 percent lean at present, could soon become 55 percent lean, and could ultimately reach a fully (100 percent) lean mode if the process is automated.
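
For concreteness, here is a minimal input-oriented CCR DEA sketch (the three-unit data are invented for illustration, not Stort AB's figures): each unit's efficiency is the optimal theta of a small linear program, with theta = 1 meaning the unit lies on the efficient frontier.

```python
# Illustrative input-oriented CCR DEA model solved as a linear program.
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, k):
    """X: inputs (n_units x n_in), Y: outputs (n_units x n_out).
    Returns theta for unit k: min theta s.t. the peer mix (lambdas) uses
    at most theta * unit k's inputs and produces at least its outputs."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                   # minimize theta
    # Inputs:  sum_j lambda_j x_ij - theta * x_ik <= 0
    A_in = np.c_[-X[k].reshape(-1, 1), X.T]
    # Outputs: -sum_j lambda_j y_rj <= -y_rk
    A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]
    A = np.vstack([A_in, A_out])
    b = np.r_[np.zeros(X.shape[1]), -Y[k]]
    res = linprog(c, A_ub=A, b_ub=b,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun                                # theta in (0, 1]

X = np.array([[4.0], [6.0], [9.0]])               # e.g. process hours
Y = np.array([[2.0], [3.0], [3.0]])               # e.g. shipments handled
print([round(dea_efficiency(X, Y, k), 2) for k in range(3)])
# -> [1.0, 1.0, 0.67]: the third unit could do its work with 67% of its input
```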

Relevance: 40.00%

Abstract:

The advancement of GPS technology has made it possible to use GPS devices not only as orientation and navigation tools but also as tools for tracking spatiotemporal information. GPS tracking data can be broadly applied in location-based services, such as analyses of the spatial distribution of the economy, transportation routing and planning, traffic management, and environmental control. Knowledge of how to process the data from a standard GPS device is therefore crucial for such further use. Previous studies have considered various data-processing issues one at a time. This paper, in contrast, aims to outline a general procedure for processing GPS tracking data. The procedure is illustrated step by step using real-world GPS data of car movements in Borlänge, in the centre of Sweden.
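
As an illustration of two typical steps in such a procedure (a simplification of my own, not the paper's exact pipeline): computing great-circle distances between consecutive fixes and discarding fixes that imply physically impossible speeds.

```python
# Illustrative GPS track cleaning: haversine distance + speed-outlier filter.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two fixes, in metres."""
    R = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def drop_speed_outliers(track, max_speed=50.0):
    """track: list of (t_seconds, lat, lon); drop fixes above max_speed m/s."""
    clean = [track[0]]
    for t, lat, lon in track[1:]:
        t0, lat0, lon0 = clean[-1]
        d = haversine_m(lat0, lon0, lat, lon)
        if d / max(t - t0, 1e-9) <= max_speed:
            clean.append((t, lat, lon))
    return clean

track = [(0, 60.485, 15.437), (10, 60.486, 15.439), (20, 61.0, 15.439)]
print(len(drop_speed_outliers(track)))  # third fix implies ~5.7 km/s: dropped
```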

Relevance: 40.00%

Abstract:

Persistent daily congestion has increased in recent years, particularly along major corridors during selected morning and evening periods, and certain segments of these roadways are often at or near capacity. A conventional predefined control strategy, however, cannot accommodate demands that change over time, making it necessary to implement the various dynamic lane management strategies discussed in this thesis: hard shoulder running, reversible HOV lanes, dynamic tolls, and variable speed limits. A mesoscopic agent-based dynamic traffic assignment (DTA) model is used to simulate the different strategies and scenarios. The analyses show that all strategies mitigate congestion in terms of average speed and average density. The largest improvements come from hard shoulder running and reversible HOV lanes, while the other two strategies provide more stable traffic. In terms of average speed and travel time, hard shoulder running is the most effective strategy for relieving traffic pressure on I-270.
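
A back-of-the-envelope illustration of why hard shoulder running helps (my own numbers and the simple Greenshields speed-density model, not the thesis's DTA results): opening the shoulder spreads the same demand over more lanes, lowering per-lane density and thereby raising speed.

```python
# Illustrative effect of an opened shoulder lane via the Greenshields model.
VF = 100.0   # free-flow speed, km/h (assumed)
KJ = 120.0   # jam density, veh/km/lane (assumed)

def greenshields_speed(demand_veh_per_km: float, lanes: int) -> float:
    k = demand_veh_per_km / lanes            # per-lane density
    return max(VF * (1 - k / KJ), 0.0)       # v = vf * (1 - k / kj)

demand = 270.0                               # veh/km over the segment
for lanes in (3, 4):                         # 4th lane = opened shoulder
    print(lanes, "lanes:", round(greenshields_speed(demand, lanes), 1), "km/h")
# 3 lanes: 25.0 km/h; 4 lanes: 43.8 km/h
```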