936 results for Thread safe parallel run-time
Abstract:
Optimization of adaptive traffic signal timing is one of the most complex problems in traffic control systems. This dissertation presents a new method that applies the parallel genetic algorithm (PGA) to optimize adaptive traffic signal control in the presence of transit signal priority (TSP). The method can optimize the phase plan, cycle length, and green splits at isolated intersections while considering the performance of both transit and general vehicles. Unlike the simple genetic algorithm (GA), the PGA can provide the better and faster solutions needed for real-time optimization of adaptive traffic signal control.

An important component of the proposed method is a microscopic delay estimation model designed specifically for optimizing adaptive traffic signals with TSP. Macroscopic delay models, such as the Highway Capacity Manual (HCM) delay model, cannot accurately capture the effect of phase combination and phase sequence in delay calculations. In addition, because the number of phases and the phase sequence of an adaptive traffic signal may vary from cycle to cycle, the phase splits cannot be optimized when the phase sequence is also a decision variable. A "flex-phase" concept was introduced in the proposed microscopic delay estimation model to overcome these limitations.

The performance of the PGA was first evaluated against the simple GA. The results show that the PGA achieved both faster convergence and lower delay under both under-saturated and over-saturated traffic conditions. A VISSIM simulation testbed was then developed to evaluate the performance of the proposed PGA-based adaptive traffic signal control with TSP. The simulation results show that the PGA-based optimizer for adaptive TSP outperformed fully actuated NEMA control in all test cases. The results also show that the PGA-based optimizer was able to produce TSP timing plans that benefit transit vehicles while minimizing the impact of TSP on general vehicles. The VISSIM testbed developed in this research provides a powerful tool for designing and evaluating different TSP strategies under both actuated and adaptive signal control.
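As an illustration of the coarse-grained parallelism this abstract refers to, here is a minimal island-model parallel GA sketch in Python for a signal-timing vector (cycle length plus green splits). The toy_delay objective, the bounds, and the migration policy are assumptions for demonstration only, not the dissertation's microscopic delay model.

```python
# Minimal island-model parallel GA sketch for signal-timing optimization.
# toy_delay(), the bounds, and the phase count are illustrative assumptions.
import random
from multiprocessing import Pool

CYCLE_MIN, CYCLE_MAX = 60.0, 120.0   # assumed cycle-length bounds (s)
N_PHASES = 4                          # assumed number of phases

def toy_delay(plan):
    # Stand-in objective: penalize uneven green splits (placeholder for the
    # dissertation's microscopic delay model)
    cycle, splits = plan[0], plan[1:]
    total = sum(splits)
    greens = [cycle * s / total for s in splits]
    return sum((g - cycle / N_PHASES) ** 2 for g in greens)

def random_plan():
    return [random.uniform(CYCLE_MIN, CYCLE_MAX)] + \
           [random.random() + 0.1 for _ in range(N_PHASES)]

def evolve_island(args):
    pop, gens = args
    for _ in range(gens):
        pop.sort(key=toy_delay)
        parents = pop[: len(pop) // 2]
        children = []
        while len(children) < len(pop) - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # arithmetic crossover
            i = random.randrange(len(child))
            child[i] *= random.uniform(0.9, 1.1)          # small mutation
            child[0] = min(max(child[0], CYCLE_MIN), CYCLE_MAX)
            children.append(child)
        pop = parents + children
    return pop

if __name__ == "__main__":
    islands = [[random_plan() for _ in range(30)] for _ in range(4)]
    with Pool(4) as pool:
        for epoch in range(5):       # evolve islands in parallel, then migrate
            islands = pool.map(evolve_island, [(isl, 10) for isl in islands])
            best = min((min(isl, key=toy_delay) for isl in islands), key=toy_delay)
            for isl in islands:
                isl[-1] = best[:]    # broadcast the best plan to every island
        print("best delay:", toy_delay(best))
```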
Abstract:
The detailed organic composition of atmospheric fine particles with an aerodynamic diameter smaller than or equal to 2.5 micrometers (PM2.5) is an integral part of the knowledge needed to fully characterize their sources and transformation in the environment. For the study presented here, samples were collected at 3-hour intervals. This high time resolution provides unique insights into the influence of short- and long-range transport phenomena and dynamic atmospheric processes. A specially designed sequential sampler was deployed at the 2002-2003 Baltimore PM Supersite to collect PM2.5 samples at a 3-hourly resolution for extended periods of consecutive days, during both the summer and winter seasons. Established solvent-extraction and GC-MS techniques were used to extract and analyze the organic compounds in 119 samples from each season. Over 100 individual compounds were quantified in each sample. For primary organics, averaging the diurnal ambient concentrations over the sampled periods revealed ambient patterns that relate to the diurnal emission patterns of major source classes. Several short-term releases of pollutants from local sources were detected, and local meteorological data were used to pinpoint possible source regions. Biogenic secondary organic compounds were detected as well, and possible mechanisms of formation were evaluated. The relationships between the observed continuous variations in the concentrations of selected organic markers and both the on-site meteorological measurements conducted in parallel with the PM2.5 sampling and the synoptic patterns of weather and wind conditions were also examined. Several one-to-two-day episodes were identified from the sequential variation in the concentrations observed for specific marker compounds and marker ratios. The influence of meteorological events on the concentrations of the organic compounds during selected episodes is discussed. During the summer, under the pervasive influence of air masses originating from the west/northwest, some organic species displayed characteristics consistent with the measured PM2.5 being strongly influenced by the aged nature of these long-traveling background parcels. During the winter, intrusions of more regional air masses originating from the south and southwest were more important.
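A minimal sketch of the diurnal-averaging step described above: group 3-hourly samples by time-of-day slot and average across all sampled days. The marker names and the synthetic series are hypothetical stand-ins, not the study's data.

```python
# Sketch: collapse a multi-day 3-hourly marker series into a mean diurnal
# profile. Marker names and values are invented for illustration.
import numpy as np
import pandas as pd

idx = pd.date_range("2002-07-01", periods=8 * 10, freq="3h")  # 10 days, 8 slots/day
rng = np.random.default_rng(3)
df = pd.DataFrame({"timestamp": idx,
                   "levoglucosan": rng.lognormal(0, 0.5, len(idx)),
                   "hopanes": rng.lognormal(-1, 0.5, len(idx))})

# Label each sample by its time-of-day slot, then average across all days
df["tod"] = df["timestamp"].dt.time
diurnal = df.groupby("tod").mean(numeric_only=True)
print(diurnal)   # 8 rows (00:00 ... 21:00): mean diurnal profile per marker
```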
Abstract:
Parallel processing is prevalent in many manufacturing and service systems. Many manufactured products are built and assembled from several components fabricated on parallel lines. An example of this manufacturing configuration is observed at a facility equipped to assemble and test web servers. Characteristics of a typical web server assembly line are multiple products, job circulation, and parallel processing. The primary objective of this research was to develop analytical approximations to predict performance measures of manufacturing systems with job failures and parallel processing. The analytical formulations extend previous queueing models used in assembly manufacturing systems in that they can handle serial lines and different configurations of parallel processing with multiple product classes, as well as job circulation due to random part failures. In addition, appropriate correction terms derived via regression analysis were added to the approximations to minimize the error gap between the analytical approximations and the simulation models. Markovian and general-type manufacturing systems were studied, with multiple product classes, job circulation due to failures, and fork-join systems to model parallel processing. In both the Markovian and general cases, the approximations without correction terms performed quite well for one- and two-product problem instances. However, the flow time error increased as the number of products and the net traffic intensity increased. Therefore, correction terms for single and fork-join stations were developed via regression analysis to handle more than two products. The numerical comparisons showed that the approximations perform remarkably well when the correction factors are used. On average, the flow time error was reduced from 38.19% to 5.59% in the Markovian case, and from 26.39% to 7.23% in the general case. All the equations in the analytical formulations were implemented as a set of Matlab scripts. Using these scripts, operations managers of web server assembly lines and of manufacturing or other service systems with similar characteristics can estimate various system performance measures and make judicious decisions, especially in setting delivery due dates, capacity planning, and bottleneck mitigation.
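To make the decomposition idea concrete, here is a toy sketch that approximates flow time through a serial line with one fork-join stage using textbook M/M/1 formulas, with job circulation folded in as an inflated arrival rate. These are standard simplifications for illustration, not the dissertation's corrected approximations.

```python
# Toy queueing decomposition: serial M/M/1 stations plus one fork-join stage.
# The harmonic-number join scaling and the rework-inflated arrival rate are
# textbook simplifications; all numbers are arbitrary examples.
from math import fsum

def mm1_sojourn(lam, mu):
    # Mean time in an M/M/1 station: 1 / (mu - lambda), requires lam < mu
    assert lam < mu, "station is unstable"
    return 1.0 / (mu - lam)

def fork_join_sojourn(lam, mu, k):
    # Crude join estimate: scale one branch's sojourn by the k-th harmonic
    # number H_k (exact for the max of k i.i.d. exponentials)
    h_k = fsum(1.0 / i for i in range(1, k + 1))
    return h_k * mm1_sojourn(lam, mu)

lam = 0.8                         # raw arrival rate (jobs per time unit)
p_fail = 0.05                     # probability a job recirculates for rework
lam_eff = lam / (1 - p_fail)      # effective rate including circulating jobs
serial_mus = [1.2, 1.5, 1.1]      # service rates of the serial stations

flow_time = fsum(mm1_sojourn(lam_eff, mu) for mu in serial_mus)
flow_time += fork_join_sojourn(lam_eff, mu=1.4, k=3)   # 3-branch fork-join
print(f"approximate flow time: {flow_time:.2f}")
```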
Abstract:
The United States has over 4 million births annually. Currently, healthy women with uncomplicated deliveries receive little to no routine postpartum support when discharged from the hospital. This is especially problematic for mothers who are first-time mothers, are poor, face language barriers, or have little to no social support after giving birth. The purpose of this randomized clinical trial was to compare maternal and infant health outcomes, and health care charges, between two groups of mothers and newborns. A control group (n = 69) received routine post-hospital discharge care. An intervention group (n = 70) received routine post-hospital discharge care plus follow-up telephone calls by advanced practice nurses (APNs) on days 3, 7, 14, 21, and 28, and at week 8. Both groups were followed for the first 8 weeks after hospital discharge following delivery to examine maternal health outcomes (perceived maternal stress, social support, and perceived maternal physical health), infant health outcomes (routine medical follow-up visits, immunizations, weight gain), morbidity (urgent care visits, emergency room visits, rehospitalizations), and health care charges (urgent care visits, emergency room visits, rehospitalizations) in both groups, as well as charges for APN follow-up in the intervention group only. Data were analyzed using descriptive statistics and two-sample t-tests. Study findings indicated that the intervention group had significantly lower perceived maternal stress, a significantly higher rating of perceived maternal health, and higher levels of social support by the end of the second month after hospital discharge compared to control group mothers. Infants in the intervention group had an increased number of immunizations, fewer emergency room visits, and 1 rehospitalization compared to 3 infant rehospitalizations in the control group. The intervention group's health care charges were significantly lower than the control group's ($14,333/$497 vs. $70,834/$1,068). These results indicate that APN follow-up telephone calls, in this sample of first-time, low-income, culturally diverse mothers, were an effective, safe, low-cost, easy-to-apply intervention that improved mothers' and infants' health outcomes and reduced health care charges.
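For concreteness, a minimal sketch of the two-sample t-test analysis mentioned above, run on synthetic outcome scores; the group means and spreads are invented, not the trial's data.

```python
# Sketch of a two-sample comparison on synthetic perceived-stress scores.
# Group sizes match the trial (69 vs. 70); all values are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=30.0, scale=6.0, size=69)       # control group scores
intervention = rng.normal(loc=26.0, scale=6.0, size=70)  # intervention group scores

t, p = stats.ttest_ind(intervention, control)
print(f"t = {t:.2f}, p = {p:.4f}")  # p < .05 -> the groups differ on the outcome
```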
Abstract:
Political scientists have long noted that Congressional elections are often uncompetitive, often extremely so. Many scholars argue that the cause lies in the partisan redistricting of Congressional districts, or “gerrymandering”. Other scholars emphasize polarization created by a fragmented news media, or the candidate choices made by a more ideological primary electorate. All these explanations identify the cause of party-safe elections in institutions of various kinds. This dissertation, by contrast, presents a structural explanation of uncompetitive elections. My theory is that population composition and patterns of migration are significant causes and predictors of election results in Florida. I test this theory empirically by comparing the predictions from four hypotheses against aggregate data, using the county as the unit of analysis. The first hypothesis is that Florida can be divided into clearly distinguishable, persistent partisan sections. This hypothesis is confirmed. The second hypothesis is that Florida voters have become increasingly partisan over time. This hypothesis is confirmed. The third hypothesis is that the degree of migration into a county predicts how that county will vote. This hypothesis is partially confirmed, for the migration effect appears to have waned over time. The last hypothesis is that the degree of religiosity of a county population is a predictor of how that county will vote. This hypothesis is also supported by the results of statistical analysis. By identifying the structural causes of party-safe elections, this dissertation not only broadens our understanding of elections in Florida, but also sheds light on the current polarization in American politics.
Abstract:
This dissertation presents and evaluates a methodology for scheduling medical application workloads in virtualized computing environments. Such environments are being widely adopted by providers of "cloud computing" services. In the context of provisioning resources for medical applications, these environments allow users to deploy applications on distributed computing resources while keeping their data secure. Furthermore, higher-level services that further abstract infrastructure-related issues can be built on top of such infrastructures. For example, a medical imaging service can allow medical professionals to process their data in the cloud, relieving them of the burden of deploying and managing these resources themselves. In this work, we focus on issues related to scheduling scientific workloads in virtualized environments. We build upon the knowledge base of traditional parallel job scheduling to address the specific case of medical applications while harnessing the benefits afforded by virtualization technology. To this end, we provide the following contributions: (1) an in-depth analysis of the execution characteristics of the target applications when run in virtualized environments; (2) a performance prediction methodology applicable to the target environment; and (3) a scheduling algorithm that harnesses application knowledge and virtualization-related benefits to provide strong scheduling performance and quality-of-service guarantees. In the process of addressing these issues for our target user base (i.e., medical professionals and researchers), we provide insight that benefits a large community of scientific application users in industry and academia. Our execution time prediction and scheduling methodologies are implemented and evaluated on a real system running popular scientific applications. We find that we are able to predict the execution time of a number of these applications with an average error of 15%. Our scheduling methodology, which is tested with medical image processing workloads, is compared to two baseline scheduling solutions, and we find that it outperforms them in terms of both the number of jobs processed and resource utilization by 20–30%, without violating any deadlines. We conclude that our solution is a viable approach to supporting the computational needs of medical users, even if the cloud computing paradigm is not widely adopted in its current form.
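As a rough illustration of deadline-aware scheduling driven by predicted runtimes, here is a minimal earliest-deadline-first sketch that assigns each job to the VM that frees up soonest. The job list, VM count, and predicted runtimes are illustrative assumptions, not the dissertation's algorithm.

```python
# Minimal deadline-aware scheduling sketch: earliest-deadline-first over a
# pool of VMs, using predicted runtimes. All names and numbers are invented.
import heapq

jobs = [  # (deadline, job_id, predicted_runtime); times in seconds
    (300, "segmentation-1", 120),
    (200, "registration-7", 80),
    (500, "segmentation-2", 150),
    (250, "denoise-3", 60),
]

n_vms = 2
vm_free_at = [(0, vm) for vm in range(n_vms)]  # min-heap of (next idle time, vm id)
heapq.heapify(vm_free_at)

for deadline, job, runtime in sorted(jobs):    # earliest deadline first
    free_at, vm = heapq.heappop(vm_free_at)    # VM that becomes idle soonest
    finish = free_at + runtime
    status = "ok" if finish <= deadline else "MISS"
    print(f"{job}: vm{vm}, finishes at {finish}s, deadline {deadline}s [{status}]")
    heapq.heappush(vm_free_at, (finish, vm))
```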
Abstract:
With the exponentially increasing demand for GIS data visualization in areas such as urban planning, environmental and climate change monitoring, weather simulation, and hydrographic gauging, research on and applications of geospatial vector and raster data visualization have become prevalent. However, current web GIS techniques are suitable only for static vector and raster data with no dynamically overlaid layers. While it is desirable to enable visual exploration of large-scale dynamic vector and raster geospatial data in a web environment, improving the performance between backend datasets and the vector and raster applications remains a challenging technical issue. This dissertation addresses these challenging, previously unimplemented areas: how to provide a large-scale dynamic vector and raster data visualization service with dynamically overlaid layers, accessible from various client devices through a standard web browser, and how to make that dynamic service as fast as a static one. To accomplish this, a large-scale dynamic vector and raster data visualization geographic information system based on parallel map tiling, together with a comprehensive performance-improvement solution, is proposed, designed, and implemented. The components include: quadtree-based indexing and parallel map tiling; the Legend String; vector data visualization with dynamic layer overlaying; vector data time series visualization; an algorithm for vector data rendering; an algorithm for raster data re-projection; an algorithm for eliminating superfluous levels of detail; an algorithm for vector data gridding and re-grouping; and server-cluster-side vector and raster data caching.
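To illustrate the quadtree tile addressing on which parallel map tiling is typically built, here is a small sketch using standard Web Mercator tile math: a lon/lat pair maps to a tile (z, x, y), and the tile's quadkey encodes its quadtree path, so tiles can be indexed and rendered independently (and thus in parallel). This is a generic illustration, not the dissertation's implementation.

```python
# Standard Web Mercator tile addressing: lon/lat -> (x, y) at zoom z, plus the
# Bing-style quadkey giving the tile's path down the quadtree.
import math

def lonlat_to_tile(lon, lat, z):
    n = 2 ** z
    x = int((lon + 180.0) / 360.0 * n)
    lat_r = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_r)) / math.pi) / 2.0 * n)
    return x, y

def quadkey(x, y, z):
    # One base-4 digit per zoom level, interleaving the x and y bits
    key = ""
    for i in range(z, 0, -1):
        digit = ((x >> (i - 1)) & 1) | (((y >> (i - 1)) & 1) << 1)
        key += str(digit)
    return key

x, y = lonlat_to_tile(-80.19, 25.76, 12)   # Miami at zoom 12
print(x, y, quadkey(x, y, 12))
```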
Abstract:
Developing analytical models that can accurately describe the behavior of Internet-scale networks is difficult. This is due, in part, to the heterogeneous structure, immense size, and rapidly changing properties of today's networks. The lack of analytical models makes large-scale network simulation an indispensable tool for studying immense networks. However, large-scale network simulation has not been commonly used to study networks of Internet scale. This can be attributed to three factors: 1) current large-scale network simulators are geared towards simulation research rather than network research, 2) the memory required to execute an Internet-scale model is exorbitant, and 3) large-scale network models are difficult to validate. This dissertation tackles each of these problems.

First, this work presents a method for automatically enabling real-time interaction, monitoring, and control of large-scale network models. Network researchers need tools that allow them to focus on creating realistic models and conducting experiments; however, this should not increase the complexity of developing a large-scale network simulator. This work presents a systematic approach to separating the concerns of running large-scale network models on parallel computers from the user-facing concerns of configuring and interacting with those models.

Second, this work deals with reducing the memory consumption of network models. As network models become larger, so does the amount of memory needed to simulate them. This work presents a comprehensive approach to exploiting structural duplications in network models to dramatically reduce the memory required to execute large-scale network experiments.

Lastly, this work addresses the issue of validating large-scale simulations by integrating real protocols and applications into the simulation. With an emulation extension, a network simulator operating in real time can run together with real-world distributed applications and services. As such, real-time network simulation not only alleviates the burden of developing separate models for applications in simulation, but, because real systems are included in the network model, it also increases the confidence level of network simulation. This work presents a scalable and flexible framework for integrating real-world applications with real-time simulation.
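As a sketch of the structural-duplication idea in the second contribution, the following flyweight-style cache stores one copy of each repeated sub-structure and shares it across replicas. The tuple encoding of a LAN template is a made-up stand-in; in a real simulator the savings come from sharing larger state such as topology and routing tables.

```python
# Flyweight sketch: intern identical sub-structure descriptions so that
# thousands of replicas share one backing object. The template encoding is
# a hypothetical stand-in for real model state.
class TemplateCache:
    def __init__(self):
        self._cache = {}

    def intern(self, template):
        # template must be hashable, e.g. a tuple describing the subnet;
        # equal templates map to the single stored instance
        return self._cache.setdefault(template, template)

cache = TemplateCache()
campuses = [cache.intern(("lan", 48, "100Mbps", "droptail"))   # 48-host LAN
            for _ in range(10_000)]
print(len(cache._cache), "unique structure(s) backing", len(campuses), "replicas")
assert all(c is campuses[0] for c in campuses)  # all replicas share one object
```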
Abstract:
Orthogonal Frequency-Division Multiplexing (OFDM) has proved to be a promising technology for transmitting at higher data rates. Multicarrier Code-Division Multiple Access (MC-CDMA) is a transmission technique that combines the advantages of both OFDM and Code-Division Multiple Access (CDMA), allowing high transmission rates over severely time-dispersive multi-path channels without the need for a complex receiver implementation. MC-CDMA also exploits frequency diversity via the different subcarriers, and therefore allows high-code-rate systems to achieve good Bit Error Rate (BER) performance. Furthermore, spreading in the frequency domain makes the time synchronization requirement much less stringent than in traditional direct-sequence CDMA schemes. Some problems remain when using MC-CDMA. One is the high Peak-to-Average Power Ratio (PAPR) of the transmitted signal. High PAPR leads to nonlinear distortion in the amplifier and results in inter-carrier self-interference plus out-of-band radiation. On the other hand, suppressing Multiple Access Interference (MAI) is another crucial problem in MC-CDMA systems. Imperfect cross-correlation characteristics of the spreading codes and multipath fading destroy the orthogonality among users and thereby cause MAI, which produces serious BER degradation in the system. Moreover, in the uplink the signals received at a base station are always asynchronous. This also destroys the orthogonality among users and hence generates MAI, which degrades system performance. Beyond these two problems, interference must always be considered seriously in any communication system. In this dissertation, we design a novel MC-CDMA system with low PAPR and mitigated MAI. New semi-blind channel estimation and multi-user data detection based on Parallel Interference Cancellation (PIC) are applied in the system. Low-Density Parity-Check (LDPC) codes are also introduced into the system to improve performance. Different interference models are analyzed in multi-carrier communication systems, and effective interference suppression for MC-CDMA systems is then employed in this dissertation. The experimental results indicate that our system not only significantly reduces the PAPR and MAI but also effectively suppresses outside interference with low complexity. Finally, we present a practical cognitive application of the proposed system on a software-defined radio platform.
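A minimal sketch of how the PAPR problem discussed above can be measured for a multicarrier symbol, using an oversampled IFFT. The subcarrier count, modulation, and oversampling factor are illustrative choices, not the dissertation's system parameters.

```python
# Measure the PAPR of one OFDM-style multicarrier symbol via an oversampled
# IFFT. Parameters (64 subcarriers, QPSK, 4x oversampling) are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_sc, os = 64, 4                              # subcarriers, oversampling factor
qpsk = (rng.choice([-1, 1], n_sc) + 1j * rng.choice([-1, 1], n_sc)) / np.sqrt(2)

# Zero-pad in the frequency domain to oversample the time-domain waveform
spectrum = np.zeros(n_sc * os, dtype=complex)
spectrum[: n_sc // 2] = qpsk[: n_sc // 2]
spectrum[-n_sc // 2 :] = qpsk[n_sc // 2 :]
x = np.fft.ifft(spectrum)

papr_db = 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))
print(f"PAPR: {papr_db:.1f} dB")   # plain OFDM symbols typically land near 8-12 dB
```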
Abstract:
Past river run-off is an important measure for the continental hydrological cycle and the assessment of freshwater input into the ocean. However, paleosalinity reconstructions applying different proxies in parallel often show offsets between the respective methods. Here, we compare the established foraminiferal Ba/Ca and δ18OWATER salinity proxies for their capability to record the highly seasonal Orinoco freshwater plume in the eastern Caribbean. For this purpose we obtained a data set comprising Ba/Ca and δ18OWATER determined on multiple species of planktonic foraminifera from core tops distributed around the Orinoco river mouth. Our findings indicate that interpretations based on either proxy could lead to different conclusions. In particular, Ba/Ca and δ18OWATER diverge in their spatial distribution due to different governing factors. Apparently, the Orinoco freshwater plume is best tracked by Ba/Ca ratios of G. ruber (pink and sensu lato morphotypes), while δ18OWATER based on the same species is more related to the local precipitation-evaporation balance overprinting the riverine freshwater contribution. Other shallow-dwelling species (G. sacculifer, O. universa) show a muted response to the freshwater discharge, most likely due to their ecological and habitat preferences. Extremely high Ba/Ca ratios recorded by G. ruber are attributed to Ba2+ desorption from suspended matter derived from the Orinoco. Samples taken most proximal to the freshwater source do not show pronounced Ba/Ca or δ18OWATER anomalies. Here, the suspension-loaded freshwater lid developing during maximum discharge suppresses foraminiferal populations. Both proxies are therefore biased towards dry-season conditions at these sites, when surface salinity is only minimally reduced.
Abstract:
This paper contributes to the literature by empirically examining whether the influence of public debt on economic growth differs between the short and the long run and presents different patterns across euro-area countries. To this end, we use annual data from both central and peripheral countries of the European Economic and Monetary Union (EMU) for the 1960-2012 period and estimate a growth model augmented for public debt using the Autoregressive Distributed Lag (ARDL) bounds testing approach. Our findings tend to support the view that public debt always has a negative impact on the long-run performance of EMU countries, whilst its short-run effect may be positive depending on the country.
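To make the ARDL setup concrete, here is a minimal ARDL(1,1) sketch on synthetic data, with the long-run effect recovered from the estimated coefficients. The data-generating process and lag orders are invented for illustration and do not reproduce the paper's specification or its bounds-testing procedure.

```python
# ARDL(1,1) sketch by OLS on synthetic annual data (T = 53, as in 1960-2012):
# growth_t = c + a*growth_{t-1} + b0*debt_t + b1*debt_{t-1} + e_t
import numpy as np

rng = np.random.default_rng(2)
T = 53
debt = np.cumsum(rng.normal(0.5, 2.0, T)) + 60          # synthetic debt-to-GDP
growth = np.empty(T)
growth[0] = 2.0
for t in range(1, T):   # invented process with persistence and a debt drag
    growth[t] = 1.5 + 0.3 * growth[t - 1] - 0.02 * debt[t] + rng.normal(0, 0.5)

X = np.column_stack([np.ones(T - 1), growth[:-1], debt[1:], debt[:-1]])
y = growth[1:]
c, a, b0, b1 = np.linalg.lstsq(X, y, rcond=None)[0]

long_run = (b0 + b1) / (1 - a)   # long-run multiplier of debt on growth
print(f"short-run effect: {b0:.3f}, long-run effect: {long_run:.3f}")
```

The short-run effect is read off the contemporaneous coefficient b0, while the long-run effect sums the debt coefficients and scales by the persistence term, mirroring the short-run versus long-run distinction the paper draws.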
Abstract:
Date of Acceptance: 10/07/2015. The Chief Scientist Office of the Scottish Government Health and Social Care Directorates funds HERU. The survey was jointly funded by NHS Health Scotland and the Glasgow Centre for Population Health. The views expressed in this paper are those of the authors only and not those of the funding bodies. The investigator team for the overall survey comprises David Walsh, Gerry McCartney, Sarah McCullough, Marjon van der Pol, Duncan Buchanan and Russell Jones.