926 results for "clean and large throughput differential pumping system"
Abstract:
What role do state party organizations play in twenty-first century American politics? What is the nature of the relationship between the state and national party organizations in contemporary elections? These questions frame the three studies presented in this dissertation. More specifically, I examine the organizational development of the state party organizations and the strategic interactions and connections between the state and national party organizations in contemporary elections.
In the first empirical chapter, I argue that the Internet Age represents a significant transitional period for state party organizations. Using data collected from surveys of state party leaders, this chapter reevaluates and updates existing theories of party organizational strength and demonstrates the importance of new indicators of party technological capacity to our understanding of party organizational development in the early twenty-first century. In the second chapter, I ask whether the national parties utilize different strategies in deciding how to allocate resources to state parties through fund transfers and through the 50-state-strategy party-building programs that both the Democratic and Republican National Committees advertised during the 2010 elections. Analyzing data collected from my 2011 state party survey and party-fund-transfer data collected from the Federal Election Commission, I find that the national parties considered a combination of state and national electoral concerns in directing assistance to the state parties through their 50-state strategies, as opposed to the strict battleground-state strategy that explains party fund transfers. In my last chapter, I examine the relationships between platforms issued by Democratic and Republican state and national parties and the strategic considerations that explain why state platforms vary in their degree of similarity to the national platform. I analyze an extensive platform dataset, using cluster analysis and document similarity measures to compare platform content across the 1952 to 2014 period. The analysis shows that, as a group, Democratic and Republican state platforms exhibit greater intra-party homogeneity and inter-party heterogeneity starting in the early 1990s, and state-national platform similarity is higher in states that are key players in presidential elections, among other factors. Together, these three studies demonstrate the significance of the state party organizations and the state-national party partnership in contemporary politics.
Abstract:
Fitting statistical models is computationally challenging when the sample size or the dimension of the dataset is huge. An attractive approach for down-scaling the problem size is to first partition the dataset into subsets and then fit using distributed algorithms. The dataset can be partitioned either horizontally (in the sample space) or vertically (in the feature space), and the challenge lies in defining an algorithm with low communication, theoretical guarantees, and excellent practical performance in general settings. For sample space partitioning, I propose a MEdian Selection Subset AGgregation Estimator ({\em message}) algorithm to address these issues. The algorithm applies feature selection in parallel to each subset using regularized regression or a Bayesian variable selection method, calculates the `median' feature inclusion index, estimates coefficients for the selected features in parallel for each subset, and then averages these estimates. The algorithm is simple, involves minimal communication, scales efficiently in sample size, and has theoretical guarantees. I provide extensive experiments showing excellent performance in feature selection, estimation, prediction, and computation time relative to the usual competitors.
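As a concrete illustration of the procedure just described, the following minimal sketch assumes a lasso-type selector as the per-subset feature-selection step and a simple majority vote as the `median' inclusion rule; the function name and thresholds are illustrative and not the thesis implementation.

import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

def message_fit(X, y, n_subsets=4, seed=0):
    # Sketch of the message idea: split rows, select features per subset,
    # take the median inclusion vote, then refit and average coefficients.
    rng = np.random.default_rng(seed)
    blocks = np.array_split(rng.permutation(len(y)), n_subsets)

    # Step 1: feature selection on each row subset (lasso as a stand-in).
    votes = [np.abs(LassoCV(cv=5).fit(X[b], y[b]).coef_) > 1e-8 for b in blocks]
    # Step 2: 'median' feature inclusion index = majority vote across subsets.
    keep = np.median(np.array(votes, dtype=float), axis=0) >= 0.5

    # Steps 3-4: estimate coefficients on the selected features per subset,
    # then average the subset estimates (assumes at least one feature is kept).
    coefs = [LinearRegression().fit(X[b][:, keep], y[b]).coef_ for b in blocks]
    return keep, np.mean(coefs, axis=0)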
While sample space partitioning is useful for handling datasets with a large sample size, feature space partitioning is more effective when the data dimension is high. Existing methods for partitioning features, however, are either vulnerable to high correlations or inefficient in reducing the model dimension. In this thesis, I propose a new embarrassingly parallel framework named {\em DECO} for distributed variable selection and parameter estimation. In {\em DECO}, variables are first partitioned and allocated to m distributed workers. The decorrelated subset data within each worker are then fitted via any algorithm designed for high-dimensional problems. We show that by incorporating the decorrelation step, DECO achieves consistent variable selection and parameter estimation on each subset with (almost) no assumptions. In addition, the convergence rate is nearly minimax optimal for both sparse and weakly sparse models and does not depend on the partition number m. Extensive numerical experiments are provided to illustrate the performance of the new framework.
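A rough sketch of the decorrelation-then-parallel-fitting idea behind DECO is given below; the ridge term, the scaling constant, and the use of lasso on each feature block are assumptions made for illustration rather than the exact procedure in the thesis.

import numpy as np
from sklearn.linear_model import LassoCV

def deco_fit(X, y, n_workers=4, ridge=1.0, seed=0):
    # Decorrelation: premultiply by (X X^T + ridge*I)^{-1/2} so that the rows of
    # the transformed design are (nearly) orthogonalized, which weakens the
    # correlations between feature blocks held by different workers.
    n, p = X.shape
    w, V = np.linalg.eigh(X @ X.T + ridge * np.eye(n))
    F = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    Xt, yt = np.sqrt(p) * (F @ X), np.sqrt(p) * (F @ y)   # sqrt(p) scaling is an assumption

    # Partition the columns (features) across workers and fit each decorrelated
    # block independently (lasso as a stand-in high-dimensional solver).
    cols = np.random.default_rng(seed).permutation(p)
    beta = np.zeros(p)
    for block in np.array_split(cols, n_workers):
        beta[block] = LassoCV(cv=5).fit(Xt[:, block], yt).coef_
    return beta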
For datasets with both large sample sizes and high dimensionality, I propose a new "divide-and-conquer" framework, {\em DEME} (DECO-message), that leverages both the {\em DECO} and {\em message} algorithms. The new framework first partitions the dataset in the sample space into row cubes using {\em message} and then partitions the feature space of the cubes using {\em DECO}. This procedure is equivalent to partitioning the original data matrix into multiple small blocks, each of a feasible size that can be stored and fitted on a single machine in parallel. The results are then synthesized via the {\em DECO} and {\em message} algorithms in reverse order to produce the final output. The whole framework is extremely scalable.
Abstract:
This key facts publication provides an interim update to the NI health & social care inequalities monitoring system (HSCIMS) regional reports, which are published every other year. It presents a summary of the latest position and the inequality gaps between the most deprived areas and both the least deprived areas and the NI average, along with a regional comparison with rural areas, for a range of health outcomes included in the HSCIMS series and the Health Survey Northern Ireland (HSNI).
Abstract:
Abstract unavailable.
Abstract:
Thesis (Master's)--University of Washington, 2016-08
Abstract:
Conventional wisdom in many agricultural systems across the world is that farmers cannot, will not, or should not pay the full costs associated with surface water delivery. Across Organisation for Economic Co-operation and Development (OECD) countries, only a handful can claim complete recovery of operation, maintenance, and capital costs; across Central and South Asia, fees are lower still, with farmers in Nepal, India, and Kazakhstan paying fractions of a U.S. penny for a cubic meter of water. In Pakistan, fees amount to roughly USD 1-2 per acre per season. However, farmers in Pakistan spend orders of magnitude more on diesel fuel to pump groundwater each season, suggesting a latent willingness to pay for water that, under the right conditions, could be directed toward water-use fees for surface water supply. Although overall performance could be expected to improve with greater cost recovery, asymmetric access to water in canal irrigation systems leaves open the question of whether those benefits would be shared equitably among all farmers in the system. We develop an agent-based model (ABM) of a small irrigation command to examine efficiency and equity outcomes across a range of cost structures for system maintenance, levels of market development, and assessed water charges. We find that, robustly across a range of cost and structural conditions, increased water charges lead to gains in efficiency and concomitant improvements in equity, as investments in canal infrastructure and system maintenance improve the conveyance of water further down watercourses. This suggests that where (1) farmers are already spending money to pump groundwater to compensate for a failing surface water system, and (2) an initial investment can deliver perceptibly better surface water supply, genuine win-win solutions can be attained by charging beneficiary farmers higher water-use fees.
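To make the mechanism concrete, the toy sketch below encodes only the causal chain described above (fees fund maintenance, maintenance improves conveyance, better conveyance mainly benefits tail-end farmers); all functional forms and parameters are illustrative assumptions and bear no relation to the calibrated ABM of the study.

import numpy as np

def toy_irrigation_run(n_farmers=20, water_fee=1.0, seasons=10):
    # Fee revenue accumulates into a maintenance stock; per-reach conveyance
    # losses shrink as the stock grows, so tail-end farmers gain the most.
    supply = 100.0                     # head-gate supply per season
    maintenance = 0.0
    for _ in range(seasons):
        maintenance = 0.5 * maintenance + water_fee * n_farmers
        loss = 0.05 / (1.0 + 0.01 * maintenance)
        deliveries = np.array([supply / n_farmers * (1 - loss) ** i
                               for i in range(n_farmers)])
    efficiency = deliveries.sum() / supply          # share of supply delivered
    d = np.sort(deliveries)                         # Gini coefficient as a simple equity measure
    gini = (2 * np.arange(1, n_farmers + 1) - n_farmers - 1) @ d / (n_farmers * d.sum())
    return efficiency, gini

print(toy_irrigation_run(water_fee=0.1))   # low fees: lower efficiency, higher Gini
print(toy_irrigation_run(water_fee=5.0))   # higher fees: better conveyance and equity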
Abstract:
The debriefing phase in human patient simulation is considered crucial for learning. To ensure good learning conditions, the use of small groups is recommended, which poses a major challenge when the student count is high. The use of large groups may provide an alternative to typical lecture-style education and contribute to more frequent and repeated training, which is considered important for achieving simulation competency. The purpose of the present study was to describe nursing students’ experiences of debriefing conducted in small and large groups, using a qualitative descriptive approach. The informants had participated in a human patient simulation situation in either large or small groups. Data were collected through five focus-group interviews and analysed by content analysis. The findings showed that, independent of group size, the informants experienced the learning strategies as unfamiliar and intrusive, and in the large groups to such an extent that learning was hampered. Debriefing was perceived as offering excellent opportunities for transferable learning, and activity, predictability, and preparedness were deemed essential. Small groups provided the best learning conditions in that safety and security were ensured, but were perceived as offering limited challenges relative to the professional requirements of a nurse. Simulation competency, as a prerequisite for learning, was shown not to develop in isolation through simulation alone but to depend on a systematic effort to build a learning community in the programme in general. The faculty needs to support students in becoming conscious of and accustomed to learning outside their comfort zone as a heightened learning experience.
Abstract:
The high performance computing community has traditionally focused solely on reducing execution time, though in recent years the optimization of energy consumption has become a major concern. Reducing energy usage without degrading performance requires the adoption of energy-efficient hardware platforms accompanied by the development of energy-aware algorithms and computational kernels. The solution of linear systems is a key operation in many scientific and engineering problems. Its relevance has motivated a substantial amount of work, and consequently, high performance solvers are available for a wide variety of hardware platforms. In this work, we aim to develop a high performance, energy-efficient linear system solver. In particular, we develop two solvers for a low-power CPU-GPU platform, the NVIDIA Jetson TK1. These solvers implement the Gauss-Huard algorithm, yielding an efficient use of the target hardware as well as efficient memory access. The experimental evaluation shows that the novel proposal delivers important savings in both time and energy consumption when compared with the state-of-the-art solvers for the platform.
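For reference, a compact sketch of the Gauss-Huard elimination pattern is shown below; it omits the column pivoting, blocking, and CPU-GPU offloading that the actual solvers rely on, and is meant only to illustrate the scheme.

import numpy as np

def gauss_huard_solve(A, b):
    # Work on the augmented matrix [A | b]; at step k: update row k using the
    # already-processed rows, normalize it, then annihilate column k above the
    # diagonal. No pivoting, so a nonzero diagonal is assumed throughout.
    n = len(b)
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    for k in range(n):
        for j in range(k):                  # eliminate entries left of the diagonal in row k
            M[k, j:] -= M[k, j] * M[j, j:]
        M[k, k:] /= M[k, k]                 # normalize row k
        for i in range(k):                  # annihilate column k in the rows above
            M[i, k:] -= M[i, k] * M[k, k:]
    return M[:, -1]                         # the left block is now the identity

A = np.array([[4., 2., 1.], [2., 5., 3.], [1., 3., 6.]])
b = np.array([1., 2., 3.])
assert np.allclose(gauss_huard_solve(A, b), np.linalg.solve(A, b))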
Abstract:
Enterprise architecture (EA) is a tool that aligns an organization’s business processes with applications and information technology (IT) through EA models. These models allow the organization to cut unnecessary IT expenses, determine current and future IT requirements, and boost organizational performance. Enterprise architecture may be employed in any firm or organization that requires alignment between information technology and business functions. This research investigates the role of enterprise architecture in healthcare organizations and, by comparing the two most widely used EA frameworks, suggests a suitable EA framework for modeling a knowledge-based medical diagnostic system. The results of the comparison indicate that the proposed EA framework is the better fit for a knowledge-based medical diagnostic system.
Abstract:
Aims: To investigate the use of diffusion weighted magnetic resonance imaging (DWI) and apparent diffusion coefficient (ADC) values in the diagnosis of hemangioma. Materials and methods: The study population consisted of 72 patients with liver masses larger than 1 cm (72 focal lesions). DWI examination with a b value of 600 s/mm² was carried out for all patients. After the DWI examination, an ADC map was created and ADC values were measured for the 72 liver masses and for normal liver tissue (control group). The average ADC values of normal liver tissue and focal liver lesions, the “cut-off” ADC values, and the diagnostic sensitivity and specificity of the ADC map in diagnosing hemangioma and in distinguishing benign from malignant lesions were investigated. Results: Of the 72 liver masses, 51 were benign and 21 were malignant. Benign lesions comprised 38 hemangiomas and 13 simple cysts. Malignant lesions comprised 9 hepatocellular carcinomas and 12 metastases. The highest ADC values were measured for cysts (3.782±0.53×10⁻³ mm²/s) and hemangiomas (2.705±0.63×10⁻³ mm²/s). The average ADC value of hemangiomas was significantly higher than that of malignant lesions and of the normal control group (p<0.001). The average ADC value of cysts was significantly higher than that of hemangiomas and of the normal control group (p<0.001). To distinguish hemangiomas from malignant liver lesions, a “cut-off” ADC value of 1.800×10⁻³ mm²/s had a sensitivity of 97.4% and a specificity of 90.9%. To distinguish hemangioma from normal liver parenchyma, a “cut-off” value of 1.858×10⁻³ mm²/s had a sensitivity of 97.4% and a specificity of 95.7%. To distinguish benign from malignant liver lesions, a “cut-off” value of 1.800×10⁻³ mm²/s had a sensitivity of 96.1% and a specificity of 90.0%. Conclusion: DWI and quantitative measurement of ADC values can be used in the differential diagnosis of benign and malignant liver lesions and also in the diagnosis and differentiation of hemangiomas. When dynamic examination cannot distinguish vascular metastases and other lesions from hemangioma, DWI and ADC values can be useful in both primary and differential diagnosis. The technique does not require contrast material, so it can safely be used in patients with renal failure.
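For context, ADC maps of the kind used here are conventionally derived from the mono-exponential diffusion model relating signal intensity to the b value; the notation below is standard rather than specific to this study:

S(b) = S_0 \, e^{-b\,\mathrm{ADC}} \quad\Longrightarrow\quad \mathrm{ADC} = -\frac{1}{b}\,\ln\frac{S(b)}{S_0}, \qquad b = 600~\mathrm{s/mm^2}\ \text{in this study.}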
Abstract:
Part 4: Transition Towards Product-Service Systems
Abstract:
In the context of f(R) gravity theories, we show that the apparent mass of a neutron star as seen by an observer at infinity is numerically calculable but requires careful matching, first at the star’s edge, between interior and exterior solutions, neither of which is totally Schwarzschild-like, both instead presenting small oscillations of the curvature scalar R; and second at large radii, where the Newtonian potential is used to identify the mass of the neutron star. We find that, for the same equation of state, this mass definition is always larger than its general relativistic counterpart. We exemplify this with quadratic R^2 and Hu-Sawicki-like modifications of the standard General Relativity action. Therefore, the finding of two-solar-mass neutron stars essentially imposes no constraint on stable f(R) theories. However, star radii are in general smaller than in General Relativity, which can give an observational handle on such classes of models at the astrophysical level. Both the larger masses and the smaller matter radii are due to much of the apparent effective energy residing in the outer metric for scalar-tensor theories. Finally, because f(R) neutron star masses can be much larger than their General Relativity counterparts, the total energy available for radiation as gravitational waves could be of order several solar masses, and a merger of such stars therefore constitutes an interesting gravitational-wave source.
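For reference, the two families of modifications mentioned are usually written as follows (with the standard parametrizations; the symbols are conventions, not values fitted in this work):

f(R) = R + a\,R^2 \quad\text{(quadratic model)}, \qquad f(R) = R - m^2\,\frac{c_1\,(R/m^2)^n}{c_2\,(R/m^2)^n + 1} \quad\text{(Hu-Sawicki model).}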
Abstract:
Several decision and control tasks involve networks of cyber-physical systems that need to be coordinated and controlled according to a fully-distributed paradigm involving only local communications, without any central unit. This thesis focuses on distributed optimization and games over networks from a system-theoretical perspective. In the addressed frameworks, we consider agents that communicate only with neighbors and run distributed algorithms with optimization-oriented goals. The distinctive feature of this thesis is to interpret these algorithms as dynamical systems and, thus, to resort to powerful system-theoretical tools for both their analysis and design. We first address the so-called consensus optimization setup. In this context, we provide an original system-theoretical analysis of the well-known Gradient Tracking algorithm in the general case of nonconvex objective functions. Then, inspired by this method, we provide and study a series of extensions to improve performance and to deal with more challenging settings, such as the derivative-free and online frameworks. Subsequently, we tackle the recently emerged framework of distributed aggregative optimization. For this setup, we develop and analyze novel schemes to handle (i) online instances of the problem, (ii) ``personalized'' optimization frameworks, and (iii) feedback optimization settings. Finally, we adopt a system-theoretical approach to address aggregative games over networks, both in the presence and in the absence of linear coupling constraints among the decision variables of the players. In this context, we design and analyze novel fully-distributed algorithms, based on tracking mechanisms, that outperform state-of-the-art methods in finding the Nash equilibrium of the game.
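A minimal sketch of the Gradient Tracking iteration analysed in the first part is reported below for a toy quadratic consensus optimization problem; the mixing matrix, step size, and cost functions are illustrative assumptions.

import numpy as np

def gradient_tracking(grads, W, x0, alpha=0.05, iters=500):
    # Each agent i mixes its estimate with its neighbors through the doubly
    # stochastic weights W, steps along a tracker y_i of the global gradient,
    # and updates the tracker with its local gradient innovation.
    N = len(grads)
    x = np.array(x0, dtype=float)                    # shape (N, d)
    g = np.array([grads[i](x[i]) for i in range(N)])
    y = g.copy()                                     # y_i^0 = grad f_i(x_i^0)
    for _ in range(iters):
        x_new = W @ x - alpha * y
        g_new = np.array([grads[i](x_new[i]) for i in range(N)])
        y = W @ y + g_new - g
        x, g = x_new, g_new
    return x

# Toy problem: f_i(x) = 0.5*||x - t_i||^2, whose global minimizer is mean(t_i) = 3.
targets = np.array([[1.0], [2.0], [6.0]])
grads = [lambda z, t=t: z - t for t in targets]
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])                   # doubly stochastic mixing matrix
print(gradient_tracking(grads, W, x0=np.zeros((3, 1))).round(3))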
Abstract:
In this work, integro-differential reaction-diffusion models are presented for describing the temporal and spatial evolution of the concentrations of the Abeta and tau proteins involved in Alzheimer's disease. Initially, a local model is analysed: it is obtained by coupling two heterodimer models through an interaction term, after modifying them with diffusion and Holling type II functional terms. We then present three nonlocal models, which differ in the type of growth (exponential, logistic, or Gompertzian) assumed for the healthy proteins. In these models, integral terms are introduced to account for interactions between proteins located at different, possibly distant, spatial points. For each model, the equilibrium points and their stability are determined and the clearance inequalities are studied. In addition, since the integral terms introduce a spatial nonlocality, some general features of nonlocal models are presented. Afterwards, with the aim of developing simulations, the nonlocal models are transferred to a brain graph known as the connectome. After describing the construction of such a graph, Laplacian and convolution operations on a graph are introduced. Using all these elements, the continuous models described above are finally translated into discrete models on the connectome. To conclude, the results of simulations of the resulting discrete models are presented.
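The kind of discrete model produced by the last step can be sketched as a reaction-diffusion system of ODEs on a graph, with the continuous Laplacian replaced by the graph Laplacian of the connectome; the toy graph, the rates, and the logistic reaction term below are illustrative assumptions, not the calibrated models of the thesis.

import numpy as np

def graph_laplacian(A):
    # Combinatorial graph Laplacian L = D - A of a weighted adjacency matrix.
    return np.diag(A.sum(axis=1)) - A

def simulate_on_graph(A, u0, rho=0.1, alpha=1.0, dt=0.01, steps=2000):
    # Explicit Euler integration of du/dt = -rho*L u + alpha*u*(1 - u):
    # graph diffusion of a protein concentration plus a logistic reaction term.
    L = graph_laplacian(A)
    u = np.array(u0, dtype=float)
    for _ in range(steps):
        u = u + dt * (-rho * (L @ u) + alpha * u * (1.0 - u))
    return u

# Tiny 4-node 'connectome' (a path graph) seeded at one node.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(simulate_on_graph(A, u0=[0.2, 0.0, 0.0, 0.0]).round(3))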