Abstract:
In this thesis, we study the behavioural aspects of agents interacting in queueing systems using simulation models and experimental methodologies. Each period, customers must choose a service provider. The objective is to analyse the impact of customers' and providers' decisions on queue formation. In a first case, we consider customers with a certain degree of risk-aversion. Based on their perception of the average waiting time and of its variability, they form an estimate of the upper bound of the waiting time at each provider. Each period, they choose the provider for which this estimate is lowest. Our results indicate that there is no monotonic relationship between the degree of risk-aversion and overall performance. Indeed, a population of customers with an intermediate degree of risk-aversion generally incurs a higher average waiting time than a population of agents who are risk-neutral or highly risk-averse. Next, we incorporate the providers' decisions by allowing them to adjust their service capacity based on their perception of the average arrival rate. The results show that customer behaviour and provider decisions exhibit strong path-dependence. Furthermore, we show that the providers' decisions cause the weighted average waiting time to converge to the market's benchmark waiting time. Finally, a laboratory experiment in which subjects play the role of a service provider allowed us to conclude that capacity installation and dismantling delays significantly affect the subjects' performance and decisions. In particular, a provider's decisions are influenced by its order backlog, its currently available service capacity, and the capacity adjustment decisions it has taken but not yet implemented. - Queuing is a fact of life that we witness daily. We all have had the experience of waiting in line for some reason and we also know that it is an annoying situation. As the adage says, "time is money"; this is perhaps the best way of stating what queuing problems mean for customers. Human beings are not very tolerant, but they are even less so when having to wait in line for service. Banks, roads, post offices and restaurants are just some examples where people must wait for service. Studies of queuing phenomena have typically addressed the optimisation of performance measures (e.g. average waiting time, queue length and server utilisation rates) and the analysis of equilibrium solutions. The individual behaviour of the agents involved in queueing systems and their decision-making process have received little attention. Although this work has been useful for improving the efficiency of many queueing systems, or for designing new processes in social and physical systems, it has provided us with only a limited ability to explain the behaviour observed in many real queues. In this dissertation we depart from this traditional research by analysing how the agents involved in the system make decisions instead of focusing on optimising performance measures or analysing an equilibrium solution. This dissertation builds on and extends the framework proposed by van Ackere and Larsen (2004) and van Ackere et al. (2010).
We focus on studying behavioural aspects in queueing systems and incorporate this still underdeveloped framework into the operations management field. In the first chapter of this thesis we provide a general introduction to the area, as well as an overview of the results. In Chapters 2 and 3, we use Cellular Automata (CA) to model service systems where captive interacting customers must decide each period which facility to join for service. They base this decision on their expectations of sojourn times. Each period, customers use new information (their most recent experience and that of their best performing neighbour) to form expectations of the sojourn time at the different facilities. Customers update their expectations using an adaptive expectations process to combine their memory and their new information. We label "conservative" those customers who give more weight to their memory than to the new information. In contrast, when they give more weight to new information, we call them "reactive". In Chapter 2, we consider customers with different degrees of risk-aversion who take into account uncertainty. They choose which facility to join based on an estimated upper bound of the sojourn time which they compute using their perceptions of the average sojourn time and the level of uncertainty. We assume the same exogenous service capacity for all facilities, which remains constant throughout. We first analyse the collective behaviour generated by the customers' decisions. We show that the system achieves low weighted average sojourn times when the collective behaviour results in neighbourhoods of customers loyal to a facility and the customers are approximately equally split among all facilities. The lowest weighted average sojourn time is achieved when exactly the same number of customers patronises each facility, implying that they do not wish to switch facility. In this case, the system has achieved the Nash equilibrium. We show that there is a non-monotonic relationship between the degree of risk-aversion and system performance. Customers with an intermediate degree of risk-aversion typically achieve higher sojourn times; in particular they rarely achieve the Nash equilibrium. Risk-neutral customers have the highest probability of achieving the Nash equilibrium. Chapter 3 considers a service system similar to the previous one but with risk-neutral customers, and relaxes the assumption of exogenous service rates. In this sense, we model a queueing system with endogenous service rates by enabling managers to adjust the service capacity of the facilities. We assume that managers do so based on their perceptions of the arrival rates and use the same principle of adaptive expectations to model these perceptions. We consider service systems in which the managers' decisions take time to be implemented. Managers are characterised by a profile which is determined by the speed at which they update their perceptions, the speed at which they take decisions, and how coherent they are in accounting for their previous decisions still to be implemented when taking their next decision. We find that the managers' decisions exhibit a strong path-dependence: owing to the initial conditions of the model, the facilities of managers with identical profiles can evolve completely differently. In some cases the system becomes "locked in" to a monopoly or duopoly situation.
The competition between managers causes the weighted average sojourn time of the system to converge to the exogenous benchmark value which they use to estimate their desired capacity. Concerning the managers' profile, we found that the more conservative a manager is regarding new information, the larger the market share his facility achieves. Additionally, the faster he takes decisions, the higher the probability that he achieves a monopoly position. In Chapter 4 we consider a one-server queueing system with non-captive customers. We carry out an experiment aimed at analysing the way human subjects, taking on the role of the manager, take decisions in a laboratory regarding the capacity of a service facility. We adapt the model proposed by van Ackere et al. (2010). This model relaxes the assumption of a captive market and allows current customers to decide whether or not to use the facility. Additionally, the facility also has potential customers who currently do not patronise it, but might consider doing so in the future. We identify three groups of subjects whose decisions cause similar behavioural patterns. These groups are labelled: gradual investors, lumpy investors, and random investors. Using an autocorrelation analysis of the subjects' decisions, we illustrate that these decisions are positively correlated with the decisions taken one period earlier. Subsequently we formulate a heuristic to model the decision rule used by subjects in the laboratory. We find that this decision rule fits very well for those subjects who gradually adjust capacity, but it does not capture the behaviour of the subjects in the other two groups. In Chapter 5 we summarise the results and provide suggestions for further work. Our main contribution is the use of simulation and experimental methodologies to explain the collective behaviour generated by customers' and managers' decisions in queueing systems, as well as the analysis of the individual behaviour of these agents. In this way, we differ from the typical literature on queueing systems, which focuses on optimising performance measures and the analysis of equilibrium solutions. Our work can be seen as a first step towards understanding the interaction between customer behaviour and the capacity adjustment process in queueing systems. This framework is still in its early stages and accordingly there is a large potential for further work that spans several research topics. Interesting extensions to this work include incorporating other characteristics of queueing systems which affect the customers' experience (e.g. balking, reneging and jockeying); providing customers and managers with additional information to take their decisions (e.g. service price, quality, customers' profile); analysing different decision rules; and studying other characteristics which determine the profile of customers and managers.
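The customer decision rule described above, adaptive expectations combined with a risk-adjusted upper bound on the sojourn time, can be illustrated with a minimal sketch. The function and parameter names (`memory_weight`, `risk_aversion`) and the specific update form are illustrative assumptions, not the dissertation's actual implementation.

```python
import numpy as np

# Minimal sketch (assumed form) of the customer decision rule summarised above:
# perceptions are updated by adaptive expectations, and each customer joins the
# facility with the lowest estimated upper bound on the sojourn time.

def update_perception(old, new_info, memory_weight):
    """Adaptive expectations: weight memory against new information.
    memory_weight close to 1 -> 'conservative'; close to 0 -> 'reactive'."""
    return memory_weight * old + (1.0 - memory_weight) * new_info

def choose_facility(mean_est, std_est, risk_aversion):
    """Pick the facility with the lowest estimated upper bound
    (perceived average sojourn time plus a risk-aversion multiple of its spread)."""
    upper_bound = mean_est + risk_aversion * std_est
    return int(np.argmin(upper_bound))

# Illustrative usage with two facilities
mean_est = np.array([5.0, 6.0])   # perceived average sojourn times
std_est = np.array([3.0, 0.5])    # perceived variability
print(choose_facility(mean_est, std_est, risk_aversion=0.0))  # risk-neutral  -> facility 0
print(choose_facility(mean_est, std_est, risk_aversion=2.0))  # risk-averse   -> facility 1
```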
Abstract:
The federal government mandated that all non-federal public safety license holders on the frequencies ranging from 150 to 512 megahertz reduce their operating bandwidth from 25 kilohertz to 12.5 kilohertz. License holders must update their operating licenses to the narrowband channels by January 1, 2013. Failure to do so will result in the loss of communication capabilities and fines. This issue review analyzes the impact on state agencies of the federal mandate requiring all two-way radio systems and some paging networks, including those used by public-safety agencies, to meet the new narrowband requirements by January 1, 2013. This issue review does not address the impact on local communications systems.
Abstract:
Mixture proportioning is routinely a matter of using a recipe based on a previously produced concrete, rather than adjusting the proportions based on the needs of the mixture and the locally available materials. As budgets grow tighter and more attention is paid to sustainability metrics, greater focus is being placed on making mixtures that are more efficient in their use of materials yet do not compromise engineering performance. Therefore, a performance-based mixture proportioning method is needed to provide the desired concrete properties for a given project specification. The proposed method should be user friendly, easy to apply in practice, and flexible in terms of allowing a wide range of material selection. The objective of this study is to further develop an innovative performance-based mixture proportioning method by analyzing the relationships between the selected mix characteristics and their corresponding effects on tested properties. The proposed method will provide step-by-step instructions to guide the selection of the required aggregate and paste systems based on the performance requirements. Although the guidance provided in this report is primarily for concrete pavements, the same approach can be applied to other concrete applications as well.
Abstract:
The performance of a pavement depends on the quality of its subgrade and subbase layers; these foundational layers play a key role in mitigating the effects of climate and the stresses generated by traffic. Therefore, building a stable subgrade and a properly drained subbase is vital for constructing an effective and long lasting pavement system. This manual has been developed to help Iowa highway engineers improve the design, construction, and testing of a pavement system’s subgrade and subbase layers, thereby extending pavement life. The manual synthesizes current and previous research conducted in Iowa and other states into a practical geotechnical design guide [proposed as Chapter 6 of the Statewide Urban Design and Specifications (SUDAS) Design Manual] and construction specifications (proposed as Section 2010 of the SUDAS Standard Specifications) for subgrades and subbases. Topics covered include the important characteristics of Iowa soils, the key parameters and field properties of optimum foundations, embankment construction, geotechnical treatments, drainage systems, and field testing tools, among others.
Abstract:
In this paper, we investigate the average and outage performance of spatial multiplexing multiple-input multiple-output (MIMO) systems with channel state information at both sides of the link. Such systems result, for example, from exploiting the channel eigenmodes in multiantenna systems. Due to the complexity of obtaining the exact expression for the average bit error rate (BER) and the outage probability, we derive approximations in the high signal-to-noise ratio (SNR) regime assuming an uncorrelated Rayleigh flat-fading channel. More exactly, capitalizing on previous work by Wang and Giannakis, the average BER and outage probability versus SNR curves of spatial multiplexing MIMO systems are characterized in terms of two key parameters: the array gain and the diversity gain. Finally, these results are applied to analyze the performance of a variety of linear MIMO transceiver designs available in the literature.
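The high-SNR characterization mentioned above, following Wang and Giannakis, expresses the error performance in terms of the array gain and the diversity gain; a sketch of this standard parameterisation (the constants depend on the constellation and the particular transceiver design) is:

```latex
% Standard high-SNR characterisation in terms of array gain G_a and
% diversity gain G_d (following Wang and Giannakis); the same form is
% used for the outage probability with its own gains.
\begin{equation}
  \overline{\mathrm{BER}}(\mathsf{SNR}) \approx
  \left( G_a \, \mathsf{SNR} \right)^{-G_d},
  \qquad \mathsf{SNR} \to \infty .
\end{equation}
```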
Abstract:
This publication deals with various aspects of the European Union enlargement effects faced by companies from the EU15, and especially from Finland, when doing business in the ten transitional economies which joined the European Union in 2004 and 2007.
Abstract:
VALOSADE (Value Added Logistics in Supply and Demand Chains) is a research project of Anita Lukka's VALORE (Value Added Logistics Research) research team at Lappeenranta University of Technology. VALOSADE is included in the ELO (Ebusiness logistics) technology programme of Tekes (the Finnish Technology Agency). SMILE (SME-sector, Internet applications and Logistical Efficiency) is one of the four subprojects of VALOSADE. The SMILE research focuses on a case network composed of small and medium-sized mechanical maintenance service providers and global wood-processing customers. The basic principle of the SMILE study is communication and ebusiness in the supply and demand network. This first phase of the research concentrates on creating the background for the SMILE study and for the ebusiness solutions of the maintenance case network. The focus is on general trends of ebusiness in the supply chains and networks of different industries; the total ebusiness system architecture of company networks; the ebusiness strategy of a company network; the information value chain; the different factors which influence the ebusiness solution of a company network; and the correlation between ebusiness and competitive advantage. Literature, interviews and benchmarking were used as research methods in this qualitative case study. Networks and end-to-end supply chains are organizational structures which can add value for the end customer. Information is one of the key factors in these decentralized structures. Because of the decentralization of business, information is produced and used in different companies and in different information systems. Information refinement services are needed to manage information flows in company networks between different systems. Furthermore, some new solutions, such as network information systems, are utilised in optimising network performance and in standardizing common network processes. Some cases have, however, indicated that the utilization of ebusiness in a decentralized business model is not always a necessity; the added value of ICT must be defined case-specifically. In the theory part of the report, different ebusiness and architecture models are introduced. These models are compared to the empirical case data in the research results. The biggest difference between the theory and the empirical data is that the models are mainly developed for large-scale companies, not for SMEs. This is because implemented network ebusiness solutions are mainly centred on large companies. Genuine SME-network-centred ebusiness models are quite rare, and studies in that area are few in number. Business relationships between customers and their SME suppliers nowadays concentrate more on collaborative tactical and strategic initiatives in addition to transaction-based operational initiatives. However, ebusiness systems are still mainly based on the exchange of operational transactional data. Collaborative ebusiness solutions are in the planning or pilot phase in most case companies. Furthermore, many ebusiness solutions nowadays involve only two participants, while network and end-to-end supply chain transparency and information systems are quite rare. Transaction volumes, data formats, the types of exchanged information, information criticality, the type and duration of the business relationship, the internal information systems of partners, and processes and operation models (e.g. different ordering models) differ among network companies; furthermore, companies are at different stages of networking and ebusiness readiness. Because of these factors, different customer-supplier combinations in the network must utilise totally different ebusiness architectures, technologies, systems and standards.
A priori parameterisation of the CERES soil-crop models and tests against several European data sets
Abstract:
Mechanistic soil-crop models have become indispensable tools to investigate the effect of management practices on the productivity or environmental impacts of arable crops. Ideally these models may claim to be universally applicable because they simulate the major processes governing the fate of inputs such as fertiliser nitrogen or pesticides. However, because they deal with complex systems and uncertain phenomena, site-specific calibration is usually a prerequisite to ensure their predictions are realistic. This statement implies that some experimental knowledge on the system to be simulated should be available prior to any modelling attempt, and poses a severe limitation to practical applications of models. Because the demand for more general simulation results is high, modellers have nevertheless taken the bold step of extrapolating a model tested within a limited sample of real conditions to a much larger domain. While methodological questions are often disregarded in this extrapolation process, they are specifically addressed in this paper, in particular the issue of models' a priori parameterisation. We thus implemented and tested a standard procedure to parameterise the soil components of a modified version of the CERES models. The procedure converts routinely-available soil properties into functional characteristics by means of pedo-transfer functions. The resulting predictions of soil water and nitrogen dynamics, as well as crop biomass, nitrogen content and leaf area index, were compared to observations from trials conducted in five locations across Europe (southern Italy, northern Spain, northern France and northern Germany). In three cases, the model's performance was judged acceptable when compared to experimental errors on the measurements, based on a test of the model's root mean squared error (RMSE). Significant deviations between observations and model outputs were however noted in all sites, and could be ascribed to various model routines. In decreasing importance, these were: water balance, the turnover of soil organic matter, and crop N uptake. A better match to field observations could therefore be achieved by visually adjusting related parameters, such as field-capacity water content or the size of the soil microbial biomass. As a result, model predictions fell within the measurement errors in all sites for most variables, and the model's RMSE was within the range of published values for similar tests. We conclude that the proposed a priori method yields acceptable simulations with only a 50% probability, a figure which may be greatly increased through a posteriori calibration. Modellers should thus exercise caution when extrapolating their models to a large sample of pedo-climatic conditions for which they have only limited information.
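The acceptance test described above compares the model's root mean squared error against the experimental error on the measurements; a minimal sketch of that evaluation logic follows. The function names, the error threshold, and the data values are illustrative assumptions, not those of the paper.

```python
import numpy as np

# Minimal sketch of the evaluation criterion described above: a simulated
# variable is judged acceptable when its RMSE against the observations does
# not exceed the experimental (measurement) error. Names are illustrative.

def rmse(observed, simulated):
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return float(np.sqrt(np.mean((simulated - observed) ** 2)))

def acceptable(observed, simulated, measurement_error):
    """True if the model RMSE is within the measurement error."""
    return rmse(observed, simulated) <= measurement_error

# Illustrative usage: soil water content (m3 m-3) at a single site
obs = [0.31, 0.28, 0.25, 0.22]
sim = [0.29, 0.27, 0.26, 0.20]
print(rmse(obs, sim), acceptable(obs, sim, measurement_error=0.03))
```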
Abstract:
In two previous papers [J. Differential Equations, 228 (2006), pp. 530-579; Discrete Contin. Dyn. Syst. Ser. B, 6 (2006), pp. 1261-1300] we have developed fast algorithms for the computation of invariant tori in quasi-periodic systems and developed theorems that assess their accuracy. In this paper, we present the results of implementing these algorithms and study their performance in actual implementations. More importantly, we note that, due to the speed of the algorithms and the theoretical developments about their reliability, we can compute with confidence invariant objects close to the breakdown of their hyperbolicity properties. This allows us to identify a mechanism of loss of hyperbolicity and measure some of its quantitative regularities. We find that some systems lose hyperbolicity because the stable and unstable bundles approach each other but the Lyapunov multipliers remain away from 1. We find empirically that, close to the breakdown, the distances between the invariant bundles and the Lyapunov multipliers, which are natural measures of hyperbolicity, depend on the parameters as power laws with universal exponents. We also observe that, even if the rigorous justifications in [J. Differential Equations, 228 (2006), pp. 530-579] are developed only for hyperbolic tori, the algorithms also work for elliptic tori in Hamiltonian systems. We can continue these tori and also compute some bifurcations at resonance which may lead to the existence of hyperbolic tori with nonorientable bundles. We compute manifolds tangent to nonorientable bundles.
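The empirical power-law regularities near the breakdown can be extracted with a simple log-log fit of the measured quantity against the distance to the breakdown parameter value. The sketch below is a generic illustration of such a fit; the variable names, the synthetic data, and the exponent are made up and are not those of the paper.

```python
import numpy as np

# Generic sketch: fit a power law  d(eps) ~ C * (eps_c - eps)**alpha  to a
# measured hyperbolicity indicator (e.g. the distance between the stable and
# unstable bundles) as the parameter eps approaches the breakdown value eps_c.

def fit_power_law(eps, dist, eps_c):
    """Return (alpha, C) from a least-squares fit in log-log coordinates."""
    x = np.log(eps_c - np.asarray(eps, dtype=float))
    y = np.log(np.asarray(dist, dtype=float))
    alpha, log_c = np.polyfit(x, y, 1)
    return alpha, float(np.exp(log_c))

# Illustrative synthetic data generated with alpha = 1.24, C = 2.0
eps_c = 1.0
eps = np.linspace(0.90, 0.99, 10)
dist = 2.0 * (eps_c - eps) ** 1.24
print(fit_power_law(eps, dist, eps_c))  # recovers (~1.24, ~2.0)
```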
Abstract:
This book comprises two volumes and builds on the findings of the DISMEVAL project (Developing and validating DISease Management EVALuation methods for European health care systems), funded under the European Union's (EU) Seventh Framework Programme (FP7) (Agreement no. 223277). DISMEVAL was a three-year European collaborative project conducted between 2009 and 2011. It contributed to developing new research methods and generating the evidence base to inform decision-making in the field of chronic disease management evaluation (www.dismeval.eu). In this book, we report on the findings of the project's first phase, capturing the diverse range of contexts in which new approaches to chronic care are being implemented and evaluating the outcomes of these initiatives using an explicit comparative approach and a unified assessment framework. In this first volume, we describe the range of approaches to chronic care adopted in 12 European countries. By reflecting on the facilitators and barriers to implementation, we aim to provide policy-makers and practitioners with a portfolio of options to advance chronic care approaches in a given policy context.
Abstract:
In the modern high-tech industry, Advanced Planning and Scheduling (APS) systems provide the basis for e-business solutions towards the suppliers and the customers. One objective of this thesis was to clarify modern supply chain management with APS systems and especially to concentrate on the area of Collaborative Planning. For Advanced Planning and Scheduling systems to be complete and usable, user interfaces are needed. Current Visual Basic user interfaces have faced many complaints and arguments from the users as well as from the development team. This thesis analyzes the reasons and causes of the encountered problems and also provides ways to overcome them. The decision has been made to build the new user interfaces to be Web-enabled. Therefore another objective of this thesis was to research and find suitable technologies for building the Web-based user interfaces for Advanced Planning and Scheduling systems in the Nokia Demand/Supply Planning business area. A comparison between the most suitable technologies is made. Usability issues of Web-enabled user interfaces are also covered. The empirical part of the thesis includes the design and implementation of a Web-based user interface with the chosen technology for a particular APS module that enables Collaborative Planning with suppliers.
Abstract:
The phyllochron is defined as the time required for the appearance of successive leaves on a plant; it characterises plant growth, development and adaptation to the environment. To assess growth and adaptation in strawberry cultivars grown intercropped with fig trees, the phyllochron was estimated in these production systems and in the monocrop. The experiment was conducted in greenhouses at the University of Passo Fundo (28º15'41'' S, 52º24'45'' W and 709 m) from June 8th to September 4th, 2009; this comprised the period from transplanting until the 2nd flowering. The cultivars Aromas, Camino Real, Albion, Camarosa and Ventana, whose seedlings originated from the Agrícola LLahuen Nursery in Chile, as well as Festival, Camino Real and Earlibrite, which originated from the Viansa S.A. Nursery in Argentina, were grown in white polyethylene bags filled with commercial substrate (Tecnomax®) and evaluated. The treatments were arranged in a randomised block design with four replicates. A linear regression was performed between the leaf number (LN) in the main crown and the accumulated thermal time (ATT). The phyllochron (degree-days leaf-1) was estimated as the inverse of the slope of the linear regression. The data were submitted to ANOVA, and when significance was observed, the means were compared using the Tukey test (p < 0.05). The mean and standard deviation of the phyllochron of strawberry cultivars intercropped with fig trees varied from 149.35ºC day leaf-1 ± 31.29 in the Albion cultivar to 86.34ºC day leaf-1 ± 34.74 in the Ventana cultivar. Significant differences were observed among cultivars produced in a soilless environment, with higher values recorded for Albion (199.96ºC day leaf-1 ± 29.7), which required more degree-days to produce a leaf, while cv. Ventana (85.76ºC day leaf-1 ± 11.51) exhibited a lower mean phyllochron value. Based on these results, Albion requires more degree-days to produce a leaf compared to cv. Ventana. It was concluded that strawberry cultivars can be grown intercropped with fig trees (cv. Roxo de Valinhos).
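The phyllochron estimate described above, the inverse of the slope of the regression of leaf number on accumulated thermal time, can be sketched as follows. The data values below are made up for illustration and are not the experiment's measurements.

```python
import numpy as np

# Sketch of the phyllochron estimate described above: regress main-crown leaf
# number (LN) on accumulated thermal time (ATT, degree-days) and take the
# inverse of the slope. The data below are illustrative only.

att = np.array([0.0, 100.0, 200.0, 300.0, 400.0, 500.0])  # accumulated thermal time (degree-days)
ln = np.array([1.0, 1.7, 2.4, 3.0, 3.7, 4.4])             # leaf number on the main crown

slope, intercept = np.polyfit(att, ln, 1)   # LN = intercept + slope * ATT
phyllochron = 1.0 / slope                   # degree-days per leaf
print(round(phyllochron, 1))                # ~147 degree-days leaf^-1 for these made-up data
```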
Abstract:
OBJECTIVES: Immunohistochemistry (IHC) has become a promising method for pre-screening ALK-rearrangements in non-small cell lung carcinomas (NSCLC). Various ALK antibodies, detection systems and automated immunostainers are available. We therefore aimed to compare the performance of the monoclonal 5A4 (Novocastra, Leica) and D5F3 (Cell Signaling, Ventana) antibodies using two different immunostainers. Additionally we analyzed the accuracy of prospective ALK IHC-testing in routine diagnostics. MATERIALS AND METHODS: Seventy-two NSCLC with available ALK FISH results and enriched for FISH-positive carcinomas were retrospectively analyzed. IHC was performed on BenchMarkXT (Ventana) using 5A4 and D5F3, respectively, and additionally with 5A4 on Bond-MAX (Leica). Data from our routine diagnostics on prospective ALK-testing with parallel IHC, using 5A4, and FISH were available from 303 NSCLC. RESULTS: All three IHC protocols showed congruent results. Only 1/25 FISH-positive NSCLC (4%) was false negative by IHC. For all three IHC protocols the sensitivity, specificity, positive (PPV) and negative predictive values (NPV) compared to FISH were 96%, 100%, 100% and 97.8%, respectively. In the prospective cohort 3/32 FISH-positive (9.4%) and 2/271 FISH-negative (0.7%) NSCLC were false negative and false positive by IHC, respectively. In routine diagnostics the sensitivity, specificity, PPV and NPV of IHC compared to FISH were 90.6%, 99.3%, 93.5% and 98.9%, respectively. CONCLUSIONS: 5A4 and D5F3 are equally well suited for detecting ALK-rearranged NSCLC. BenchMark and BOND-MAX immunostainers can be used for IHC with 5A4. True discrepancies between IHC and FISH results do exist and need to be addressed when implementing IHC in an ALK-testing algorithm.
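The prospective-cohort accuracy figures quoted above follow directly from the reported counts (3 of 32 FISH-positive cases negative by IHC, 2 of 271 FISH-negative cases positive by IHC); the short sketch below simply reproduces them from those counts.

```python
# Reproduce the prospective-cohort accuracy figures from the counts reported
# above: 3/32 FISH-positive cases were IHC-negative (false negatives) and
# 2/271 FISH-negative cases were IHC-positive (false positives).

fish_pos, fish_neg = 32, 271
fn, fp = 3, 2
tp, tn = fish_pos - fn, fish_neg - fp

sensitivity = tp / fish_pos   # 29/32   -> 90.6%
specificity = tn / fish_neg   # 269/271 -> 99.3%
ppv = tp / (tp + fp)          # 29/31   -> 93.5%
npv = tn / (tn + fn)          # 269/272 -> 98.9%

print(f"{sensitivity:.1%} {specificity:.1%} {ppv:.1%} {npv:.1%}")
```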
Abstract:
This thesis consists of four articles and an introductory section. The main research questions in all the articles concern proportionality and party success in Europe, at the European, national or district level. Proportionality in this thesis denotes the proximity of the seat shares parties receive to their respective vote shares after the electoral system's allocation process. This proportionality can be measured through numerous indices that illustrate either the overall proportionality of an electoral system or a particular election. The correspondence of a single party's seat share to its vote share can also be measured. Overall proportionality is essential in three of the articles (1, 2 and 4), where the system's performance is studied by means of plots. In article 3, minority party success is measured by advantage ratios that reveal a single party's gains or losses in the votes-to-seats allocation process. The first article asks how proportional the European parliamentary (EP) electoral systems are, how they compare with results gained from earlier studies, and how the EP electoral systems treat different-sized parties. The reasons for different outcomes are looked for in the explanations given by traditional electoral studies, i.e. electoral system variables. The countries studied (EU15) apply electoral systems that vary in many important aspects, even though a certain amount of uniformity has been aspired to for decades. Since the electoral systems of the EP elections closely resemble those of the national elections, the same kinds of profiles emerge as in the national elections. The electoral systems indeed treat the parties differentially, and six different profile types can be found. The counting method seems to somewhat determine the profile group, but the strongest variables determining the shape of a country's profile appear to be the average district magnitude and the number of seats allocated to each country. The second article also focuses on the overall proportionality performance of an electoral system, but here the focus is on the impact of electoral system changes. I have developed a new method of visualizing some previously used indices and some new indices for this purpose. The aim is to draw a comparable picture of these electoral systems' changes and their effects. The cases which illustrate this method are four electoral systems where a change occurred in one of the system variables while the rest remained unchanged. The studied cases include the French, Greek and British European parliamentary systems and the Swedish national parliamentary system. The changed variables are electoral type (plurality changed to PR in the UK), magnitude (France splitting the nationwide district into eight smaller districts), legal threshold (Greece introducing a three percent threshold) and counting method (d'Hondt was changed to modified Sainte-Laguë in Sweden). Radar plots from the elections before and after the changes are drawn for all country cases. To quantify the change, the change in the area created by the plots has also been calculated. Using these radar plots we can observe that the changes in electoral system type, magnitude, and also to some extent legal threshold had an effect on overall proportionality and accessibility for small parties, while the change between the two highest-averages counting methods had none. The third article studies the success minority parties have had in nine electoral systems in heterogeneous European countries.
This article aims to add more motivation as to why we should care how different-sized parties are treated by electoral systems. Since many of the parties that aspire to represent minorities in European countries are small, the possibilities for small parties are highlighted. The theory of consociational (or power-sharing) democracy suggests that, in heterogeneous societies, a proportional electoral system will provide the fairest treatment of minority parties. The OSCE Lund Recommendations propose a number of electoral system features which would improve minority representation. In this article some party variables, namely the unity of the minority parties and the geographical concentration of the minorities, were included among the possible explanations. The conclusions are that the central points affecting minority success were indeed these non-electoral-system variables rather than the electoral system itself. Moreover, the size of the party was a major factor governing success in all the systems investigated; large parties benefited in all the studied electoral systems. In the fourth article the proportionality profiles are again applied, but this time to district-level results in Finnish parliamentary elections. The level of proportionality distortion is also studied by way of indices. The average magnitudes during the studied period range from 7.5 to 26.2 in the Finnish electoral districts, and this opens up unequal opportunities for parties in different districts and affects the shape of the profiles. The intra-country case allows the focus to be placed on the effect of district magnitude, since all other features of the electoral system are kept constant in an intra-country study. The time span of the study is from 1962 to 2007, i.e. the period during which the districts have largely been the same geographically. The plots and indices tell the same story: district magnitude and electoral alliances matter. The district magnitude is connected to the overall proportionality of the electoral districts according to both indices, and the profiles are, as expected, also closer to perfect proportionality in large districts. Alliances have helped some small parties to gain a much higher seat share than their respective vote share, and these successes affect some of the profiles. The profiles also show a consistent pattern of benefits for the small parties who ally with the larger parties.
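The advantage ratio used in the third article is simply a party's seat share divided by its vote share; a minimal sketch with made-up vote and seat figures is given below (values above 1 indicate over-representation).

```python
# Minimal sketch of the advantage ratio described above: a party's seat share
# divided by its vote share. The vote and seat figures are made up for
# illustration and do not come from the thesis.

def advantage_ratio(seat_share, vote_share):
    return seat_share / vote_share

votes = {"Party A": 0.42, "Party B": 0.35, "Minority list": 0.08}
seats = {"Party A": 0.48, "Party B": 0.37, "Minority list": 0.05}

for party in votes:
    print(party, round(advantage_ratio(seats[party], votes[party]), 2))
# Party A 1.14, Party B 1.06, Minority list 0.62 -> the small list is under-represented
```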
Abstract:
Bloodstream infections and sepsis are a major cause of morbidity and mortality. The successful outcome of patients suffering from bacteremia depends on a rapid identification of the infectious agent to guide optimal antibiotic treatment. The analysis of Gram stains from a positive blood culture can be conducted rapidly and already significantly impacts the antibiotic regimen. However, the accurate identification of the infectious agent is still required to establish the optimal targeted treatment. We present here a simple and fast bacterial pellet preparation from a positive blood culture that can be used as a sample for several essential downstream applications such as identification by MALDI-TOF MS, antibiotic susceptibility testing (AST) by disc diffusion assay or automated AST systems, and automated PCR-based diagnostic testing. The performance of these different identification and AST systems applied directly on the blood culture bacterial pellets is very similar to the performance normally obtained from isolated colonies grown on agar plates. Compared to conventional approaches, the rapid acquisition of a bacterial pellet significantly reduces the time to report both identification and AST. Thus, following blood culture positivity, identification by MALDI-TOF can be reported within less than 1 hr, whereas results of AST by automated AST systems or disc diffusion assays are available within 8 and 18 hr, respectively. Similarly, the results of a rapid PCR-based assay can be communicated to the clinicians less than 2 hr following the report of a bacteremia. Together, these results demonstrate that the rapid preparation of a blood culture bacterial pellet has a significant impact on the identification and AST turnaround time and thus on the successful outcome of patients suffering from bloodstream infections.