916 results for over-generalization and under-generalization problems


Relevance: 100.00%

Abstract:

Objective To investigate the extent of heat load problems, caused by the combination of excessive temperature and humidity, in Holstein-Friesian cows in Australia, and to outline how the resulting milk production losses and consequent costs can be estimated and minimised. Procedures Long-term meteorological data for Australia were analysed to determine the distribution of hot conditions over space and time. Fifteen dairy production regions were identified for higher-resolution data analysis. Both the raw meteorological data and their integration into a temperature-humidity thermal index were compiled into a computer program. This mapping software displays the distribution of climatic patterns, both Australia-wide and within the selected dairying regions. Graphical displays of the variation in historical records for 200 locations in the 15 dairying regions are also available. As a separate study, production data from research stations, on-farm trials and milk factory records were statistically analysed and correlated with the climatic indices to estimate production losses due to hot conditions. Results Both milk yields and milk constituents declined with increases in the temperature-humidity index. The onset and rate of this decline depend on a number of factors, including location, level of production, adaptation, and management regime. These results have been integrated into a farm-level economic analysis for managers of dairy properties. Conclusion By considering the historical patterns of hot conditions over time and space, along with expected production losses, managers of dairy farms can now conduct an economic evaluation of investment strategies to alleviate heat loads. These strategies include the provision of sprinklers, shade structures, or combinations of these.
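The abstract does not state which temperature-humidity index formulation was used; a commonly quoted dairy formulation (an assumption here, not taken from the paper) is THI = 0.8·T + RH·(T − 14.4) + 46.4, with T in °C and RH as a fraction. A minimal sketch:

```python
def thi(temp_c: float, rel_humidity: float) -> float:
    """Temperature-humidity index for air temperature (deg C) and
    relative humidity (fraction 0-1). One common dairy formulation;
    the index used in the study may differ."""
    return 0.8 * temp_c + rel_humidity * (temp_c - 14.4) + 46.4

# heat-load thresholds of roughly 72 (mild) and 78 (severe)
# are often quoted for Holstein-Friesian cows
stress = thi(30.0, 0.60)
```

On this formulation a 30 °C, 60% humidity afternoon gives a THI close to 80, i.e. well into the heat-stress range.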

Relevance: 100.00%

Abstract:

Measuring Job Openings: Evidence from Swedish Plant Level Data. In modern macroeconomic models, "job openings" are a key component. Thus, when taking these models to the data, we need an empirical counterpart to the theoretical concept of job openings. To achieve this, the literature relies on job vacancies measured in either survey or register data. Insofar as measured vacancies capture the concept of job openings well, we should see a tight relationship between vacancies and subsequent hires at the micro level. To investigate this, I analyze a new data set of Swedish hires and job vacancies at the plant level covering the period 2001-2012. I find that vacancies contain little power in predicting hires over and above (i) whether the number of vacancies is positive and (ii) plant size. Building on this, I propose an alternative measure of job openings in the economy. This measure (i) better predicts hiring at the plant level and (ii) provides a better-fitting aggregate matching function vis-à-vis the traditional vacancy measure. Firm Level Evidence from Two Vacancy Measures. Using firm-level survey and register data for both Sweden and Denmark, we show systematic mis-measurement in both vacancy measures. While the register-based measure on aggregate constitutes a quarter of the survey-based measure, the latter is not a superset of the former. To obtain the full set of unique vacancies in these two databases, the number of survey vacancies should be multiplied by approximately 1.2. Importantly, this adjustment factor varies over time and across firm characteristics. Our findings have implications for both the search-matching literature and policy analysis based on vacancy measures: observed changes in vacancies can be an outcome of changes in mis-measurement, and are not necessarily changes in the actual number of vacancies. Swedish Unemployment Dynamics.
We study the contribution of different labor market flows to business cycle variations in unemployment in the context of a dual labor market. To this end, we develop a decomposition method that allows for a distinction between permanent and temporary employment. We also allow for the slow convergence to steady state which is characteristic of European labor markets. We apply the method to a new Swedish data set covering the period 1987-2012 and show that the relative contributions of inflows to and outflows from unemployment are roughly 60/30. The remaining 10% are due to flows not involving unemployment. Even though temporary contracts cover only 9-11% of the working age population, variations in flows involving temporary contracts account for 44% of the variation in unemployment. We also show that the importance of flows involving temporary contracts is likely to be understated if one does not account for non-steady-state dynamics. The New Keynesian Transmission Mechanism: A Heterogeneous-Agent Perspective. We argue that a 2-agent version of the standard New Keynesian model, in which a "worker" receives only labor income and a "capitalist" only profit income, offers insights about how income inequality affects the monetary transmission mechanism. Under rigid prices, monetary policy affects the distribution of consumption, but it has no effect on output, as workers choose not to change their hours worked in response to wage movements. In the corresponding representative-agent model, in contrast, hours do rise after a monetary policy loosening due to a wealth effect on labor supply: profits fall, thus reducing the representative worker's income. If wages are rigid too, however, the monetary transmission mechanism is active and resembles that in the corresponding representative-agent model. Here, workers are not on their labor supply curve and hence respond passively to demand, and profits are procyclical.
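For the unemployment-dynamics study above, the decomposition builds on the standard two-state flow identity: with separation rate s and job-finding rate f, steady-state unemployment is u* = s/(s + f), and deviations from it decay only gradually (the non-steady-state dynamics the abstract stresses). A minimal sketch with invented rates, not estimates from the Swedish data:

```python
def steady_state_unemployment(s: float, f: float) -> float:
    # inflow s*(1 - u) balances outflow f*u  =>  u* = s / (s + f)
    return s / (s + f)

def unemployment_path(u0: float, s: float, f: float, periods: int):
    # discrete-time convergence of the unemployment rate toward steady state
    path, u = [u0], u0
    for _ in range(periods):
        u = u + s * (1 - u) - f * u
        path.append(u)
    return path

u_star = steady_state_unemployment(s=0.02, f=0.25)      # illustrative monthly rates
path = unemployment_path(u0=0.12, s=0.02, f=0.25, periods=24)
```

The per-period convergence factor is 1 − s − f, which is why flow contributions computed under a steady-state assumption can misstate the role of high-turnover temporary contracts.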

Relevance: 100.00%

Abstract:

Adrenomedullin (AM) has two specific receptors formed by the calcitonin-receptor-like receptor (CL) and receptor activity-modifying protein (RAMP) 2 or 3. These are known as AM1 and AM2 receptors, respectively. In addition, AM has appreciable affinity for the CGRP1 receptor, composed of CL and RAMP1. The AM1 receptor has a high degree of selectivity for AM over CGRP and other peptides, and AM22-52 is an effective antagonist at this receptor. By contrast, the AM2 receptor shows less specificity for AM, having appreciable affinity for βCGRP. Here, CGRP8-37 is either equipotent or more effective as an antagonist than AM22-52, depending on the species from which the receptor components are derived. Thus, under the appropriate circumstances it seems that βCGRP might be able to activate both CGRP1 and AM2 receptors, and AM could activate both AM1 and AM2 receptors as well as CGRP1 receptors. Current peptide antagonists are not sufficiently selective to discriminate between these three receptors. The CGRP-selectivity of RAMP1 and RAMP3 may be conferred by a putative disulfide bond from the N-terminus to the middle of the extracellular domain of these molecules. This bond is not present in RAMP2. Copyright © 2004 Humana Press Inc. All rights of any nature whatsoever reserved.

Relevance: 100.00%

Abstract:

Attractor properties of a popular discrete-time neural network model are illustrated through numerical simulations. The most complex dynamics are found to occur within particular ranges of parameters controlling the symmetry and magnitude of the weight matrix. A small network model is observed to produce fixed points, limit cycles, mode-locking, the Ruelle-Takens route to chaos, and the period-doubling route to chaos. Training algorithms for tuning this dynamical behaviour are discussed. Training can be an easy or difficult task, depending on whether the problem requires the use of temporal information distributed over long time intervals. Such problems require training algorithms which can handle hidden nodes. The most prominent of these algorithms, backpropagation through time, solves the temporal credit assignment problem in a way which works only if the relevant information is distributed locally in time. The Moving Targets algorithm works for the more general case, but is computationally intensive and prone to local minima.
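The abstract does not reproduce the network equations; a common discrete-time model in this family updates the state as x(t+1) = tanh(g·Wx(t)), with the gain g and the (a)symmetry of W playing the role of the control parameters described above. An illustrative simulation (W and g chosen arbitrarily, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 3
W = rng.normal(0.0, 1.0, (n, n))    # asymmetric weight matrix
gain = 2.0                          # scales the magnitude of the weights

x = rng.normal(0.0, 1.0, n)
trajectory = []
for _ in range(500):
    x = np.tanh(gain * W @ x)       # discrete-time network update
    trajectory.append(x.copy())
trajectory = np.asarray(trajectory)
```

Sweeping `gain`, or symmetrising `W` as `(W + W.T) / 2`, moves such a model between fixed points, limit cycles and chaotic regimes, which is the parameter dependence the abstract describes.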

Relevance: 100.00%

Abstract:

Quaternary ammonium exchanged laponites (Quat-laponites) show selectivity in the adsorption of phenols and chlorinated phenols. Strong adsorbate-adsorbent interactions are indicated by the adsorption isotherms. Adsorption of phenols and chlorinated phenols by Quat-smectites is greater than that by the Bi Quat-smectites prepared in this study. It is thought that the quaternary ammonium exchanged smectite components of the Bi Quat-smectites interact with each other (adsorbent-adsorbent interactions), reducing the number of sites available for adsorbate-adsorbent interactions. Solidification/stabilisation studies of 2-chlorophenol show that a blend of ground granulated blast furnace slag and ordinary Portland cement attenuates 2-chlorophenol more effectively than ordinary Portland cement alone. Tetramethyl ammonium- (TMA-) and tetramethyl phosphonium- (TMP-) montmorillonites were exposed to solutions of phenol or chlorinated phenols. TMP-montmorillonite was the better adsorbent and preferentially adsorbed 4-chlorophenol over phenol. Hydration of the interlayer cations occurs to a greater extent in the TMA-montmorillonite than in the TMP-montmorillonite, restricting interlayer adsorption. In contrast to the behaviour observed for phenols and chlorinated phenols, the Quat-smectites were ineffective as adsorbents for triphenyltin hydroxide and bis(tributyltin) oxide at room temperature. Under microwave conditions, only bis(tributyltin) oxide was adsorbed by the quaternary ammonium exchanged smectites. Bis(tributyltin) oxide was adsorbed from ethanol on the surface of the smectite clays at room temperature and under microwave conditions. The adsorbate-adsorbent interactions were weak. Adsorption is accompanied by conversion of bis(tributyltin) oxide to a different tin(IV) species and the release of sodium cations from the montmorillonite interlayer region.
Attempts to introduce conditions suitable for charge transfer interactions between synthesised quaternary ammonium compounds and 2,4,6-trichlorophenol are documented. Transition metal complex exchanged clays adsorb 2,4,6-trichlorophenol and phenol. Strong adsorbate-adsorbent interactions (Type I isotherms) occur when the adsorbate is 2,4,6-trichlorophenol and when the adsorbent is [Fe(bipy)3]2+ exchanged montmorillonite or [Co(bipy)3]3+ exchanged montmorillonite. The 2,2'-bipyridyl ligands of the adsorbents are electron rich and the 2,4,6-trichlorophenol is electron deficient. This may have enhanced adsorbate-adsorbent interactions.

Relevance: 100.00%

Abstract:

Large monitoring networks are becoming increasingly common and can generate large datasets, from thousands to millions of observations in size, often with high temporal resolution. Processing large datasets using traditional geostatistical methods is prohibitively slow, and in real-world applications different types of sensor can be found across a monitoring network. Heterogeneities in the error characteristics of different sensors, both in terms of distribution and magnitude, present problems for generating coherent maps. An assumption in traditional geostatistics is that observations are made directly of the underlying process being studied and that the observations are contaminated with Gaussian errors. Under this assumption, sub-optimal predictions will be obtained if the error characteristics of the sensor are effectively non-Gaussian. One method, model-based geostatistics, imposes a Gaussian process prior over the (latent) process being studied, with the sensor model forming part of the likelihood term. One problem with this type of approach is that the corresponding posterior distribution will be non-Gaussian and computationally demanding, as Monte Carlo methods have to be used. An extension of a sequential, approximate Bayesian inference method enables observations with arbitrary likelihoods to be treated in a projected process kriging framework which is less computationally intensive. The approach is illustrated using a simulated dataset with a range of sensor models and error characteristics.
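For the Gaussian case that the traditional assumption describes, the kriging predictor is just the Gaussian-process posterior; it is this closed-form update that is lost when the sensor likelihood is non-Gaussian. A minimal sketch of the Gaussian case (RBF kernel, noise level and length-scale invented for illustration):

```python
import numpy as np

def rbf(a, b, length=1.0, signal=1.0):
    # squared-exponential covariance between two sets of 1-D locations
    d = a[:, None] - b[None, :]
    return signal**2 * np.exp(-0.5 * (d / length) ** 2)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 5.0, 20)                       # sensor locations
y = np.sin(x) + rng.normal(0.0, 0.1, x.size)        # Gaussian-noise observations

xs = np.linspace(0.0, 5.0, 50)                      # prediction locations
K = rbf(x, x) + 0.1**2 * np.eye(x.size)             # noisy-data covariance
Ks = rbf(xs, x)
mean = Ks @ np.linalg.solve(K, y)                   # kriging / GP posterior mean
cov = rbf(xs, xs) - Ks @ np.linalg.solve(K, Ks.T)   # posterior covariance
```

With a non-Gaussian sensor model the likelihood no longer combines with the prior in closed form, which is what motivates the sequential approximate inference scheme described above.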

Relevance: 100.00%

Abstract:

Paediatric intensive care is an expanding specialty that has been shown to improve the quality of care provided to critically ill children. An important aspect of the management of critically ill children is the provision of effective sedation to reduce stress and anxiety during their stay in intensive care. However, achieving effective and safe sedation in these children is recognised as a challenge that is not without risk. Often children receive too much or too little sedation, resulting in over-sedation or under-sedation respectively. These problems have arisen owing to a lack of information regarding the altered pharmacokinetics and pharmacodynamics of medicines administered to critically ill children. In addition, there are few validated sedation scoring systems in practice with which to monitor level of sedation and titrate medication appropriately. This study consisted of two stages. Stage 1 investigated the reproducibility and practicality of two observational sedation assessment scales for use in critically ill children. The two scales differed in design: the first was simple, requiring a single assessment of the patient; the second was more complex, requiring assessment of five patient parameters to obtain an overall sedation score. Both scales were found to achieve good reproducibility (kappa values 0.50 and 0.62 respectively). The practicality of each sedation scale was assessed by obtaining nursing staff opinion about both scales using questionnaire and interview techniques. It was established that nursing staff preferred the second, more complex sedation scale, mainly because it was perceived to give a more accurate assessment of level of sedation and anxiety rather than merely level of sedation. Stage 2 investigated the pharmacokinetics and pharmacodynamics of midazolam in critically ill children.
Fifty-two children, aged between 0 and 18 years, were recruited to the study and 303 blood samples were taken to analyse midazolam and its metabolites, 1-hydroxymidazolam (1-OH) and 4-hydroxymidazolam (4-OH). Analysis of plasma was undertaken using high performance liquid chromatography. A significant correlation was found between midazolam plasma concentration and sedative effect (r=0.598, p=0.01). It was found that a midazolam plasma concentration of 223 ng/ml (±31.9) achieved a satisfactory level of sedation. Only a poor correlation was found between dose of midazolam and plasma concentration of midazolam. Similarly, only a poor correlation was found between sedative effect and dose of midazolam. Clearance of midazolam was found to be 6.3 ml/kg/min (±0.36), which is lower than that reported in healthy children (9.11-13.3 ml/kg/min). Age-related differences in midazolam clearance were observed in the study. Neonates produced the lowest clearance values (1.63 ml/kg/min), compared to children aged 1 to 12 months (8.52 ml/kg/min), who achieved the highest clearance values. Clearance was found to decrease after the age of 12 months, to values of 5.34 ml/kg/min in children aged 7 years and above. Patients with renal (n=5) and liver impairment (n=4) were found to have reduced midazolam clearance (1.37 and 0.74 ml/kg/min respectively). Plasma concentrations of 1-OH and 4-OH ranged from 0-5189 ng/ml and 0-271 ng/ml respectively. All children were found to be capable of producing both metabolites irrespective of age, although no trend was established between age and extent of production of either metabolite. Disease state was found to affect production of 1-OH. Patients with renal impairment (n=5) produced the lowest 1-OH:midazolam plasma ratio (0.059) compared to patients with head injury (0.858). Patients with severe liver impairment were found to be capable of producing both metabolites despite having a severely damaged liver.
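The inter-rater reproducibility figures in Stage 1 (kappa values 0.50 and 0.62) can be computed with a generic Cohen's kappa for two raters; the sketch below is a standard unweighted kappa, not the study's own scoring code:

```python
import numpy as np

def cohens_kappa(rater1, rater2, n_categories):
    """Unweighted Cohen's kappa for two raters scoring the same patients."""
    cm = np.zeros((n_categories, n_categories))
    for a, b in zip(rater1, rater2):
        cm[a, b] += 1                         # confusion matrix of paired scores
    n = cm.sum()
    p_obs = np.trace(cm) / n                  # observed agreement
    p_exp = (cm.sum(0) @ cm.sum(1)) / n**2    # agreement expected by chance
    return (p_obs - p_exp) / (1 - p_exp)
```

Values in the 0.41-0.60 band are conventionally read as moderate agreement and 0.61-0.80 as substantial, which matches the "good reproducibility" interpretation of the two scales above.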

Relevance: 100.00%

Abstract:

This work is undertaken in an attempt to understand the processes at work at the cutting edge of the twist drill. Extensive drill life testing performed by the University has reinforced a survey of previously published information. This work demonstrated that there are two specific aspects of drilling which have not previously been explained comprehensively. The first concerns the interrelating of process data between differing drilling situations. There is no method currently available which allows the cutting geometry of drilling to be defined numerically, so such comparisons, where made, are purely subjective. Section one examines this problem by taking as an example a 4.5mm drill suitable for use with aluminium. This drill is examined using a prototype solid modelling program to explore how the required numerical information may be generated. The second aspect is the analysis of drill stiffness. What aspects of drill stiffness produce the very great difference in performance between short flute length, medium flute length and long flute length drills? These differences exist between drills of identical point geometry, and the practical superiority of short drills has been known to shop floor drilling operatives since drilling was first introduced. This problem has been dismissed repeatedly as over-complicated, but section two provides a first approximation and shows that, at least for smaller drills of 4.5mm, the effects are highly significant. Once the cutting action of the twist drill is defined geometrically, there is a huge body of machinability data that becomes applicable to the drilling process. Work remains to interpret the very high inclination angles of the drill cutting process in terms of cutting forces and tool wear, but aspects of drill design may already be looked at in new ways with the prospect of a more analytical approach rather than the present mix of experience and trial and error.
Other problems are specific to the twist drill, such as the behaviour of the chips in the flute. It is now possible to predict the initial direction of chip flow leaving the drill cutting edge. For the future the parameters of further chip behaviour may also be explored within this geometric model.

Relevance: 100.00%

Abstract:

This exploratory study is concerned with the integrated appraisal of multi-storey dwelling blocks which incorporate large concrete panel systems (LPS). The first step was to look at U.K. multi-storey dwelling stock in general, and the stock under the management of Birmingham City Council in particular. The information was taken from the databases of three departments in the City of Birmingham, and rearranged in a new database using a suite of PC software called 'PROXIMA' for clarity and analysis. One hundred blocks of this stock were built using large concrete panel systems. Thirteen LPS blocks were chosen as case studies for this research, depending mainly on the height and age of the block. A new integrated appraisal technique has been created for the LPS dwelling blocks, which takes into account most of the physical and social factors affecting the condition and acceptability of these blocks. This appraisal technique is built up in a hierarchical form, moving from the general approach to particular elements (a tree model). It comprises two main approaches: physical and social. In the physical approach, the building is viewed as a series of manageable elements and sub-elements covering every physical or environmental factor of the block, from which the condition of the block is analysed. A quality score system has been developed which depends mainly on the qualitative and quantitative condition of each category in the appraisal tree model, and leads to a physical ranking order of the study blocks. In the social appraisal approach, the residents' satisfaction with, and attitude toward, their multi-storey dwelling block was analysed in relation to: a. biographical and housing-related characteristics; and b. social, physical and environmental factors associated with this sort of dwelling, block and estate in general. The random sample consisted of 268 residents living in the 13 case study blocks.
Data collected were analysed using frequency counts, percentages, means, standard deviations, Kendall's tau, r-correlation coefficients, t-tests, analysis of variance (ANOVA) and multiple regression analysis. The analysis showed a marginally positive satisfaction and attitude towards living in the block. The five most significant factors associated with the residents' satisfaction and attitude, in descending order, were: the estate in general; the service categories in the block, including the heating system and lift services; vandalism; the neighbours; and the security system of the block. An important attribute of this method is that it is relatively inexpensive to implement, especially when compared to alternatives adopted by some local authorities and the BRE. It is designed to save time, money and effort, to aid decision making, and to provide a ranked priority for the multi-storey dwelling stock, in addition to many other advantages. A series of solution options to the problems of the block was sought for selection and testing before implementation. The traditional solutions have usually resulted in either demolition or costly physical maintenance and social improvement of the blocks. However, a new solution has now emerged which is particularly suited to structurally sound units. The solution of 're-cycling' might incorporate the reuse of an entire block or part of it, by removing panels, slabs and so forth from the upper floors in order to reconstruct them as low-rise accommodation.
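The tree-model quality score described above amounts to a weighted roll-up of element scores through the hierarchy; the sketch below uses invented elements, weights and 0-10 scores purely to illustrate the mechanics, not the study's actual categories or values:

```python
# hypothetical appraisal tree: {approach: (weight, {element: (weight, score)})}
tree = {
    "physical": (0.6, {"structure": (0.5, 7.0),
                       "services": (0.3, 4.0),
                       "environment": (0.2, 6.0)}),
    "social":   (0.4, {"satisfaction": (0.7, 5.0),
                       "security": (0.3, 3.0)}),
}

def block_score(tree):
    """Roll element scores up the appraisal tree as weighted sums."""
    total = 0.0
    for weight, elements in tree.values():
        total += weight * sum(w * s for w, s in elements.values())
    return total

score = block_score(tree)   # comparable across blocks, giving a ranking order
```

Scoring every case-study block with the same tree is what makes the resulting numbers comparable and yields the ranked priority list mentioned above.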

Relevance: 100.00%

Abstract:

We experimentally demonstrate 38-Gbit/s offset-16QAM OFDM over 840 km without a guard interval, and numerically show that 112-Gbit/s PDM offset-QPSK OFDM achieves a 23% increase in net capacity over conventional OFDM at the same transmission reach. © 2014 Optical Society of America.

Relevance: 100.00%

Abstract:

In nonlinear and stochastic control problems, learning an efficient feed-forward controller is not amenable to conventional neurocontrol methods. For these approaches, estimating and then incorporating uncertainty in the controller and feed-forward models can produce more robust control results. Here, we introduce a novel inversion-based neurocontroller for solving control problems involving uncertain nonlinear systems, which can also compensate for multi-valued systems. The approach uses recent developments in neural networks, especially in the context of modelling statistical distributions, which are applied to forward and inverse plant models. Provided that certain conditions are met, an estimate of the intrinsic uncertainty in the outputs of a neural network can be obtained from the statistical properties of the network. More generally, multi-component distributions can be modelled by the mixture density network. Based on importance sampling from these distributions, a novel robust inverse control approach is obtained. This importance sampling provides a structured and principled approach to constraining the complexity of the search space for the ideal control law. The developed methodology circumvents the dynamic programming problem by using the predicted neural network uncertainty to localise the possible control solutions to consider. A nonlinear multi-variable system with different delays between the input-output pairs is used to demonstrate the successful application of the developed control algorithm. The proposed method is suitable for redundant control systems and allows us to model strongly non-Gaussian distributions of the control signal as well as processes with hysteresis. © 2004 Elsevier Ltd. All rights reserved.
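The sampling step can be illustrated on a toy multi-valued problem: a plant y = u² whose inverse has two branches, a two-component Gaussian mixture standing in for the mixture density network, and candidate controls ranked by how well the forward model maps them to the target. Everything below (the plant, the mixture parameters) is invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def plant(u):
    # toy forward model with a multi-valued inverse: u = +/- sqrt(y)
    return u ** 2

# stand-in for a trained mixture density network over the inverse mapping
means = np.array([-1.0, 1.0])
stds = np.array([0.2, 0.2])
weights = np.array([0.5, 0.5])

def sample_inverse(n):
    comp = rng.choice(2, size=n, p=weights)
    return rng.normal(means[comp], stds[comp])

target = 1.0
candidates = sample_inverse(200)                        # localised search space
best = candidates[np.argmin((plant(candidates) - target) ** 2)]
```

Sampling only where the inverse model puts probability mass is what keeps the search for the control law tractable, as the abstract argues.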

Relevance: 100.00%

Abstract:

CO vibrational spectra over catalytic nanoparticles under high coverages/pressures are discussed from a DFT perspective. Hybrid B3LYP and PBE DFT calculations of CO chemisorbed over Pd4 and Pd13 nanoclusters, and a 1.1 nm Pd38 nanoparticle, have been performed in order to simulate the corresponding coverage-dependent infrared (IR) absorption spectra, and hence provide a quantitative foundation for the interpretation of experimental IR spectra of CO over Pd nanocatalysts. B3LYP simulated IR intensities are used to quantify site occupation numbers through comparison with experimental DRIFTS spectra, allowing an atomistic model of CO surface coverage to be created. DFT adsorption energetics for low CO coverage (θ → 0) suggest the CO binding strength follows the order hollow > bridge > linear, even for dispersion-corrected functionals, for sub-nanometre Pd nanoclusters. For a Pd38 nanoparticle, hollow- and bridge-bound CO are energetically similar (hollow ≈ bridge > atop). This ordering is well known not to hold at the high coverages used experimentally, wherein atop CO has a much higher population than observed over Pd(111), confirmed by our DRIFTS spectra for Pd nanoparticles supported on a KIT-6 silica; hence site populations were calculated through a comparison of DFT and spectroscopic data. At high CO coverage (θ = 1), all three adsorbed CO species co-exist on Pd38, and their interdiffusion is thermally feasible at STP. Under such high surface coverages, DFT predicts that bridge-bound CO chains are thermodynamically stable and isoenergetic to an entirely hollow-bound Pd/CO system. The Pd38 nanoparticle undergoes a linear (3.5%), isotropic expansion with increasing CO coverage, accompanied by 63 and 30 cm−1 blue-shifts of the hollow- and linear-bound CO respectively.
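The site-population step above divides each experimental band area by the per-molecule simulated intensity for that site; the numbers below are invented placeholders, not the B3LYP intensities or DRIFTS band areas from the study:

```python
import numpy as np

sites = ["atop", "bridge", "hollow"]
band_area = np.array([12.0, 30.0, 45.0])               # hypothetical DRIFTS band areas
intensity_per_co = np.array([2400.0, 1500.0, 900.0])   # hypothetical km/mol per CO

population = band_area / intensity_per_co              # relative number of CO per site
fractions = population / population.sum()              # fractional site occupations
```

Because atop CO absorbs more strongly per molecule than multiply-coordinated CO, a large atop band can still correspond to a modest atop population; weighting by the simulated intensity is what makes the occupation numbers quantitative.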

Relevance: 100.00%

Abstract:

Among an organization's employees, managers are of particular importance, since their decisions and activities influence the life of the organization directly, and often over the long term. A company that wishes to maintain or improve its market position must place special emphasis on the selection, and subsequently the development, of its managers. In this article the author presents the qualitative characteristics of labour-market demand for managers with qualifications in economics between 2000 and 2009, based on an analysis of job advertisements in the weekly HVG. The author describes the results in detail, interpreting them with the author's own further-developed version of the competence model elaborated by Spencer and colleagues. By way of comparison, alongside the Hungarian results the article also shows how the competence expectations placed on managers evolved in a country with a different economic structure and culture. The results, which are supported by other Hungarian surveys, indicate that the labour-market competence expectations placed on Hungarian managers differ from professional expectations and present a sharply different picture from the results of The Economist's advertisement analysis. _________ The thesis that the most important factor determining the competitiveness of future companies is the quality of human resources has received increasingly more emphasis in the literature on management. The managers of organizations have a key role, since they can directly influence the life of the organization by their decisions and work, often for the long term. Therefore, a company which intends to maintain or improve its market position should place special emphasis on the selection, and later on the development, of its managers.
In the present paper the author presents the characteristic features of job-market demand for managers with qualifications in economics between 2000 and 2009 (on the basis of an analysis of job advertisements published in the economic weekly paper Heti Világgazdaság). The author gives a detailed analysis of the results using the competence model developed by Spencer et al. and further developed by the author. In addition to the Hungarian results, the paper also provides an overview of how managers are selected in a country with a different economic structure and culture. The results – also supported by other surveys conducted in Hungary – demonstrate that the competence expectations of the job market for Hungarian managers fail to meet professional expectations; the picture is sharply different from what the analysis of job advertisements published in The Economist shows, and the competence expectations changed very little over the period under discussion.

Relevance: 100.00%

Abstract:

Objective: The objective of the study is to explore the preferences of gastroenterologists for biosimilar drugs in Crohn's disease and to reveal trade-offs between the perceived risks and benefits of biosimilar drugs. Method: A discrete choice experiment was carried out involving 51 Hungarian gastroenterologists in May 2014. The following attributes were used to describe hypothetical choice sets: 1) type of the treatment (biosimilar/originator); 2) severity of disease; 3) availability of continuous medicine supply; 4) frequency of the efficacy check-ups. A multinomial logit model was used to differentiate between three attitude types: 1) always opting for the originator; 2) willing to consider the biosimilar for biological-naïve patients only; 3) willing to consider biosimilar treatment for both types of patients. A conditional logit model was used to estimate the probabilities of choosing a given profile. Results: Men, senior consultants, those working in an IBD centre and those treating more patients are more likely to be willing to consider the biosimilar for biological-naïve patients only. Treatment type (originator/biosimilar) was the most important determinant of choice for patients already treated with biologicals, and the availability of continuous medicine supply in the case of biological-naïve patients. The probabilities of choosing the biosimilar with all the benefits offered over the originator under current reimbursement conditions are 89% vs 11% for new patients, and 44% vs 56% for patients already treated with a biological. Conclusions: Gastroenterologists were willing to trade between the perceived risks and benefits of biosimilars. Continuous medicine supply would be one of the major benefits of biosimilars. However, the benefits offered in the scenarios do not compensate for the change from the originator to the biosimilar treatment for patients already treated with biologicals.
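The conditional logit model above assigns each hypothetical profile a choice probability via a softmax over estimated utilities; a minimal sketch with invented utility values (not the study's estimated coefficients):

```python
import numpy as np

def choice_probabilities(utilities):
    """Conditional logit: P(i) = exp(V_i) / sum_j exp(V_j)."""
    v = np.asarray(utilities, dtype=float)
    expv = np.exp(v - v.max())       # subtract max for numerical stability
    return expv / expv.sum()

# hypothetical utilities for (biosimilar with benefits, originator)
p = choice_probabilities([1.2, -0.9])
```

A utility difference of this size yields a split of roughly 89% vs 11%, the order of magnitude reported for biological-naïve patients above.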

Relevance: 100.00%

Abstract:

The most fundamental and challenging function of government is the effective and efficient delivery of services to local taxpayers and businesses. Counties, once known as the “dark continent” of American government, have recently become a major player in the provision of services. Population growth and suburbanization have increased service demands, while the counties' role as service provider to incorporated residents has also expanded due to additional federal and state mandates. County governments are under unprecedented pressure and scrutiny to meet citizens' and elected officials' demands for high-quality and equitable delivery of services at the lowest possible cost while contending with anti-tax sentiments, greatly decreased state and federal support, and exceptionally costly and complex health and public safety problems. This study tested the reform government theory proposition that reformed structures of county government positively correlate with efficient service delivery. A county government reform index was developed for this dissertation, comprising form of government, home-rule status, method of election, number of government jurisdictions, and number of elected officials. The county government reform index and a measure of relative structural fragmentation were used to assess their impact on two measures of service output: mean county road pavement condition and county road maintenance expenditures. The study's multi-level design triangulated results from different data sources and methods of analysis. Data were collected from semi-structured interviews of county officials, secondary archival sources, and a survey of 544 elected and appointed officials from Florida's 67 counties. The results of the three sources of data converged in finding that reformed Florida counties are more likely than unreformed counties to provide better road service and to spend less on road expenditures. The same results were found for unfragmented Florida counties.
Because both the county government reform index and the fragmentation variables were specified acknowledging the reform theory as well as elements from the public-choice model, the results help explain contradictory findings in the urban service research. Therefore, as suggested by the corroborated findings of this dissertation, reformed as well as unfragmented counties are better providers of road maintenance service and provide it in a less costly manner. These findings hold although the variables were specified to capture theoretical arguments from the consolidated as well as the public-choice theories, suggesting a way to advance the debate beyond the consolidated-fragmented dichotomy of urban governance.