886 results for the overlapping distribution analysis.


Relevance:

100.00%

Publisher:

Abstract:

Predicting future need for water resources has traditionally been, at best, a crude mixture of art and science. This has prevented the evaluation of water need from being carried out in either a consistent or comprehensive manner. This inconsistent and somewhat arbitrary approach to water resources planning led to well-publicised premature developments in the 1970s and 1980s, but privatisation of the Water Industry, including creation of the Office of Water Services and the National Rivers Authority in 1989, turned the tide of resource planning to the point where funding of schemes and their justification by the Regulators could no longer be assumed. Furthermore, considerable areas of uncertainty were beginning to enter the debate and complicate the assessment. It was also no longer appropriate to consider that contingencies would continue to lie solely on the demand side of the equation. An inability to calculate the balance between supply and demand may mean an inability to meet standards of service or, arguably worse, an excessive provision of water resources and excessive costs to customers. The United Kingdom Water Industry Research Limited (UKWIR) Headroom project in 1998 provided a simple methodology for the calculation of planning margins. This methodology, although well received, was not, however, accepted by the Regulators as a tool sufficient to promote resource development. This thesis begins by considering the history of water resource planning in the UK, moving on to discuss events following privatisation of the water industry post-1985. The mid-section of the research forms the bulk of original work and provides a scoping exercise which reveals a catalogue of uncertainties prevalent within the supply-demand balance. Each of these uncertainties is considered in terms of materiality, scope, and whether it can be quantified within a risk analysis package. Many of the areas of uncertainty identified would merit further research. A workable, yet robust, methodology for evaluating the balance between water resources and water demands by using a spreadsheet-based risk analysis package is presented. The technique involves statistical sampling and simulation such that samples are taken from input distributions on both the supply and demand side of the equation and the imbalance between supply and demand is calculated in the form of an output distribution. The percentiles of the output distribution represent different standards of service to the customer. The model allows dependencies between distributions to be considered, for improved uncertainties to be assessed and for the impact of uncertain solutions to any imbalance to be calculated directly. The method is considered a significant leap forward in the field of water resource planning.
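As an illustration of the sampling-and-simulation technique described above, here is a minimal Monte Carlo sketch in Python; the component names, distributions, and figures are all hypothetical, and the dependencies between distributions that the model supports are omitted.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo samples

# Hypothetical input distributions (figures invented, in Ml/d):
deployable_output = rng.normal(loc=500.0, scale=25.0, size=N)  # supply side
outage_allowance  = rng.normal(loc=20.0,  scale=5.0,  size=N)  # supply side
demand_forecast   = rng.normal(loc=470.0, scale=30.0, size=N)  # demand side

# Supply-demand balance: positive = surplus, negative = deficit.
balance = (deployable_output - outage_allowance) - demand_forecast

# Percentiles of the output distribution map to standards of service:
# e.g. the 5th percentile is the surplus achieved in 95% of outcomes.
for p in (5, 10, 50):
    print(f"{p}th percentile balance: {np.percentile(balance, p):7.1f} Ml/d")
```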

Relevance:

100.00%

Publisher:

Abstract:

Based on a corpus of English, German, and Polish spoken academic discourse, this article analyzes the distribution and function of humor in academic research presentations. The corpus is the result of a European research cooperation project consisting of 300,000 tokens of spoken academic language, focusing on the genres of research presentation, student presentation, and oral examination. The article investigates differences between the German and English research cultures as expressed in the genre of specialist research presentations, and the role of humor as a pragmatic device in their respective contexts. The data are analyzed according to the paradigms of corpus-assisted discourse studies (CADS). The findings show that humor is used in research presentations as an expression of discourse reflexivity. They also reveal a considerable difference in the quantitative distribution of humor in research presentations depending on the educational, linguistic, and cultural background of the presenters, thus confirming the notion of different research cultures. Such research cultures nurture distinct attitudes to genres of academic language: whereas in one of the cultures identified researchers conform to the constraints and structures of the genre, those working in another attempt to subvert them, for example by the application of humor. © 2012 Elsevier B.V.
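As an indication of what a quantitative distribution of humor looks like in a corpus-assisted study, here is a minimal normalized-frequency sketch; the subcorpus names and all counts are invented purely for illustration.

```python
# Humor episodes per 1,000 tokens in each subcorpus (invented data).
subcorpora = {
    "English L1 presenters":    (46, 60_000),  # (episodes, tokens)
    "German L1, English talks": (12, 55_000),
    "German L1, German talks":  (9,  70_000),
}
for name, (episodes, tokens) in subcorpora.items():
    print(f"{name}: {1000 * episodes / tokens:.2f} per 1,000 tokens")
```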

Relevance:

100.00%

Publisher:

Abstract:

The atomic-scale structure of Bioglass and the effect of substituting lithium for sodium within these glasses have been investigated using neutron diffraction and solid-state magic angle spinning (MAS) NMR. Applying an effective isomorphic substitution difference function to the neutron diffraction data has enabled the Na-O and Li-O nearest-neighbour correlations to be isolated from the overlapping Ca-O, O-(P)-O and O-(Si)-O correlations. These results reveal that Na and Li behave in a similar manner within the glassy matrix and do not disrupt the short-range order of the network former. Residual differences are attributed solely to the variation in ionic radius between the two species. Successful simplification of the second difference method has enabled all the nearest-neighbour correlations to be deconvolved. The diffraction data provide the first direct experimental evidence of split Na-O nearest-neighbour correlations in these melt-quench bioactive glasses, and an analogous splitting of the Li-O correlations. The observed correlations are attributed to the metal ions bonded either to bridging or to non-bridging oxygen atoms. 23Na triple-quantum MAS (3QMAS) NMR data corroborate the split Na-O correlations. The structural sites present will be intimately related to the release properties of the glass system in physiological fluids such as plasma and saliva, and hence to the bioactivity of the material. Detailed structural knowledge is therefore a prerequisite for optimizing material design.
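For orientation, here is a schematic first-order isomorphic difference between the two total structure factors, in generic Faber-Ziman-style notation; this is a textbook form, not the paper's exact weighting.

```latex
% Schematic first-order difference; all partial structure factors
% S_ij(Q) not involving the substituted alkali site M cancel under
% the isomorphism assumption.
\Delta F(Q) = F_{\mathrm{Na}}(Q) - F_{\mathrm{Li}}(Q)
  = c_M^{2}\left(b_{\mathrm{Na}}^{2} - b_{\mathrm{Li}}^{2}\right)
      \left[S_{MM}(Q) - 1\right]
  + \sum_{j \neq M} 2\, c_M c_j\, b_j
      \left(b_{\mathrm{Na}} - b_{\mathrm{Li}}\right)
      \left[S_{Mj}(Q) - 1\right]
```

Here the c_i are atomic fractions and the b_i coherent neutron scattering lengths; Fourier transforming ΔF(Q) yields the real-space alkali-oxygen correlations free of the overlapping Ca-O and O-O terms.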

Relevance:

100.00%

Publisher:

Abstract:

Using a modified deprivation (or poverty) function, we theoretically study in this paper the changes in poverty with respect to the 'global' mean and variance of the income distribution, using Indian survey data. We show that when income obeys a log-normal distribution, a rising mean income generally indicates a reduction in poverty, while an increase in the variance of the income distribution increases poverty. This altruistic view for a developing economy, however, is no longer tenable once the poverty index is found to follow a Pareto distribution. Here, although a rising mean income indicates a reduction in poverty, the presence of an inflexion point in the poverty function implies a critical value of the variance below which poverty decreases with increasing variance, while beyond this value poverty undergoes a steep increase followed by a decrease at still higher variance. Identifying this inflexion point as the poverty line, we show that the Pareto poverty function satisfies all three standard axioms of a poverty index [N.C. Kakwani, Econometrica 43 (1980) 437; A.K. Sen, Econometrica 44 (1976) 219], whereas the log-normal distribution falls short of this requisite. Following these results, we make quantitative predictions to correlate a developing economy with a developed one. © 2006 Elsevier B.V. All rights reserved.
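The log-normal signs quoted above can be checked in the simplest special case, the headcount ratio below a poverty line z; the paper itself works with a modified deprivation function, so the following is only an illustration.

```latex
% Headcount poverty under log-normal income with parameters \mu, \sigma:
H(z) = \Phi\!\left( \frac{\ln z - \mu}{\sigma} \right),
\qquad m = e^{\mu + \sigma^{2}/2} \;\;(\text{mean income}).
% Holding the mean m fixed gives \mu = \ln m - \sigma^{2}/2, hence
\frac{\partial H}{\partial \sigma}
  = \phi\!\left( \frac{\ln(z/m)}{\sigma} + \frac{\sigma}{2} \right)
    \left( \frac{1}{2} - \frac{\ln(z/m)}{\sigma^{2}} \right) > 0
  \quad \text{for } z < m
```

So, with the poverty line below the mean, poverty falls as the mean rises and grows as the variance grows, matching the log-normal case described above.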

Relevance:

100.00%

Publisher:

Abstract:

The purpose of this piece is to explain how the trust concept fits the overlapping analysis, presenting an example of why discrete categorisation is often unhelpful in understanding the operation of legal concepts.

Relevance:

100.00%

Publisher:

Abstract:

Background: Electrosurgery units are widely employed in modern surgery. Advances in technology have enhanced the safety of these devices; nevertheless, accidental burns are still regularly reported. This study focuses on possible causes of sacral burns as a complication of the use of electrosurgery. Burns are caused by local densifications of the current, but the actual pathway of current within the patient's body is unknown. Numerical electromagnetic analysis can help in understanding the issue. Methods: To this aim, an accurate heterogeneous model of the human body (including seventy-seven different tissues), electrosurgery electrodes, operating table and mattress was built to resemble a typical surgery condition. The patient lies supine on the mattress with the active electrode placed onto the thorax and the return electrode on his back. Common operating frequencies of electrosurgery units were considered. Finite Difference Time Domain electromagnetic analysis was carried out to compute the spatial distribution of current density within the patient's body. A differential analysis, changing the electrical properties of the operating table from a conductor to an insulator, was also performed. Results: Results revealed that distributed capacitive coupling between the patient's body and the conductive operating table offers an alternative path to the electrosurgery current. The patient's anatomy, the positioning and the different electromagnetic properties of tissues promote a densification of the current at the head and sacral region. In particular, high values of current density were located behind the sacral bone and beneath the skin. This did not occur in the case of a non-conductive operating table. Conclusion: Results of the simulation highlight the role played by capacitive coupling between the return electrode and the conductive operating table. The concentration of current density may result in an undesired rise in temperature, originating burns in body regions far from the electrodes. This outcome is concordant with the type of surgery-related sacral burns reported in the literature. Such burns cannot be immediately detected after surgery, but appear later and can be confused with bedsores. In addition, the dosimetric analysis suggests that reducing the capacitive coupling between the return electrode and the operating table can decrease or avoid this problem. © 2013 Bifulco et al.; licensee BioMed Central Ltd.
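A back-of-envelope sketch makes the mechanism plausible: at electrosurgical frequencies the impedance of even a modest patient-to-table capacitance becomes small. The geometry below (contact area, mattress thickness, permittivity) is entirely hypothetical and unrelated to the paper's FDTD model.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def coupling_impedance(area_m2, gap_m, eps_r, freq_hz):
    """Parallel-plate estimate of patient-to-table capacitive coupling."""
    c = EPS0 * eps_r * area_m2 / gap_m       # capacitance, F
    z = 1.0 / (2.0 * math.pi * freq_hz * c)  # capacitor impedance |Z|, ohm
    return c, z

# Hypothetical geometry: ~0.3 m^2 of torso over a 5 mm mattress (eps_r ~ 3).
for f in (100e3, 500e3, 4e6):  # typical electrosurgical frequencies, Hz
    c, z = coupling_impedance(0.3, 5e-3, 3.0, f)
    print(f"f = {f/1e3:6.0f} kHz: C = {c*1e9:.2f} nF, |Z| = {z:6.0f} ohm")
# |Z| drops from ~1 kohm at 100 kHz to tens of ohms at 4 MHz, so a
# conductive table can carry an appreciable share of the return current.
```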

Relevance:

100.00%

Publisher:

Abstract:

Data fluctuation in multiple measurements of Laser-Induced Breakdown Spectroscopy (LIBS) greatly affects the accuracy of quantitative analysis. A new LIBS quantitative analysis method based on the Robust Least Squares Support Vector Machine (RLS-SVM) regression model is proposed. The usual way to enhance analysis accuracy is to improve the quality and consistency of the emission signal, such as by averaging the spectral signals or standardizing the spectra over a number of laser shots. The proposed method focuses instead on enhancing the robustness of the quantitative analysis regression model. The proposed RLS-SVM regression model originates from the Weighted Least Squares Support Vector Machine (WLS-SVM) but has an improved segmented weighting function and residual error calculation according to the statistical distribution of the measured spectral data. Through the improved segmented weighting function, the information on spectral data within the normal distribution is retained in the regression model, while the information on outliers is restrained or removed. Copper elemental concentration analysis experiments on 16 certified standard brass samples were carried out. The average relative standard deviation obtained from the RLS-SVM model was 3.06% and the root mean square error was 1.537%. The experimental results showed that the proposed method achieved better prediction accuracy and better modeling robustness than quantitative analysis methods based on Partial Least Squares (PLS) regression, the standard Support Vector Machine (SVM) and WLS-SVM. It was also demonstrated that the improved weighting function had better overall performance in model robustness and convergence speed than the four known weighting functions.
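The abstract does not give the exact form of the improved weighting function; for orientation, here is a minimal sketch of the classic Hampel-style segmented weighting used in WLS-SVM-type models, where standardized residuals in the central (normal) region keep full weight and gross outliers are suppressed. The thresholds and the robust scale estimate are conventional choices, not the paper's.

```python
import numpy as np

def segmented_weights(residuals, c1=2.5, c2=3.0, floor=1e-4):
    """Hampel-style segmented weighting of standardized residuals."""
    # Robust scale estimate (MAD); 0.6745 makes it consistent for normals.
    s = np.median(np.abs(residuals - np.median(residuals))) / 0.6745
    r = np.abs(residuals) / s
    w = np.ones_like(r)                      # central region: full weight
    band = (r > c1) & (r <= c2)
    w[band] = (c2 - r[band]) / (c2 - c1)     # transition band: linear decay
    w[r > c2] = floor                        # gross outliers: ~removed
    return w

# In a weighted LS-SVM these weights rescale the per-sample error terms
# on the refit, so outlier shots barely influence the calibration curve.
errors = np.array([0.1, -0.2, 0.05, 3.9, -0.15, 0.12])
print(segmented_weights(errors))  # the 3.9 residual gets the floor weight
```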

Relevance:

100.00%

Publisher:

Abstract:

In the finance literature many economic theories and models have been proposed to explain and estimate the relationship between risk and return. Assuming risk averseness and rational behavior on the part of the investor, models have been developed that are supposed to help in forming efficient portfolios, which maximize the expected rate of return for a given level of risk or, equivalently, minimize risk for a given rate of return. One of the most widely used models for forming these efficient portfolios is Sharpe's Capital Asset Pricing Model (CAPM). In the development of this model it is assumed that investors have homogeneous expectations about the future probability distribution of the rates of return; that is, every investor assumes the same values of the parameters of the probability distribution. Likewise, homogeneity of financial volatility is commonly assumed, where volatility is taken as investment risk, usually measured by the variance of the rates of return. Typically the square root of the variance is used to define financial volatility; furthermore, it is often assumed that the data-generating process is made up of independent and identically distributed random variables. This again implies that financial volatility is measured from homogeneous time series with stationary parameters. In this dissertation, we investigate the assumptions of homogeneity of market agents. We provide evidence of heterogeneity in market participants' information, objectives, and expectations about the parameters of the probability distribution of prices, as given by the differences in the empirical distributions corresponding to different time scales, which in this study are associated with different classes of investors. We also demonstrate that the statistical properties of the underlying data-generating processes, including the volatility in the rates of return, are quite heterogeneous. In other words, we provide empirical evidence against the traditional views about homogeneity using non-parametric wavelet analysis on trading data. The results show heterogeneity of financial volatility at different time scales, and time scale is one of the most important aspects in which trading behavior differs. In fact, we conclude that heterogeneity as posited by the Heterogeneous Markets Hypothesis is the norm and not the exception.
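A minimal sketch of scale-by-scale volatility via a discrete wavelet transform, assuming the PyWavelets package and synthetic returns; the dissertation's non-parametric wavelet analysis of trading data is considerably more involved.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(0)
returns = 0.01 * rng.standard_t(df=4, size=4096)  # stand-in for real returns

# Multi-level DWT: detail coefficients at level j capture fluctuations
# over horizons of roughly 2**j observations.
coeffs = pywt.wavedec(returns, wavelet="haar", level=6)
details = coeffs[1:][::-1]  # reorder from finest (level 1) to coarsest

for j, d in enumerate(details, start=1):
    # Wavelet variance at scale j: mean energy of the detail coefficients.
    print(f"scale 2^{j}: wavelet variance = {np.mean(d**2):.3e}")
# Homogeneous trading would give one self-similar decay pattern across
# scales; systematic departures suggest scale-specific (heterogeneous)
# behaviour of different classes of investors.
```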

Relevance:

100.00%

Publisher:

Abstract:

One of the major problems in the analysis of beams whose Moment of Inertia varies along their length is to find the Fixed End Moments, Stiffness, and Carry-Over Factors. In order to determine Fixed End Moments, it is necessary to consider the non-prismatic member as composed of a large number of small sections with constant Moment of Inertia, and to find the M/EI values for each individual section. This process takes a great deal of time from designers and structural engineers. The object of this thesis is to design a computer program to simplify this repetitive process, obtaining rapidly and effectively the final moments and shears in continuous non-prismatic beams. For this purpose the Column Analogy and the Moment Distribution Methods of Professor Hardy Cross have been utilized as the principles behind the methodical computer solutions. The program has been specifically designed to analyze continuous beams of up to four spans of any length, composed of symmetrical members with rectangular cross sections and with rectilinear variation of the Moment of Inertia. Any load or combination of uniform and concentrated loads can be considered. Finally, sample problems are solved with the new computer program and with traditional methods, to determine the accuracy and applicability of the program.
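A minimal sketch of the underlying numerical idea: discretize the member into constant-EI sections and sum the m*m/EI flexibility terms to recover stiffness and carry-over factors. The function and figures below are illustrative, not the thesis's program.

```python
import numpy as np

def stiffness_and_carryover(L, EI_of_x, n=1000):
    """End stiffness and carry-over factor of a non-prismatic member,
    found by dividing it into n short sections of constant EI and
    summing the m*m/EI flexibility terms (the repetitive hand process
    the thesis automates)."""
    x = (np.arange(n) + 0.5) * L / n   # section midpoints
    dx = L / n
    EI = EI_of_x(x)
    mA = 1.0 - x / L                   # moment diagram for unit moment at A
    mB = x / L                         # moment diagram for unit moment at B
    F = np.array([[np.sum(mA * mA / EI), -np.sum(mA * mB / EI)],
                  [-np.sum(mA * mB / EI), np.sum(mB * mB / EI)]]) * dx
    K = np.linalg.inv(F)               # end-moment stiffness matrix
    return K[0, 0], K[1, 0] / K[0, 0]  # stiffness at A, carry-over A -> B

# Sanity check on a prismatic member: expect k = 4EI/L and C = 0.5.
k, c = stiffness_and_carryover(10.0, lambda x: np.full_like(x, 2.0e4))
print(k, c)  # ~8000.0, ~0.5

# Rectangular section whose depth varies linearly, so I(x) ~ depth(x)**3:
k, c = stiffness_and_carryover(10.0, lambda x: 2.0e4 * (1.0 + x / 10.0) ** 3)
print(k, c)  # both factors shift once the member deepens toward end B
```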

Relevance:

100.00%

Publisher:

Abstract:

The theoretical construct of control has been defined as necessary (Etzioni, 1965), ubiquitous (Vickers, 1967), and on-going (E. Langer, 1983). Empirical measures, however, have not adequately given meaning to this potent construct, especially within complex organizations such as schools. Four stages of theory development and empirical testing of school building managerial control, using principals and teachers working within the nation's fourth-largest district, are presented in this dissertation as follows: (1) a review and synthesis of social science theories of control across the literatures of organizational theory, political science, sociology, psychology, and philosophy; (2) a systematic analysis of school managerial activities performed at the building level within the context of curricular and instructional tasks; (3) the development of a survey questionnaire to measure school building managerial control; and (4) initial tests of construct validity including inter-item reliability statistics, principal components analyses, and multivariate tests of significance. The social science synthesis provided support for four managerial control processes: standards, information, assessment, and incentives. The systematic analysis of school managerial activities led to a further categorization between the structural frequency of behaviors and the discretionary qualities of behaviors across each of the control processes and the curricular and instructional tasks. Teacher survey responses (N=486) showed a significant difference between these two dimensions of control, structural frequency and discretionary qualities, for standards, information, and assessments, but not for incentives. The descriptive model of school managerial control suggests that (1) teachers perceive structural and discretionary managerial behaviors under information and incentives more clearly than activities representing standards or assessments, (2) standards are primarily structural while assessments are primarily qualitative, (3) teacher satisfaction is most closely related to the equitable distribution of incentives, (4) each of the structural managerial behaviors has a qualitative effect on teachers, and (5) certain qualities of managerial behaviors are perceived by teachers as distinctly discretionary, apart from school structure. The variables of teacher tenure and school effectiveness showed significant effects on school managerial control processes, while instructional levels (elementary, junior, and senior) and individual school differences were not found to be significant for the construct of school managerial control.
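Among the inter-item reliability statistics mentioned, Cronbach's alpha is the standard choice; here is a minimal sketch with invented item responses. The dissertation does not specify its exact computation, so this is illustrative only.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)

# Invented 5-point Likert responses from six teachers on four items:
responses = np.array([[4, 5, 4, 4],
                      [2, 2, 3, 2],
                      [5, 4, 4, 5],
                      [3, 3, 2, 3],
                      [4, 4, 5, 4],
                      [1, 2, 1, 2]])
print(f"alpha = {cronbach_alpha(responses):.2f}")  # high for consistent items
```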

Relevance:

100.00%

Publisher:

Abstract:

Wind-induced exposure is one of the major forces shaping the geomorphology and biota in coastal areas. The effect of wave exposure on littoral biota is well known in marine environments (Ekebon et al., 2003; Burrows et al., 2008). In the Cabrera Archipelago National Park, wave exposure has been shown to have an effect on the spatial distribution of different stages of E. marginatus (Alvarez et al., 2010). Standardized average wave exposures during 2008 along the Cabrera Archipelago National Park coastline were calculated to be applied in studies of littoral species distribution within the archipelago. Average wave exposure (or apparent wave power) was calculated for points spaced 50 m apart along the coastline, following the EXA methodology (EXposure estimates for fragmented Archipelagos) (Ekebon et al., 2003). The average wave exposures were standardized from 1 to 100 (minimum and maximum in the area), showing coastal areas with different levels of mean wave exposure during the year. Input wind data (direction and intensity) from 2008 were registered at the Cabrera mooring located north of the Cabrera Archipelago. Data were provided by IMEDEA (CSIC-UIB, TMMOS http://www.imedea.uib-csic.es/tmoos/boyas/). This cartography has been developed under the framework of the EPIMHAR project, funded by the National Parks Network (Spanish Ministry of Environment, Maritime and Rural Affairs, reference: 012/2007). Part of this work has been developed under the research programs funded by the "Fons de Garantia Agrària i Pesquera de les Illes Balears (FOGAIBA)".
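A minimal sketch of the 1-100 standardization step, read as a linear min-max rescaling over the points in the area; the raw exposure values below are invented.

```python
import numpy as np

def standardize_exposure(values, lo=1.0, hi=100.0):
    """Linearly rescale raw average wave-exposure values to [lo, hi]."""
    v = np.asarray(values, dtype=float)
    return lo + (v - v.min()) * (hi - lo) / (v.max() - v.min())

# Hypothetical raw exposure values at four 50 m coastline points:
raw = np.array([1.2e4, 8.7e5, 3.3e5, 5.0e4])
print(standardize_exposure(raw))
# -> 1.0 at the most sheltered point, 100.0 at the most exposed
```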

Relevance:

100.00%

Publisher:

Abstract:

Based on a well-established stratigraphic framework and 47 AMS 14C-dated sediment cores, the distribution of facies types on the NW Iberian margin is analysed in response to the last deglacial sea-level rise, thus providing a case study on the sedimentary evolution of a high-energy, low-accumulation shelf system. Altogether, four main types of sedimentary facies are defined. (1) A gravel-dominated facies occurs mostly as time-transgressive ravinement beds, which initially developed as shoreface and storm deposits in shallow waters on the outer shelf during the last sea-level lowstand; (2) a widespread, time-transgressive mixed siliceous/biogenic-carbonaceous sand facies indicates areas of moderate hydrodynamic regimes, high contribution of reworked shelf material, and fluvial supply to the shelf; (3) a glaucony-containing sand facies in a stationary position on the outer shelf formed mostly during the last deglacial sea-level rise by reworking of older deposits as well as authigenic mineral formation; and (4) a mud facies is mostly restricted to confined Holocene fine-grained depocentres, which are located in a mid-shelf position. The observed spatial and temporal distribution of these facies types on the high-energy, low-accumulation NW Iberian shelf was essentially controlled by the local interplay of sediment supply, shelf morphology, and the strength of the hydrodynamic system. These patterns are in contrast to high-accumulation systems, where extensive sediment supply is the dominant control on the facies distribution. This study emphasises the importance of large-scale erosion and material recycling in the sedimentary buildup during the deglacial drowning of the shelf. The presence of a homogeneous and up to 15 m thick transgressive cover above a lag horizon contradicts the common assumption of sparse and laterally confined sediment accumulation on high-energy shelf systems during deglacial sea-level rise. In contrast to this extensive sand cover, laterally very confined mud depocentres no more than 4 m thick developed during the Holocene sea-level highstand. This restricted formation of fine-grained depocentres was related to the combination of: (1) frequently occurring high-energy hydrodynamic conditions; (2) low overall terrigenous input from the adjacent rivers; and (3) the large distance of the Galicia Mud Belt from its main sediment supplier.

Relevance:

100.00%

Publisher:

Abstract:

Grant support: This study was supported by an award (Ref: WHMSB-AU119) from the Translational Medicine Research Collaboration – a consortium made up of the Universities of Aberdeen, Dundee, Edinburgh and Glasgow, the four associated NHS Health Boards (Grampian, Tayside, Lothian and Greater Glasgow & Clyde), Scottish Enterprise and Wyeth. The funder played no part in the design, execution, analysis or publication of this paper.

Relevance:

100.00%

Publisher:

Abstract:

Background.  Cytomegalovirus (CMV) is a common cause of birth defects and hearing loss in infants and opportunistic infections in the immunocompromised. Previous studies have found higher CMV seroprevalence rates among minorities and among persons with lower socioeconomic status. No studies have investigated the geographic distribution of CMV and its relationship to age, race, and poverty in the community. Methods.  We identified patients from 6 North Carolina counties who were tested in the Duke University Health System for CMV immunoglobulin G. We performed spatial statistical analyses to analyze the distributions of seropositive and seronegative individuals. Results.  Of 1884 subjects, 90% were either white or African American. Cytomegalovirus seropositivity was significantly more common among African Americans (73% vs 42%; odds ratio, 3.31; 95% confidence interval, 2.7-4.1), and this disparity persisted across the life span. We identified clusters of high and low CMV odds, both of which were largely explained by race. Clusters of high CMV odds were found in communities with high proportions of African Americans. Conclusions.  Cytomegalovirus seropositivity is geographically clustered, and its distribution is strongly determined by a community's racial composition. African American communities have high prevalence rates of CMV infection, and there may be a disparate burden of CMV-associated morbidity in these communities.
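The headline odds ratio is a 2x2-table statistic; below is a minimal sketch computing a crude odds ratio with a Wald confidence interval. The counts are invented stand-ins shaped like the reported 73% vs 42% rates, not the study's actual table, so they will not reproduce the published OR of 3.31 exactly.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table:
    a/b = seropositive/seronegative in one group,
    c/d = seropositive/seronegative in the comparison group."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Invented counts shaped like the reported 73% vs 42% rates:
print(odds_ratio_ci(a=511, b=189, c=497, d=687))
```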

Relevance:

100.00%

Publisher:

Abstract:

Dynamic positron emission tomography (PET) imaging can be used to track the distribution of injected radio-labelled molecules over time in vivo. This is a powerful technique, which provides researchers and clinicians with the opportunity to study the status of healthy and pathological tissue by examining how it processes substances of interest. Widely used tracers include 18F-fluorodeoxyglucose, an analog of glucose, which is used as the radiotracer in over ninety percent of PET scans. This radiotracer provides a way of quantifying the distribution of glucose utilisation in vivo. The interpretation of PET time-course data is complicated because the measured signal is a combination of vascular delivery and tissue retention effects. If the arterial time-course is known, the tissue time-course can typically be expressed in terms of a linear convolution between the arterial time-course and the tissue residue function. As the residue represents the amount of tracer remaining in the tissue, it can be thought of as a survival function; such functions have been examined in great detail by the statistics community. Kinetic analysis of PET data is concerned with estimation of the residue and associated functionals such as flow, flux and volume of distribution. This thesis presents a Markov chain formulation of blood-tissue exchange and explores how this relates to established compartmental forms. A nonparametric approach to the estimation of the residue is examined, and the improvement of this model relative to compartmental models is evaluated using simulations and cross-validation techniques. The reference distribution of the test statistics generated in comparing the models is also studied. We explore these models further with simulated studies and an FDG-PET dataset from subjects with gliomas, which has previously been analysed with compartmental modelling. We also consider the performance of a recently proposed mixture modelling technique in this study.
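A minimal numerical sketch of the convolution relation described above, assuming a hypothetical single-exponential (one-compartment) residue; the thesis's point is precisely to relax such compartmental forms.

```python
import numpy as np

# Tissue activity as the convolution of the arterial input with the
# residue (survival) function; the residue here is an invented
# one-compartment form R(t) = K1 * exp(-k2 * t).
dt = 1.0                                    # seconds per sample
t = np.arange(0, 3600.0, dt)                # one-hour time-course
arterial = np.exp(-((t - 60.0) / 30.0)**2)  # invented bolus input, a.u.
K1, k2 = 0.1, 0.01                          # invented rate constants, 1/s
residue = K1 * np.exp(-k2 * t)              # tracer still in tissue at lag t

tissue = np.convolve(arterial, residue)[: t.size] * dt

# Associated functionals: flow-related delivery K1 = R(0), and volume
# of distribution = integral of the residue = K1 / k2 here.
print("R(0) =", residue[0], " V_D ~", residue.sum() * dt)
```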