867 results for Geometry of Fuzzy sets
Abstract:
This paper is a progress report on a research path I first outlined in my contribution to “Words in Context: A Tribute to John Sinclair on his Retirement” (Heffer and Sauntson, 2000). Therefore, I first summarize that paper here, in order to provide the relevant background. The second half of the current paper consists of some further manual analyses, exploring various parameters and procedures that might assist in the design of an automated computational process for the identification of lexical sets. The automation itself is beyond the scope of the current paper.
Abstract:
Two key issues defined the focus of this research into manufacturing plasmid DNA for use in human gene therapy. First, the processing of E. coli bacterial cells to effect the separation of therapeutic plasmid DNA from cellular debris and adventitious material. Second, the affinity purification of the plasmid DNA in a simple one-stage process. The need arises from concerns recently voiced by the FDA about the scalability and reproducibility of current manufacturing processes in meeting the quality criteria of purity, potency, efficacy, and safety for a recombinant drug substance for use in humans. To develop a preliminary purification procedure, an EFD cross-flow micro-filtration module was assessed for its ability to effect the 20-fold concentration, 6-fold diafiltration, and final clarification of the plasmid DNA from the cell lysate derived from a 1 liter E. coli bacterial cell culture. Historically, the employment of cross-flow filtration modules within procedures for harvesting cells from bacterial cultures has failed to reach the standards set by existing continuous centrifuge technologies, frequently resulting in rapid blinding of the membrane with bacterial cells that substantially reduces the permeate flux. The EFD module, containing six helically wound tubular membranes promoting the centrifugal instabilities known as Dean vortices, was challenged with distilled water at Dean numbers between 187Dn and 818Dn and transmembrane pressures (TMP) of 0 to 5 psi. The data demonstrated that the fluid dynamics significantly influenced the permeation rate, displaying a maximum at 227Dn (312 LMH) and a minimum at 818Dn (130 LMH) for a transmembrane pressure of 1 psi. Numerical studies indicated that the initial increase and subsequent decrease resulted from a competition between the centrifugal and viscous forces that create the Dean vortices.
At Dean numbers between 187Dn and 227Dn, the forces combine constructively to increase the apparent strength and influence of the Dean vortices. However, as the Dean number increases above 227Dn, the centrifugal force dominates the viscous forces, compressing the Dean vortices into the membrane walls and reducing their influence on the radial transmembrane pressure, i.e. the permeate flux is reduced. When investigating the action of the Dean vortices in controlling the fouling rate of E. coli bacterial cells, it was demonstrated that the optimum cross-flow rate at which to effect the concentration of a bacterial cell culture was 579Dn at 3 psi TMP, processing in excess of 400 LMH for 20 minutes (i.e., concentrating a 1 L culture to 50 ml in 10 minutes at an average of 450 LMH). The data demonstrated a conflict between the Dean number at which the shear rate could control the cell fouling and the Dean number at which the optimum flux enhancement was found. Hence, the internal geometry of the EFD module was shown to be sub-optimal for this application. At 579Dn and 3 psi TMP, the 6-fold diafiltration was shown to occupy 3.6 minutes of process time, processing at an average flux of 400 LMH. Again at 579Dn and 3 psi TMP, the clarification of the plasmid from the resulting freeze-thaw cell lysate was achieved at 120 LMH, passing 83% (2.5 mg) of the plasmid DNA (6.3 ng μl-1), 10.8 mg of genomic DNA (∼23,000 bp, 36 ng μl-1), and 7.2 mg of cellular proteins (5-100 kDa, 21.4 ng μl-1) into the post-EFD process stream. Hence the EFD module was shown to be effective, achieving the desired objectives in approximately 25 minutes. On the basis of its ability to intercalate into low-molecular-weight dsDNA present in dilute cell lysates, and to be electrophoresed through agarose, the fluorophore PicoGreen was selected for the development of a suitable dsDNA assay.
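The flow behaviour described above is governed by the Dean number, De = Re·√(d/2Rc), for flow in a helically wound tube. A minimal sketch of this standard definition follows, using hypothetical channel dimensions, since the module geometry is not stated in the abstract:

```python
import math

def dean_number(velocity, diameter, coil_radius, kinematic_viscosity):
    """Dean number De = Re * sqrt(d / (2 * Rc)) for flow in a curved tube."""
    reynolds = velocity * diameter / kinematic_viscosity
    return reynolds * math.sqrt(diameter / (2.0 * coil_radius))

# Hypothetical values: 1 m/s flow in a 3 mm tube coiled at 15 mm radius, water at 20 degC
de = dean_number(velocity=1.0, diameter=3e-3, coil_radius=15e-3, kinematic_viscosity=1e-6)
print(round(de))  # ~949 for these illustrative values
```

Raising the cross-flow velocity raises De in direct proportion, which is why the cross-flow rate is the natural operating variable in the experiments above.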
It was assessed for its accuracy and reliability in determining the concentration and identity of DNA present in samples that were electrophoresed through agarose gels. The signal emitted by intercalated PicoGreen was shown to be constant and linear, and the mobility of the PicoGreen-DNA complex was not affected by the intercalation. Concerning the secondary purification procedure, various anion-exchange membranes were assessed for their ability to capture plasmid DNA from the post-EFD process stream. For a commercially available Sartorius Sartobind Q15 membrane, the reduction in the equilibrium binding capacity for ctDNA in buffers of increasing ionic strength demonstrated that DNA was being adsorbed by electrostatic interactions only. However, the problems associated with fluid distribution across the membrane demonstrated that the membrane housing was the predominant cause of the erratic breakthrough curves. Consequently, this would need to be rectified before such a membrane could be integrated into the current system, or indeed be scaled beyond laboratory scale. However, when challenged with the process material, the data showed that considerable quantities of protein (1150 μg) were adsorbed preferentially to the plasmid DNA (44 μg). This was also shown for derivatised Pall Gelman UltraBind US450 membranes that had been functionalised with poly-L-lysine and polyethyleneimine ligands of varying molecular weight. Hence the anion-exchange membranes were shown to be ineffective in capturing plasmid DNA from the process stream. Finally, work was performed to integrate a sequence-specific DNA-binding protein into a single-stage DNA chromatography process, isolating plasmid DNA from E. coli cells whilst minimising the contamination from genomic DNA and cellular protein.
Preliminary work demonstrated that the fusion protein was capable of isolating pUC19 DNA into which the recognition sequence for the fusion protein had been inserted (pTS DNA), even in the presence of the conditioned process material. Although the pTS recognition sequence differs from native pUC19 sequences by only 2 bp, the fusion protein was shown to act as a highly selective affinity ligand for pTS DNA alone. Subsequently, the process was scaled up 25-fold and positioned directly after the EFD system. In conclusion, the integration of the EFD micro-filtration system and the zinc-finger affinity purification technique resulted in approximately 1 mg of plasmid DNA being purified from 1 L of E. coli culture in a simple two-stage process, with complete removal of genomic DNA and 96.7% of cellular protein in less than 1 hour of process time.
Abstract:
Retrospective clinical data presents many challenges for data mining and machine learning. The transcription of patient records from paper charts and subsequent manipulation of data often results in high volumes of noise as well as a loss of other important information. In addition, such datasets often fail to represent expert medical knowledge and reasoning in any explicit manner. In this research we describe applying data mining methods to retrospective clinical data to build a prediction model for asthma exacerbation severity for pediatric patients in the emergency department. Difficulties in building such a model forced us to investigate alternative strategies for analyzing and processing retrospective data. This paper describes this process together with an approach to mining retrospective clinical data by incorporating formalized external expert knowledge (secondary knowledge sources) into the classification task. This knowledge is used to partition the data into a number of coherent sets, where each set is explicitly described in terms of the secondary knowledge source. Instances from each set are then classified in a manner appropriate for the characteristics of the particular set. We present our methodology and outline a set of experimental results that demonstrate some advantages and some limitations of our approach. © 2008 Springer-Verlag Berlin Heidelberg.
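The partition-then-classify strategy described above can be sketched in a few lines. This is a toy illustration only: the severity rule standing in for the formalized expert knowledge, and the per-set majority classifier, are hypothetical stand-ins, not the authors' implementation.

```python
from collections import Counter

def partition(instances, knowledge_rule):
    """Split instances into coherent sets using a secondary knowledge source."""
    groups = {}
    for inst in instances:
        groups.setdefault(knowledge_rule(inst), []).append(inst)
    return groups

def majority_label(instances):
    """A deliberately simple per-set classifier: predict the most common label."""
    return Counter(inst["label"] for inst in instances).most_common(1)[0][0]

# Hypothetical guideline-style rule partitioning by respiratory rate
rule = lambda inst: "high_risk" if inst["resp_rate"] > 30 else "low_risk"

data = [
    {"resp_rate": 35, "label": "severe"},
    {"resp_rate": 40, "label": "severe"},
    {"resp_rate": 18, "label": "mild"},
    {"resp_rate": 22, "label": "mild"},
    {"resp_rate": 33, "label": "moderate"},
]

models = {name: majority_label(group) for name, group in partition(data, rule).items()}
print(models)
```

Each set gets its own classifier, so a prediction is always explainable in terms of the knowledge-defined set the instance fell into.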
Abstract:
In the bulge test, a sheet metal specimen is clamped over a circular hole in a die and formed into a bulge by hydraulic pressure on one side of the specimen. As the unsupported part of the specimen is deformed in this way, its area is increased; in other words, the material is generally stretched and its thickness generally decreased. The stresses causing this stretching action are the membrane stresses in the shell generated by the hydraulic pressure, in the same way as the rubber in a toy balloon is stretched by the membrane stresses caused by the air inside it. The bulge test is a widely used sheet metal test for determining the "formability" of sheet materials. Research on this forming process (2)-(15)* has hitherto been almost exclusively confined to predicting the behaviour of the bulged specimen through the constitutive equations (stresses and strains in relation to displacements and shapes) and empirical work-hardening characteristics of the material as determined in the tension test. In the present study the approach is reversed; the stresses and strains in the specimen are measured and determined from the geometry of the deformed shell. Thus, the bulge test can be used for determining the stress-strain relationship in the material under actual conditions in sheet metal forming processes. When sheet materials are formed by fluid pressure, the workpiece assumes an approximately spherical shape. The exact nature and magnitude of the deviation from the perfect sphere can be defined and measured by an index called prolateness. The distribution of prolateness throughout the workpiece at any particular stage of the forming process is of fundamental significance, because it determines the variation of the stress ratio on which the mode of deformation depends. It is found that, before the process becomes unstable in sheet metal, the workpiece is exactly spherical only at the pole and at an annular ring.
Between the pole and this annular ring the workpiece is more pointed than a sphere, and outside this ring it is flatter than a sphere. In the forming of sheet materials, the stresses, and hence the incremental strains, are closely related to the curvatures of the workpiece. This relationship between geometry and state of stress can be formulated quantitatively through prolateness. The determination of the magnitudes of prolateness, however, requires special techniques. The success of the experimental work is due to the technique of measuring the profile inclination of the meridional section very accurately. A travelling microscope, workshop protractor and surface plate are used for measurements of circumferential and meridional tangential strains. The curvatures can be calculated from geometry. If, however, the shape of the workpiece is expressed in terms of the current radial (r) and axial (L) coordinates, it is very difficult to calculate the curvatures to an adequate degree of accuracy, owing to the double differentiation involved. In this project, a first differentiation is, in effect, by-passed by measuring the profile inclination directly, and the second differentiation is performed in a round-about way, as explained in later chapters. The variations of the stresses in the workpiece thus observed have not, to the knowledge of the author, been reported experimentally. The static strength of shells to withstand fluid pressure and their buckling strength under concentrated loads both depend on the distribution of the thickness. Thickness distribution can be controlled to a limited extent by changing the work-hardening characteristics of the work material and by imposing constraints. A technique is provided in this thesis for determining accurately the stress distribution, on which the strains associated with thinning depend.
Whether a problem of controlled thickness distribution is tackled by theory, by experiments, or by both combined, the analysis in this thesis supplies the theoretical framework and some useful experimental techniques for research applied to particular problems. The improvement of formability by allowing draw-in can also be analysed with the same theoretical and experimental techniques. Results on stress-strain relationships are usually represented by single stress-strain curves plotted either between one stress and one strain (as in the tension or compression tests) or between the effective stress and effective strain, as in tests on tubular specimens under combined tension, torsion and internal pressure. In this study, the triaxial stresses and strains are plotted simultaneously in triangular coordinates. Thus, both stress and strain are represented by vectors, and the relationship between them by the relationship between two vector functions. From the results so obtained, conclusions are drawn on both the behaviour and the properties of the material in the bulge test. The stress ratios are generally equal to the strain-rate ratios (stress vectors collinear with incremental strain vectors), and the work-hardening characteristics, which apply only to the particular strain paths, are deduced. Plastic instability of the material is generally considered to have been reached when the oil pressure has attained its maximum value, so that further deformation occurs under a constant or lower pressure. It is found that the instability regime of deformation has already begun long before the maximum pressure is attained. Thus, a new concept of instability is proposed; under this criterion, instability can occur for any type of pressure growth curve.
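Plotting triaxial quantities in triangular coordinates amounts to mapping three components onto a single 2-D point via barycentric coordinates. A minimal sketch of the standard conversion (not specific to the apparatus in this thesis):

```python
import math

def ternary_point(a, b, c):
    """Map three non-negative components to 2-D triangular (barycentric) coordinates.
    Components are normalised so the point lies inside a unit-side triangle with
    vertices at (0, 0), (1, 0) and (0.5, sqrt(3)/2)."""
    total = a + b + c
    a, b, c = a / total, b / total, c / total
    x = 0.5 * (2.0 * b + c)        # horizontal position
    y = (math.sqrt(3) / 2.0) * c   # vertical position
    return x, y

# Equal triaxial components map to the centroid of the triangle
x, y = ternary_point(1.0, 1.0, 1.0)
print(x, y)
```

A stress state and an incremental strain state can each be mapped this way, so collinearity of the two vectors, as reported above, shows up directly in such a plot.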
Abstract:
Changes in modern structural design have created a demand for products which are light but possess high strength. The objective is a reduction in fuel consumption and in the weight of materials, to satisfy both economic and environmental criteria. Cold roll forming has the potential to fulfil this requirement. The bending process is controlled by the shape of the profile machined on the periphery of the rolls. A CNC lathe can machine complicated profiles to a high standard of precision, but the expertise of a numerical control programmer is required. A computer program was developed during this project, using the expert system concept, to calculate tool paths and consequently to expedite the procurement of the machine control tapes whilst removing the need for a skilled programmer. Codifying the expertise of a human and encapsulating that knowledge within a computer memory removes the dependency on highly trained people, whose services can be costly, inconsistent and unreliable. A successful cold roll forming operation, where the product is geometrically correct and free from visual defects, is not easy to attain. The geometry of the sheet after travelling through the rolling mill depends on the residual strains generated by the elastic-plastic deformation. Accurate evaluation of the residual strains can provide the basis for predicting the geometry of the section. A study of geometric and material non-linearity, yield criteria, material hardening and stress-strain relationships was undertaken in this research project. The finite element method was chosen to provide a mathematical model of the bending process and, to ensure efficient manipulation of the large stiffness matrices, the frontal solution was applied. A series of experimental investigations provided data for comparison with corresponding values obtained from the theoretical modelling.
A computer simulation, capable of predicting that a design will be satisfactory prior to the manufacture of the rolls, would allow effort to be concentrated into devising an optimum design where costs are minimised.
Abstract:
The most perfectly structured metal surface observed in practice is that of a field-evaporated field-ion microscope specimen. This surface has been characterised by adopting various optical analogue techniques. Hence a relationship has been determined between the structure of a single plane on the surface of a field-ion emitter and the geometry of a binary zone plate. By relating the known focussing properties of such a zone plate to those obtained from the projected images of such planes in a field-ion micrograph, it is possible to extract new information regarding the local magnification of the image. Further to this, it has been shown that the entire system of planes comprising the field-ion imaging surface may be regarded as a moire pattern formed between overlapping zone plates. The properties of such moire zone plates are first established in an analysis of the moire pattern formed between zone plates on a flat surface. When these ideas are applied to the field-ion image it becomes possible to deduce further information regarding the precise topography of the emitter. It has also become possible to simulate differently projected field-ion images by overlapping suitably aberrated zone plates. Low-energy ion bombardment is an essential preliminary to much surface research as a means of producing chemically clean surfaces. Hence it is important to know the nature and distribution of the resultant lattice damage, and the extent to which it may be removed by annealing. The field-ion microscope has been used to investigate such damage because its characterisation lies on the atomic scale. The present study is concerned with the in situ sputtering of tungsten emitters using helium, neon, argon and xenon ions with energies in the range 100 eV to 1 keV, together with observations of the effect of annealing. The relevance of these results to surface cleaning schedules is discussed.
Abstract:
The decomposition of drugs in the solid state has been studied using aspirin and salsalate as models. The feasibility of using suspension systems for predicting the stability of these drugs in the solid state has been investigated. It has been found that such systems are inappropriate for defining the effect of excipients on the decomposition of the active drug, owing to changes in the degradation pathway. Using a high-performance liquid chromatographic method, magnesium stearate was shown to induce the formation of potentially immunogenic products in aspirin powders. These products, which included salicylsalicylic acid and acetylsalicylsalicylic acid, were not detected in aspirin suspensions which had undergone the same extent of decomposition. By studying the effect of pH and of added excipients on the rate of decomposition of aspirin in suspension systems, it has been shown that excipients such as magnesium stearate containing magnesium oxide most probably enhance the decomposition of both aspirin and salsalate by alkalinising the aqueous phase. In the solid state, pH effects produced by excipients appear to be relatively unimportant. Evidence is presented to suggest that the critical parameter is a depression in melting point induced by the added excipient. Microscopical examination in fact showed the formation of clear liquid layers in aspirin samples containing added magnesium stearate but not in control samples. Kinetic equations which take into account both the diffusive barrier presented by the liquid films and the geometry of the aspirin crystals were developed. Fitting of the experimental data to these equations showed good agreement with the postulated theory. Monitoring of weight loss during the decomposition of aspirin revealed that, in the solid systems studied, where the bulk of the decomposition product sublimes, it is possible to estimate the extent of degradation from the residual weight, provided the initial weight is known.
The corollary is that in such open systems, monitoring of decomposition products is inadequate for assessing the extent of decomposition. In addition to the magnesium stearate-aspirin system, mepyramine maleate-aspirin mixtures were used to model interactive systems. Work carried out in an attempt to stabilise such systems included microencapsulation and film coating. The protection obtained was dependent on the interactive species used. Gelatin, for example, appeared to stabilise aspirin against the adverse effects of magnesium stearate but increased its decomposition in the presence of mepyramine maleate.
Abstract:
The mechanism of "Helical Interference" in milled slots is examined and a coherent theory for the geometry of such surfaces is presented. An examination of the relevant literature shows a fragmented approach to the problem owing to its normally destructive nature, so a complete analysis is developed for slots of constant lead, thus giving a united and exact theory for many different setting parameters and a range of cutter shapes. For the first time, a theory is developed to explain the "Interference Surface" generated in variable lead slots for cylindrical work and attention is drawn to other practical surfaces, such as cones, where variable leads are encountered. Although generally outside the scope of this work, an introductory analysis of these cases is considered in order to develop the cylindrical theory. Special emphasis is laid upon practical areas where the interference mechanism can be used constructively and its application as the rake face of a cutting tool is discussed. A theory of rake angle for such cutting tools is given for commonly used planes, and relative variations in calculated rake angle between planes are examined. Practical tests are conducted to validate both constant lead and variable lead theories and some design improvements to the conventional dividing head are suggested in order to manufacture variable lead workpieces, by use of a "superposed" rotation. A prototype machine is manufactured and its kinematic principle given for both linear and non-linearly varying superposed rotations. Practical workpieces of the former type are manufactured and compared with analytical predictions, while theoretical curves are generated for non-linear workpieces and then compared with those of linear geometry. Finally, suggestions are made for the application of these principles to the manufacture of spiral bevel gears, using the "Interference Surface" along a cone as the tooth form.
Abstract:
The research concerns the development and application of an analytical computer program, SAFE-ROC, that models the material behaviour and structural behaviour of a slender reinforced concrete column that is part of an overall structure and is subjected to elevated temperatures as a result of exposure to fire. The analysis approach used in SAFE-ROC is non-linear. Computer calculations are used that take account of restraint and continuity, and the interaction of the column with the surrounding structure during the fire. Within a given time step an iterative approach is used to find a deformed shape for the column which results in equilibrium between the forces associated with the external loads and the internal stresses and degradation. Non-linear geometric effects are taken into account by updating the geometry of the structure during deformation. The structural response program SAFE-ROC includes a total strain model which takes account of the compatibility of strain due to temperature and loading. The total strain model represents a constitutive law that governs the material behaviour of concrete and steel. The material behaviour models employed for concrete and steel take account of the dimensional changes caused by temperature differentials and the changes in material mechanical properties with changes in temperature. Non-linear stress-strain laws are used that take account of loading to a strain greater than that corresponding to the peak stress of the concrete stress-strain relation, and model the inelastic deformation associated with unloading of the steel stress-strain relation. The cross-section temperatures caused by the fire environment are obtained by a preceding non-linear thermal analysis using the computer program FIRES-T.
Abstract:
The work described in this thesis is an attempt to elucidate the relationships between the pore system and a number of engineering properties of hardened cement paste, particularly tensile strength and resistance to carbonation and ionic penetration. By examining aspects such as the rate of carbonation, the pore size distribution, the concentration of ions in the pore solution and the phase composition of cement pastes, relationships between the pore system (pores and pore solution) and the resistance to carbonation were investigated. The study was carried out in two parts. First, cement pastes with different pore systems were compared; second, comparisons were made between the pore systems of cement pastes with different degrees of carbonation. Relationships between the pore structure and ionic penetration were studied by comparing kinetic data relating to the diffusion of various ions in cement pastes with different pore systems. Diffusion coefficients and activation energies for the diffusion of Cl- and Na+ ions in carbonated and non-carbonated cement pastes were determined by a quasi-steady state technique. The effect of pore geometry on ionic diffusion was studied by comparing the mechanisms of ionic diffusion for ions with different radii. In order to investigate the possible relationship between tensile strength and macroporosity, cement paste specimens with cross-sectional areas less than 1 mm2 were produced so that the chance of a macropore existing within them was low. The tensile strengths of such specimens were then compared with those of larger specimens.
Abstract:
The thesis provides a comparative study of both the sedimentology and diagenesis of Lower Permian (Rotliegend) strata, onshore and offshore U.K. (Southern North Sea). Onshore formations studied include the Bridgnorth, Penrith and Hopeman Sandstone, and are dominated by aeolian facies with lesser amounts of interbedded fluvial sediments. Aeolian and fluvial strata in onshore basins typically grade laterally into alluvial fan breccias at basin margins. Onshore basins represent proximal examples of Rotliegend desert sediments. The Leman Sandstone Formation of the Ravenspurn area in the Southern North Sea displays a variety of facies indicative of a distal sedimentological setting, with aeolian, fluvial, sabkha, and playa lake sediments all present. The "sheet-like" geometry of stratigraphical units within the Leman Sandstone and the alternation of fluvial and aeolian deposition were climatically controlled. Major first-order bounding surfaces are laterally extensive and were produced by lacustrine transgression and regression from the north-west. Diagenesis within Permian strata was studied using standard petrographic microscopy, scanning electron microscopy, cold cathodoluminescence, X-ray diffraction clay analysis, X-ray fluorescence spectroscopy, fluid inclusion microthermometry, and K-Ar dating of illites. The diagenesis of Permian sediments within onshore basins is remarkably similar, and a paragenetic sequence of early haematite, illitic clays, feldspar, kaolinite, quartz and late calcite is observed. In the Leman Sandstone Formation, authigenic mineralogy is complex and includes early quartz, sulphates and dolomite, chlorite, kaolinite, late quartz, illite and siderite. Primary lithological variation, facies type, and the interdigitation and location of facies within a basin are important initial controls upon diagenesis.
Subsequently, burial history, structure, the timing of gas emplacement, and the nature of sediments within underlying formations may also exercise significant controls upon diagenesis within Rotliegend strata.
Abstract:
The present thesis investigates mode-related aspects of biology lecture discourse and attempts to identify the position of this variety along the continuum from spontaneous spoken to planned written language. Nine lectures (43,000 words), consisting of three sets of three lectures each given by three lecturers at Aston University, make up the corpus. The indeterminacy of the results obtained from the investigation of grammatical complexity, as measured in subordination, motivates the need to take the analysis beyond sentence level to the study of mode-related aspects in the use of sentence-initial connectives, sub-topic shifting and paraphrase. It is found that biology lecture discourse combines features typical of speech and writing at sentence as well as discourse level: thus, subordination is used more than co-ordination, but sentences of one degree of complexity are favoured; some sentence-initial connectives are found only in uses typical of spoken language, but sub-topic shift signalling (generally introduced by a connective), typical of planned written language, is a major feature of the lectures; syntactic and lexical revision and repetition, and interrupted structures, are found in the sub-topic shift signalling utterance and paraphrase, but the text is also amenable to analysis into sentence-like units. On the other hand, it is also found that: (1) while there are some differences in the use of a given feature, inter-speaker variation is on the whole not significant; (2) mode-related aspects are often motivated by the didactic function of the variety; and (3) the structuring of the text follows a sequencing whose boundaries are marked by sub-topic shifting and the summary paraphrase. This study enables us to draw four theoretical conclusions: (1) mode-related aspects cannot be approached as a simple dichotomy, since a combination of aspects of both speech and writing is found in a given feature.
It is necessary to go to the level of textual features to identify mode-related aspects; (2) homogeneity is dominant in this sample of lectures, which suggests a high level of standardization in this variety; (3) the didactic function of the variety is manifested in some mode-related aspects; (4) the features studied play a role in the structuring of the text.
Abstract:
Quantitative structure-activity relationship (QSAR) analysis is a cornerstone of modern informatics. Predictive computational models of peptide-major histocompatibility complex (MHC) binding affinity based on QSAR technology have now become important components of modern computational immunovaccinology. Historically, such approaches have been built around semiqualitative classification methods, but these are now giving way to quantitative regression methods. We review three methods. The first two are a 2D-QSAR additive partial least squares (PLS) method and a 3D-QSAR comparative molecular similarity index analysis (CoMSIA) method, which can identify the sequence dependence of peptide-binding specificity for various class I MHC alleles from the reported binding affinities (IC50) of peptide sets. The third is an iterative self-consistent (ISC) PLS-based additive method, a recently developed extension to the additive method for the affinity prediction of class II peptides. The QSAR methods presented here have established themselves as immunoinformatic techniques complementary to existing methodology, useful in the quantitative prediction of binding affinity: current methods for the in silico identification of T-cell epitopes (which form the basis of many vaccines, diagnostics, and reagents) rely on the accurate computational prediction of peptide-MHC affinity. We have reviewed various human and mouse class I and class II allele models. Studied alleles comprise HLA-A*0101, HLA-A*0201, HLA-A*0202, HLA-A*0203, HLA-A*0206, HLA-A*0301, HLA-A*1101, HLA-A*3101, HLA-A*6801, HLA-A*6802, HLA-B*3501, H2-K(k), H2-K(b), H2-D(b), HLA-DRB1*0101, HLA-DRB1*0401, HLA-DRB1*0701, I-A(b), I-A(d), I-A(k), I-A(s), I-E(d), and I-E(k). In this chapter we present a step-by-step guide to building such models and assessing their reliability; the resulting models represent an advance on existing methods. The peptides used in this study are available from the AntiJen database (http://www.jenner.ac.uk/AntiJen).
The PLS method is available commercially in the SYBYL molecular modeling software package. The resulting models, which can be used for accurate T-cell epitope prediction, will be made freely available online at the URL http://www.jenner.ac.uk/MHCPred.
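The additive (2D-QSAR) method reviewed above assumes that binding affinity decomposes into a sum of per-position amino-acid contributions plus a constant. A toy sketch with hypothetical contribution values (in practice the coefficients are fitted by PLS from measured IC50 data):

```python
# Hypothetical per-position contribution table (pIC50 units); real values come from PLS fitting
contributions = {
    (1, "Y"): 0.8, (1, "F"): 0.5,
    (2, "L"): 1.1, (2, "M"): 0.9,
    (3, "V"): 0.3, (3, "A"): 0.1,
}
BASELINE = 5.0  # hypothetical constant term of the additive model

def predict_affinity(peptide):
    """Additive model: predicted pIC50 = constant + sum of position contributions."""
    return BASELINE + sum(contributions.get((i + 1, aa), 0.0)
                          for i, aa in enumerate(peptide))

print(predict_affinity("YLV"))  # baseline 5.0 + 0.8 + 1.1 + 0.3
```

Roughly speaking, the ISC extension handles the variable binding register of class II peptides by iterating such a fit over candidate peptide alignments until the contribution table becomes self-consistent.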
Abstract:
Recently, we have developed the hierarchical Generative Topographic Mapping (HGTM), an interactive method for visualization of large high-dimensional real-valued data sets. In this paper, we propose a more general visualization system by extending HGTM in three ways, which allows the user to visualize a wider range of data sets and better support the model development process. 1) We integrate HGTM with noise models from the exponential family of distributions. The basic building block is the Latent Trait Model (LTM). This enables us to visualize data of inherently discrete nature, e.g., collections of documents, in a hierarchical manner. 2) We give the user a choice of initializing the child plots of the current plot in either interactive, or automatic mode. In the interactive mode, the user selects "regions of interest," whereas in the automatic mode, an unsupervised minimum message length (MML)-inspired construction of a mixture of LTMs is employed. The unsupervised construction is particularly useful when high-level plots are covered with dense clusters of highly overlapping data projections, making it difficult to use the interactive mode. Such a situation often arises when visualizing large data sets. 3) We derive general formulas for magnification factors in latent trait models. Magnification factors are a useful tool to improve our understanding of the visualization plots, since they can highlight the boundaries between data clusters. We illustrate our approach on a toy example and evaluate it on three more complex real data sets. © 2005 IEEE.
Abstract:
Health care organizations must continuously improve their productivity to sustain long-term growth and profitability. Sustainable productivity performance is mostly assumed to be a natural outcome of successful health care management. Data envelopment analysis (DEA) is a popular mathematical programming method for comparing the inputs and outputs of a set of homogeneous decision-making units (DMUs) by evaluating their relative efficiency. The Malmquist productivity index (MPI) is widely used for productivity analysis, relying on the construction of a best-practice frontier and the calculation of the relative performance of a DMU for different time periods. Conventional DEA requires accurate and crisp data to calculate the MPI. However, real-world data are often imprecise and vague. In this study, the authors propose a novel productivity measurement approach in fuzzy environments with MPI. An application of the proposed approach in health care is presented to demonstrate the simplicity and efficacy of the procedures and algorithms in a hospital efficiency study conducted for a State Office of Inspector General in the United States. © 2012, IGI Global.
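In the single-input, single-output special case the DEA distance functions reduce to productivity ratios against each period's best practice, and the MPI takes its usual geometric-mean form. A minimal sketch under that simplifying assumption (general multi-input DEA requires solving linear programs, omitted here; the hospital figures are invented):

```python
def efficiency(dmu, frontier_dmus):
    """Output/input productivity of `dmu` relative to best practice in `frontier_dmus`."""
    best = max(out / inp for inp, out in frontier_dmus)
    inp, out = dmu
    return (out / inp) / best

def malmquist(dmu_t, dmu_t1, frontier_t, frontier_t1):
    """Geometric-mean Malmquist productivity index between periods t and t+1."""
    d_t_t = efficiency(dmu_t, frontier_t)      # period-t unit vs period-t frontier
    d_t_t1 = efficiency(dmu_t1, frontier_t)    # period-t+1 unit vs period-t frontier
    d_t1_t = efficiency(dmu_t, frontier_t1)    # period-t unit vs period-t+1 frontier
    d_t1_t1 = efficiency(dmu_t1, frontier_t1)  # period-t+1 unit vs period-t+1 frontier
    return ((d_t_t1 / d_t_t) * (d_t1_t1 / d_t1_t)) ** 0.5

# Invented hospital data: (input, output) pairs for two periods
period_t = [(10, 5), (8, 6), (12, 6)]
period_t1 = [(10, 6), (8, 7), (12, 7)]
mpi = malmquist(period_t[0], period_t1[0], period_t, period_t1)
print(mpi > 1)  # MPI > 1 indicates productivity growth
```

The fuzzy extension proposed in the paper replaces the crisp inputs and outputs with fuzzy numbers; the crisp sketch above only fixes the mechanics of the index itself.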