931 results for Polyhedral sets
Abstract:
Minimal perfect hash functions are used for memory efficient storage and fast retrieval of items from static sets. We present an infinite family of efficient and practical algorithms for generating order preserving minimal perfect hash functions. We show that almost all members of the family construct space and time optimal order preserving minimal perfect hash functions, and we identify the one with minimum constants. Members of the family generate a hash function in two steps. First a special kind of function into an r-graph is computed probabilistically. Then this function is refined deterministically to a minimal perfect hash function. We give strong theoretical evidence that the first step uses linear random time. The second step runs in linear deterministic time. The family not only has theoretical importance, but also offers the fastest known method for generating perfect hash functions.
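The two-step construction can be sketched as follows (a hedged illustration in the spirit of this family of algorithms, not the authors' exact code; the hash functions and the vertex-to-key ratio are assumptions): keys become edges of a random 2-graph, the graph is regenerated until it is acyclic (the probabilistic step), and vertex labels g are then fixed by a traversal so that h(key) = (g[h1(key)] + g[h2(key)]) mod n returns each key's rank (the deterministic step).

```python
import random

def build_opmphf(keys, ratio=2.1, max_tries=1000, seed=0):
    # Step 1 (probabilistic): map each key to an edge of a random
    # 2-graph on m ~ ratio*n vertices; regenerate until acyclic.
    n = len(keys)
    m = max(int(ratio * n), n + 2)
    rnd = random.Random(seed)
    for _ in range(max_tries):
        s1, s2 = rnd.random(), rnd.random()
        h1 = lambda k, s=s1: hash((s, k)) % m
        h2 = lambda k, s=s2: hash((s, k)) % m
        edges = [(h1(k), h2(k)) for k in keys]
        parent = list(range(m))          # union-find for cycle detection
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        acyclic = True
        for u, v in edges:
            ru, rv = find(u), find(v)
            if ru == rv:                 # cycle or self-loop: new hashes
                acyclic = False
                break
            parent[ru] = rv
        if acyclic:
            break
    else:
        raise RuntimeError("could not find an acyclic 2-graph")
    # Step 2 (deterministic): traverse each tree and fix vertex labels
    # g so that every edge i satisfies (g[u] + g[v]) mod n == i.
    adj = [[] for _ in range(m)]
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    g = [0] * m
    seen = [False] * m
    for root in range(m):
        if seen[root] or not adj[root]:
            continue
        seen[root] = True
        stack = [root]
        while stack:
            u = stack.pop()
            for v, i in adj[u]:
                if not seen[v]:
                    seen[v] = True
                    g[v] = (i - g[u]) % n
                    stack.append(v)
    # order preserving: the i-th key hashes to exactly i
    return lambda k: (g[h1(k)] + g[h2(k)]) % n
```

Because the graph is a forest, the traversal visits each edge exactly once, so the label assignment always succeeds; only the graph-generation step needs retries.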
Abstract:
Test templates and a test template framework are introduced as useful concepts in specification-based testing. The framework can be defined using any model-based specification notation and used to derive tests from model-based specifications; in this paper, it is demonstrated using the Z notation. The framework formally defines test data sets and their relation to the operations in a specification and to other test data sets, providing structure to the testing process. Flexibility is preserved, so that many testing strategies can be used. Important application areas of the framework are discussed, including refinement of test data, regression testing, and test oracles.
Abstract:
Aim: To look at the characteristics of Postgraduate Hospital Educational Environment Measure (PHEEM) using data from the UK, Brazil, Chile and the Netherlands, and to examine the reliability and characteristics of PHEEM, especially how the three PHEEM subscales fitted with factors derived statistically from the data sets. Methods: Statistical analysis of PHEEM scores from 1563 sets of data, using reliability analysis, exploratory factor analysis and correlations of factors derived with the three defined PHEEM subscales. Results: PHEEM was very reliable with an overall Cronbach's alpha of 0.928. Three factors were derived by exploratory factor analysis. Factor One correlated most strongly with the teaching subscale (R=0.802), Factor Two correlated most strongly with the role autonomy subscale (R=0.623) and Factor Three correlated most strongly with the social support subscale (R=0.538). Conclusions: PHEEM is a multi-dimensional instrument. Overall, it is very reliable. There is a good fit of the three defined subscales, derived by qualitative methods, with the three principal factors derived from the data by exploratory factor analysis.
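As a quick illustration of the reliability statistic reported above, Cronbach's alpha can be computed from a respondents-by-items score matrix (a generic sketch with assumed variable names, not the study's analysis code):

```python
import numpy as np

def cronbach_alpha(scores):
    # scores: (n_respondents, n_items) matrix.
    # alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

Perfectly consistent items yield alpha = 1; weaker inter-item correlation pulls the value down, so the 0.928 above indicates high internal consistency.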
Abstract:
In this paper we consider the problem of providing standard errors of the component means in normal mixture models fitted to univariate or multivariate data by maximum likelihood via the EM algorithm. Two methods of estimation of the standard errors are considered: the standard information-based method and the computationally intensive bootstrap method. They are compared empirically by their application to three real data sets and by a small-scale Monte Carlo experiment.
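The bootstrap method can be sketched for a univariate two-component normal mixture (an illustrative reimplementation under simplifying assumptions, such as a fixed iteration count and percentile initialization, not the paper's code): resample the data with replacement, refit by EM, and report the standard deviation of the fitted component means across replicates.

```python
import numpy as np

def em_two_normals(x, iters=200):
    # Minimal EM for a two-component univariate normal mixture.
    x = np.asarray(x, float)
    mu = np.percentile(x, [25, 75])          # deterministic initialization
    sigma = np.full(2, x.std())
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities (the Gaussian constant cancels out)
        dens = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = pi * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: weighted updates of the weights, means and variances
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        sigma = np.maximum(sigma, 1e-6)      # guard against collapse
    return np.sort(mu)                        # sort to keep labels stable

def bootstrap_se_of_means(x, n_boot=50, seed=1):
    # Nonparametric bootstrap standard errors of the component means.
    rng = np.random.default_rng(seed)
    fits = np.array([em_two_normals(rng.choice(x, size=len(x), replace=True))
                     for _ in range(n_boot)])
    return fits.std(axis=0, ddof=1)
```

Sorting the means after each fit is a crude guard against label switching between bootstrap replicates; it suffices when the components are well separated.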
Abstract:
Computer modelling has shown that electrical characteristics of individual pixels may be extracted from within multiple-frequency electrical impedance tomography (MFEIT) images formed using a reference data set obtained from a purely resistive, homogeneous medium. In some applications it is desirable to extract the electrical characteristics of individual pixels from images where a purely resistive, homogeneous reference data set is not available. One such application of the technique of MFEIT is to allow the acquisition of in vivo images using reference data sets obtained from a non-homogeneous medium with a reactive component. However, the reactive component of the reference data set introduces difficulties with the extraction of the true electrical characteristics from the image pixels. This study was a preliminary investigation of a technique to extract electrical parameters from multifrequency images when the reference data set has a reactive component. Unlike the situation in which a homogeneous, resistive data set is available, it is not possible to obtain the impedance and phase information directly from the image pixel values of the MFEIT image data set, as the phase of the reactive reference is not known. The method reported here to extract the electrical characteristics (the Cole-Cole plot) initially assumes that this phase angle is zero. With this assumption, an impedance spectrum can be directly extracted from the image set. To obtain the true Cole-Cole plot a correction must be applied to account for the inherent rotation of the extracted impedance spectrum about the origin, which is a result of the assumption. This work shows that the angle of rotation associated with the reactive component of the reference data set may be determined using a priori knowledge of the distribution of frequencies of the Cole-Cole plot.
Using this angle of rotation, the true Cole-Cole plot can be obtained from the impedance spectrum extracted from the MFEIT image data set. The method was investigated using simulated data, both with and without noise, and also for image data obtained in vitro. The in vitro studies involved 32 logarithmically spaced frequencies from 4 kHz up to 1 MHz and demonstrated that differences between the true characteristics and those of the impedance spectrum were reduced significantly after application of the correction technique. The differences between the extracted parameters and the true values prior to correction were in the range from 16% to 70%. Following application of the correction technique the differences were reduced to less than 5%. The parameters obtained from the Cole-Cole plot may be useful as a characterization of the nature and health of the imaged tissues.
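The rotation correction itself is a single complex multiplication. A minimal sketch (assuming, for illustration only, that the a priori knowledge amounts to requiring the lowest-frequency point of the spectrum to lie on the real axis; the paper's actual criterion uses the distribution of frequencies on the Cole-Cole plot):

```python
import numpy as np

def correct_cole_cole(z):
    # z: complex impedance spectrum ordered from lowest to highest
    # frequency. The unknown phase of the reactive reference appears
    # as a rigid rotation of the whole spectrum about the origin, so
    # undoing it is multiplication by exp(-i*theta).
    z = np.asarray(z, dtype=complex)
    theta = np.angle(z[0])   # assumption: lowest frequency ~ purely resistive
    return z * np.exp(-1j * theta)
```

Because the rotation is rigid, the shape of the Cole-Cole arc is preserved; only its orientation in the complex plane is restored.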
Abstract:
Background: Studies have investigated the influence of neuromuscular electrostimulation on the exercise/muscle capacity of patients with heart failure (HF), but the hemodynamic overload has never been investigated. The aim of our study was to evaluate the heart rate (HR), systolic and diastolic blood pressures in one session of strength exercises with and without neuromuscular electrostimulation (quadriceps) in HF patients and in healthy subjects. Methods: Ten (50% male) HF patients and healthy subjects performed three sets of eight repetitions with and without neuromuscular electrostimulation randomly, with one week between sessions. Throughout, electromyography was performed to guarantee the electrostimulation was effective. The hemodynamic variables were measured at rest, again immediately after the end of each set of exercises, and during the recovery period. Results: Systolic and diastolic blood pressures did not change during each set of exercises among either the HF patients or the controls. Without electrostimulation: among the controls, the HR corresponding to the first (85 +/- 13 bpm, p = 0.002), second (84 +/- 10 bpm, p < 0.001), third (89 +/- 17 bpm, p < 0.001) sets and recuperation (83 +/- 16 bpm, p = 0.012) were different compared to the resting HR (77 bpm). Moreover, the recuperation was different to the third set (p = 0.018). Among HF patients, the HR corresponding to the first (84 +/- 9 bpm, p = 0.041) and third (84 +/- 10 bpm, p = 0.036) sets were different compared to the resting HR (80 +/- 7 bpm), but this increase of 4 bpm is clinically irrelevant to HF. With electrostimulation: among the controls, the HR corresponding to the third set (84 +/- 9 bpm) was different compared to the resting HR (80 +/- 7 bpm, p = 0.016). Among HF patients, there were no statistical differences between the sets. The procedure was well tolerated and no subjects reported muscle pain after 24 hours.
Conclusions: One session of strength exercises with and without neuromuscular electrostimulation does not promote a hemodynamic overload in HF patients. (Cardiol J 2011; 18,1: 39-46)
Abstract:
In order to analyse the effect of modelling assumptions in a formal, rigorous way, a syntax of modelling assumptions has been defined. The syntax of modelling assumptions enables us to represent modelling assumptions as transformations acting on the set of model equations. The notion of syntactical correctness and semantical consistency of sets of modelling assumptions is defined and methods for checking them are described. It is shown on a simple example how different modelling assumptions act on the model equations and their effect on the differential index of the resulting model is also indicated.
Abstract:
Purpose: To evaluate the changes over time in the pattern and extent of parenchymal abnormalities in asbestos-exposed workers after cessation of exposure and to compare 3 proposed semiquantitative methods with a careful side-by-side comparison of the initial and the follow-up computed tomography (CT) images. Materials and Methods: The study included 52 male asbestos workers (mean age +/- SD, 62.2 +/- 8.2 years) who had baseline high-resolution CT after cessation of exposure and follow-up CT 3 to 5 years later. Two independent thoracic radiologists quantified the findings according to the scoring systems proposed by Huuskonen, Gamsu, and Sette and then did a side-by-side comparison of the 2 sets of scans without awareness of the dates of the CT scans. Results: There was no difference in the prevalence of the 2 most common parenchymal abnormalities (centrilobular small dotlike or branching opacities and interstitial lines) between the initial and follow-up CT scans. Honeycombing (20%) and traction bronchiectasis and bronchiolectasis (50%) were seen more commonly on the follow-up CT than on the initial examination (10% and 33%, respectively) (P = 0.01). Increased extent of parenchymal abnormalities was evident on side-by-side comparison in 42 (81%) patients but resulted in an increase in score in at least 1 semiquantitative system in only 16 (31%) patients (all P > 0.01, signed test). Conclusions: The majority of patients with previous asbestos exposure show evidence of progression of disease on CT at 3 to 5 years follow-up but this progression is usually not detected by the 3 proposed semiquantitative scoring schemes.
Abstract:
The identification, modeling, and analysis of interactions between nodes of neural systems in the human brain have become the focus of many studies in neuroscience. The complex neural network structure and its correlations with brain functions have played a role in all areas of neuroscience, including the comprehension of cognitive and emotional processing. Indeed, understanding how information is stored, retrieved, processed, and transmitted is one of the ultimate challenges in brain research. In this context, in functional neuroimaging, connectivity analysis is a major tool for the exploration and characterization of the information flow between specialized brain regions. In most functional magnetic resonance imaging (fMRI) studies, connectivity analysis is carried out by first selecting regions of interest (ROI) and then calculating an average BOLD time series (across the voxels in each cluster). Some studies have shown that the average may not be a good choice and have suggested, as an alternative, the use of principal component analysis (PCA) to extract the principal eigen-time series from the ROI(s). In this paper, we introduce a novel approach called cluster Granger analysis (CGA) to study connectivity between ROIs. The main aim of this method was to employ multiple eigen-time series in each ROI to avoid temporal information loss during identification of Granger causality. Such information loss is inherent in averaging (e.g., to yield a single "representative" time series per ROI). This, in turn, may lead to a lack of power in detecting connections. The proposed approach is based on multivariate statistical analysis and integrates PCA and partial canonical correlation in a framework of Granger causality for clusters (sets) of time series. We also describe an algorithm for statistical significance testing based on bootstrapping.
By using Monte Carlo simulations, we show that the proposed approach outperforms conventional Granger causality analysis (i.e., using representative time series extracted by signal averaging or first principal components estimation from ROIs). The usefulness of the CGA approach in real fMRI data is illustrated in an experiment using human faces expressing emotions. With this data set, the proposed approach suggested the presence of significantly more connections between the ROIs than were detected using a single representative time series in each ROI. (c) 2010 Elsevier Inc. All rights reserved.
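Extracting eigen-time series from an ROI, as opposed to voxel averaging, can be sketched with a plain SVD (a generic illustration; the matrix shapes and component count are assumptions, and this is not the authors' CGA implementation):

```python
import numpy as np

def eigen_time_series(roi, n_components=3):
    # roi: (T, n_voxels) BOLD matrix for one ROI. Returns the first
    # n_components principal eigen-time series, shape (T, n_components),
    # which retain more temporal information than the voxel average.
    roi = roi - roi.mean(axis=0)                 # center each voxel series
    u, s, _ = np.linalg.svd(roi, full_matrices=False)
    return u[:, :n_components] * s[:n_components]
```

In the CGA framework these per-ROI component sets would then enter the Granger-causality test jointly, via partial canonical correlation, rather than one representative series at a time.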
Abstract:
A new conceptual model for soil pore-solid structure is formalized. Soil pore-solid structure is proposed to comprise spatially abutting elements each with a value which is its membership to the fuzzy set "pore," termed porosity. These values have a range between zero (all solid) and unity (all pore). Images are used to represent structures in which the elements are pixels and the value of each is a porosity. Two-dimensional random fields are generated by allocating each pixel a porosity by independently sampling a statistical distribution. These random fields are reorganized into other pore-solid structural types by selecting parent points which have a specified local region of influence. Pixels of larger or smaller porosity are aggregated about the parent points and within the region of interest by controlled swapping of pixels in the image. This creates local regions of homogeneity within the random field. This is similar to the process known as simulated annealing. The resulting structures are characterized using one- and two-dimensional variograms and functions describing their connectivity. A variety of examples of structures created by the model is presented and compared. Extension to three dimensions presents no theoretical difficulties and is currently under development.
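The reorganization step can be sketched as follows (a toy version under stated assumptions: a single parent point, a circular region of influence, and greedy swaps that move high-porosity pixels into the region; the real model samples porosities from a chosen statistical distribution and may use many parent points):

```python
import numpy as np

def aggregate_about_parent(field, centre, radius, swaps=5000, seed=0):
    # Swap pixels between the region of influence and the rest of the
    # image so that high-porosity pixels accumulate around the parent
    # point. Swapping preserves the overall porosity distribution;
    # only its spatial arrangement changes.
    rng = np.random.default_rng(seed)
    f = field.copy()
    yy, xx = np.indices(f.shape)
    inside = (yy - centre[0]) ** 2 + (xx - centre[1]) ** 2 <= radius ** 2
    in_idx = np.argwhere(inside)
    out_idx = np.argwhere(~inside)
    for _ in range(swaps):
        i = tuple(in_idx[rng.integers(len(in_idx))])
        o = tuple(out_idx[rng.integers(len(out_idx))])
        if f[o] > f[i]:              # accept only porosity-raising swaps
            f[i], f[o] = f[o], f[i]
    return f
```

Starting from an independently sampled random field and applying this to each parent point produces the local regions of homogeneity described above, in the manner of a simulated-annealing move set without a temperature schedule.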
Abstract:
Fogo selvagem (FS), the endemic form of pemphigus foliaceus (PF), is characterized by pathogenic anti-desmoglein 1 (DSG1) autoantibodies. To study the etiology of FS, hybridomas that secrete either IgM or IgG (predominantly IgG1 subclass) autoantibodies were generated from the B cells of eight FS patients and one individual 4 years before FS onset, and the H and L chain V genes of anti-DSG1 autoantibodies were analyzed. Multiple lines of evidence suggest that these anti-DSG1 autoantibodies are antigen selected. First, clonally related sets of anti-DSG1 hybridomas characterize the response in individual FS patients. Second, H and L chain V gene use seems to be biased, particularly among IgG hybridomas, and third, most hybridomas are mutants and exhibit a bias in favor of CDR (complementarity-determining region) amino acid replacement (R) mutations. Strikingly, pre-FS hybridomas also exhibit evidence of antigen selection, including an overlap in V(H) gene use and shared multiple R mutations with anti-DSG1 FS hybridomas, suggesting selection by the same or a similar antigen. We conclude that the anti-DSG1 response in FS is antigen driven and that selection for mutant anti-DSG1 B cells begins well before the onset of disease.
Abstract:
Fogo selvagem (FS) is mediated by pathogenic, predominantly IgG4, anti-desmoglein 1 (Dsg1) autoantibodies and is endemic in Limao Verde, Brazil. IgG and IgG subclass autoantibodies were tested in a sample of 214 FS patients and 261 healthy controls by Dsg1 ELISA. For model selection, the sample was randomly divided into training (50%), validation (25%), and test (25%) sets. Using the training and validation sets, IgG4 was chosen as the best predictor of FS, with index values above 6.43 classified as FS. Using the test set, IgG4 had sensitivity of 92% (95% confidence interval (CI): 82-95%), specificity of 97% (95% CI: 89-100%), and area under the curve of 0.97 (95% CI: 0.94-1.00). The IgG4 positive predictive value (PPV) in Limao Verde (3% FS prevalence) was 49%. The sensitivity, specificity, and PPV of IgG anti-Dsg1 were 87, 91, and 23%, respectively. The IgG4-based classifier was validated by testing 11 FS patients before and after clinical disease and 60 Japanese pemphigus foliaceus patients. It classified 21 of 96 normal individuals from a Limao Verde cohort as having FS serology. On the basis of its PPV, half of the 21 individuals may currently have preclinical FS and could develop clinical disease in the future. Identifying individuals during preclinical FS will enhance our ability to identify the etiological agent(s) triggering FS.
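The reported figures tie together via standard definitions; a small sketch (generic formulas, not the study's code; the 6.43 cutoff and the 3% prevalence come from the abstract):

```python
def sensitivity_specificity(values, labels, cutoff=6.43):
    # labels: True for confirmed FS, False for controls; a sample is
    # called positive when its ELISA index exceeds the cutoff.
    tp = sum(1 for v, l in zip(values, labels) if v > cutoff and l)
    fn = sum(1 for v, l in zip(values, labels) if v <= cutoff and l)
    tn = sum(1 for v, l in zip(values, labels) if v <= cutoff and not l)
    fp = sum(1 for v, l in zip(values, labels) if v > cutoff and not l)
    return tp / (tp + fn), tn / (tn + fp)

def ppv(sens, spec, prevalence):
    # Positive predictive value from Bayes' rule at a given prevalence.
    return sens * prevalence / (sens * prevalence
                                + (1 - spec) * (1 - prevalence))
```

With the abstract's sensitivity (0.92), specificity (0.97) and the 3% prevalence of Limao Verde, ppv(0.92, 0.97, 0.03) reproduces the reported PPV of roughly 49%, which is why about half of the 21 seropositive normals may truly have preclinical FS.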
Abstract:
Objectives: Questions about reliability of bioimpedance analysis (BIA) in morbidly obese subjects have curtailed its use in this setting, but metabolic implications might reignite the debate. This prospective study aimed to analyze anthropometric and clinical associations. Methods: Bariatric candidates (n = 94) with or without metabolic syndrome were consecutively investigated. Age was 34.9 +/- 10.4 years (68.1% females), and BMI was 40.8 +/- 4.6 kg m(-2). Methods included single-frequency BIA, anthropometrics, inflammatory indices, and general biochemical profile. Results: Body composition results (water, fat) in females, but not in males, were entirely consistent with the literature. In both genders good association was observed with anthropometrics (BMI, waist circumference), inflammatory indices (ferritin, C-reactive protein) and general biochemical variables. Anthropometric measurements also displayed comparable associations. Multivariate tests including the two sets of measurements indicated no predominance of one method over the other, one complementing the other as metabolic marker. Conclusions: BIA limitations were mostly relevant for males, not females. Despite such discrepancies, good associations with anthropometry were demonstrated for both genders. Correlations with liver enzymes, and indices of protein, carbohydrate, and lipid metabolism could be demonstrated. BIA deserves more investigations concerning liver steatosis and ongoing inflammation, and it could contribute as well, synergistically with anthropometry, to monitor weight loss, body fat shifts, and metabolic risk. Am. J. Hum. Biol. 23: 420-422, 2011. (c) 2011 Wiley-Liss, Inc.
Abstract:
Few studies have focused on the language acquisition of higher multiple birth sets. In this study, the communication skills of 51 triplet children are described. The measures used were: mean length of utterance; type-token ratio; conversational nets; phoneme repertoire; and number of different types of phonological processes used. The data gained were used to compare the communication skills of triplets with those of twins, singletons and normative data available in the literature. Siblings within triplet sets were also compared using language samples obtained from adult-child interactions and when the three children were playing together. The results indicated that the triplets' early communication skills were different from those of both singletons and twins. The triplets' difficulties included delayed syntactic development, limited use of different language functions and delayed phonological development. In contrast, twins' communication profile is characterised by disordered phonological development.
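Two of the measures listed above have simple operational definitions; a minimal sketch (word-based approximations for illustration only -- clinical MLU is usually counted in morphemes, not words):

```python
def mean_length_of_utterance(utterances):
    # MLU: average utterance length; approximated here in words.
    return sum(len(u.split()) for u in utterances) / len(utterances)

def type_token_ratio(utterances):
    # TTR: number of distinct words (types) over total words (tokens);
    # lower values suggest a less diverse vocabulary.
    tokens = [w.lower() for u in utterances for w in u.split()]
    return len(set(tokens)) / len(tokens)
```

Applied to transcribed language samples, depressed MLU and TTR values relative to singleton norms are the kind of evidence behind the delayed syntactic development reported for the triplets.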