979 results for Piecewise linear techniques
Abstract:
Respiratory muscle weakness can lead to dyspnea, retained bronchial secretions, and potentially fatal respiratory failure. Assessment of respiratory muscle strength is therefore indicated in neuromuscular disorders, but also when dyspnea remains unexplained after an initial cardiac and pulmonary work-up. On spirometry, muscle weakness is suspected from a flow-volume loop showing a blunted peak flow and premature termination of expiration. A marked fall in vital capacity in the supine position suggests diaphragmatic paralysis. Inspiratory strength is measured as the maximal inspiratory pressure (PImax) against a near-occluded airway. This relatively demanding test is difficult to interpret when patient cooperation is insufficient. Measurement of the sniff nasal inspiratory pressure (SNIP) is a useful alternative, since it eliminates the problem of leaks around the mouthpiece and the sniff manoeuvre is easy to perform. Likewise, the sniff transdiaphragmatic pressure measures diaphragm strength by means of esophageal and gastric catheters. When cooperation is insufficient, magnetic stimulation of the phrenic nerves, which elicits a non-volitional contraction of the diaphragm, can be used. Expiratory strength is measured as the maximal expiratory pressure (PEmax) against a near-occlusion. The strength available for coughing is assessed from gastric pressure during cough, or more simply from cough peak flow. In at-risk patients, measuring respiratory muscle strength makes it possible to initiate ventilatory or cough assistance in good time.
Abstract:
This project was carried out at the American Museum of Natural History (AMNH, New York) between 31 December 2010 and 30 December 2012. The aim of the project was to elucidate the evolutionary history of the human hand: to trace the evolutionary changes in its shape and proportions that led to its modern structure, which allows humans to manipulate with precision. The work included data collection and analysis, writing up of results, and training in specific analytical methods. During this time, the author completed his existing database of linear measurements of the hominoid hand. Data were also collected on the foot; as a result, the database now includes more than 500 individuals, with more than 200 measurements for each. Three-dimensional data were also collected using a laser scanner. 3D geometric morphometric techniques were learned directly from the pioneers of the field at the AMNH. This work has produced 10 abstracts (published at international conferences) and 9 manuscripts (many of them already published in international journals) with highly relevant results: the human hand has relatively primitive proportions, more similar to the proportions of Miocene fossil hominoids than to those of the extant great apes. The latter have elongated hands with very short thumbs, reflecting the use of the hand as a suspensory hook below branches. In contrast, Miocene hominoids had relatively short hands with a long thumb, which they used to stabilize their weight when walking on top of branches. Once the first hominins appeared at the end of the Miocene (about 6 Ma) and began to use bipedalism as their most common mode of locomotion, their hands were "freed" from locomotor functions. Natural selection, now acting only on manipulation, turned the already existing hand proportions of these primates into the manipulative organ that the human hand represents today.
Analysis and evaluation of techniques for the extraction of classes in the ontology learning process
Abstract:
This paper analyzes and evaluates, in the context of ontology learning, techniques for identifying and extracting terms that are candidates for classes of a taxonomy. In addition, it points out inconsistencies that may occur during preprocessing of the text corpus and proposes techniques for obtaining good candidate terms for taxonomy classes.
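The abstract does not specify which extraction techniques are evaluated, but a minimal frequency-based sketch of candidate-term extraction gives the flavor of the task (our illustration only; the stopword list, thresholds, and function name are assumptions, not the paper's method):

    # Minimal sketch of frequency-based candidate-term extraction for taxonomy
    # classes (illustrative; not the techniques evaluated in the paper).
    import re
    from collections import Counter

    STOPWORDS = {"the", "a", "an", "of", "and", "or", "in", "to", "for", "is", "are", "from"}

    def candidate_terms(corpus, max_len=3, min_freq=2):
        """Rank n-grams (n <= max_len) that do not start or end with a stopword."""
        counts = Counter()
        for doc in corpus:
            tokens = re.findall(r"[a-z]+", doc.lower())
            for n in range(1, max_len + 1):
                for i in range(len(tokens) - n + 1):
                    gram = tokens[i:i + n]
                    # Candidate class labels should not begin or end with a stopword.
                    if gram[0] in STOPWORDS or gram[-1] in STOPWORDS:
                        continue
                    counts[" ".join(gram)] += 1
        return [(term, c) for term, c in counts.most_common() if c >= min_freq]

    if __name__ == "__main__":
        docs = ["The ontology learning process extracts classes from a text corpus.",
                "Candidate terms for classes are extracted from the text corpus."]
        print(candidate_terms(docs))

Real pipelines typically add linguistic filters (part-of-speech patterns, lemmatization) and statistical measures such as TF-IDF or C-value on top of raw frequency.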
Abstract:
In a number of programs for gene structure prediction in higher eukaryotic genomic sequences, exon prediction is decoupled from gene assembly: a large pool of candidate exons is predicted and scored from features located in the query DNA sequence, and candidate genes are assembled from such a pool as sequences of nonoverlapping frame-compatible exons. Genes are scored as a function of the scores of the assembled exons, and the highest scoring candidate gene is assumed to be the most likely gene encoded by the query DNA sequence. Considering additive gene scoring functions, currently available algorithms to determine such a highest scoring candidate gene run in time proportional to the square of the number of predicted exons. Here, we present an algorithm whose running time grows only linearly with the size of the set of predicted exons. Polynomial algorithms rely on the fact that, while scanning the set of predicted exons, the highest scoring gene ending in a given exon can be obtained by appending the exon to the highest scoring among the highest scoring genes ending at each compatible preceding exon. The algorithm here relies on the simple fact that this highest scoring gene can be stored and updated. This requires scanning the set of predicted exons simultaneously by increasing acceptor and donor position. On the other hand, the algorithm described here does not assume an underlying gene structure model. Indeed, the definition of valid gene structures is externally defined in the so-called Gene Model. The Gene Model simply specifies which gene features are allowed immediately upstream of which other gene features in valid gene structures. This allows for great flexibility in formulating the gene identification problem. In particular, it allows for multiple-gene two-strand predictions and for considering gene features other than coding exons (such as promoter elements) in valid gene structures.
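The linear-scan idea can be illustrated with a short sketch (ours, not the paper's implementation): exons are scanned simultaneously by increasing acceptor (start) and donor (end) position, and the best score of any gene finished so far is stored and updated. Frame compatibility and the external Gene Model are omitted here; only non-overlap is enforced, and the function name and coordinate convention are assumptions.

    def assemble_best_gene(exons):
        """exons: iterable of (start, end, score) with start <= end.
        Returns the best additive score of a chain of non-overlapping exons."""
        # One record per exon; recs[i][3] will hold the best score of a gene
        # ending in that exon.
        recs = [[s, e, sc, 0.0] for (s, e, sc) in exons]
        by_start = sorted(recs, key=lambda r: r[0])   # scan by increasing acceptor
        by_end = sorted(recs, key=lambda r: r[1])     # scan by increasing donor
        best_finished = 0.0   # best gene whose last donor lies before the current acceptor
        best_total = 0.0
        j = 0
        for rec in by_start:
            # Fold in every exon whose gene could legally precede this one.
            while j < len(by_end) and by_end[j][1] < rec[0]:
                best_finished = max(best_finished, by_end[j][3])
                j += 1
            rec[3] = rec[2] + best_finished           # append exon to the stored best gene
            best_total = max(best_total, rec[3])
        return best_total

    if __name__ == "__main__":
        # Three candidate exons; the best gene chains the first and third (score 7.0).
        print(assemble_best_gene([(10, 50, 3.0), (40, 90, 5.0), (60, 120, 4.0)]))

After the two sorts, each exon is touched a constant number of times, which is the source of the linear scaling described above.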
Abstract:
Error-correcting codes and matroids have been widely used in the study of ordinary secret sharing schemes. In this paper, the connections between codes, matroids, and a special class of secret sharing schemes, namely, multiplicative linear secret sharing schemes (LSSSs), are studied. Such schemes are known to enable multiparty computation protocols secure against general (nonthreshold) adversaries. Two open problems related to the complexity of multiplicative LSSSs are considered in this paper. The first one deals with strongly multiplicative LSSSs. As opposed to the case of multiplicative LSSSs, it is not known whether there is an efficient method to transform an LSSS into a strongly multiplicative LSSS for the same access structure with a polynomial increase of the complexity. A property of strongly multiplicative LSSSs that could be useful in solving this problem is proved. Namely, using a suitable generalization of the well-known Berlekamp–Welch decoder, it is shown that all strongly multiplicative LSSSs enable efficient reconstruction of a shared secret in the presence of malicious faults. The second one is to characterize the access structures of ideal multiplicative LSSSs. Specifically, the considered open problem is to determine whether all self-dual vector space access structures are in this situation. By the aforementioned connection, this in fact constitutes an open problem about matroid theory, since it can be restated in terms of representability of identically self-dual matroids by self-dual codes. A new concept is introduced, the flat-partition, which provides a useful classification of identically self-dual matroids. Uniform identically self-dual matroids, which are known to be representable by self-dual codes, form one of the classes. It is proved that this property also holds for the family of matroids that, in a natural way, is the next class in the above classification: the identically self-dual bipartite matroids.
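For reference, the multiplicative property referred to above is usually defined as follows (the notation below is ours, not the paper's):

    % Standard definition of multiplicativity for an LSSS over a field K (notation ours).
    % Player i holds share s_i(a) of secret a; let s_i(a) \diamond s_i(b) denote the
    % vector of all pairwise products of the coordinates of player i's shares of a
    % and b, a quantity player i can compute locally.
    \[
      \text{multiplicative:}\qquad
      \exists\,\lambda_1,\dots,\lambda_n \ \text{such that}\quad
      ab \;=\; \sum_{i=1}^{n} \bigl\langle \lambda_i,\ s_i(a)\diamond s_i(b) \bigr\rangle
      \quad\text{for all secrets } a,b .
    \]
    % The scheme is strongly multiplicative if, for every set A in the adversary
    % structure, such a recombination exists that uses only the local products of
    % the players outside A.

Shamir's threshold scheme with n >= 2t+1 players is the canonical example of a (strongly) multiplicative LSSS, since the pointwise product of two degree-t share polynomials is a degree-2t polynomial that the honest players can still interpolate.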
Investigation into Improved Pavement Curing Materials and Techniques: Part 2 - Phase III, March 2003
Abstract:
Appropriate curing is important for concrete to obtain its designed properties. This research was conducted to evaluate the effects of different curing materials and methods on pavement properties. At present, spray-applied curing compound is a commonly used method for pavement and other concrete structure construction. Three curing compounds were selected for testing. Two different application rates were employed for the white-pigmented liquid curing compounds. The concrete properties of temperature, moisture content, conductivity, and permeability were examined at several test locations. It was found in this project that the concrete properties varied with depth. Of the tests conducted (maturity, sorptivity, permeability, and conductivity), conductivity appears to be the best method to evaluate curing effects in the field and shows potential for field application. The results indicated that currently approved curing materials in Iowa, when spread uniformly in a single or double application, provide adequate curing protection and meet the goals of the Iowa Department of Transportation. Experimental curing methods can be compared to this method through the use of conductivity testing to determine their suitability for field application.
Abstract:
Concrete curing is closely related to cement hydration, microstructure development, and concrete performance. Application of a liquid membrane-forming curing compound is among the most widely used curing methods for concrete pavements and bridge decks. Curing compounds are economical, easy to apply, and maintenance free. However, limited research has been done to investigate the effectiveness of different curing compounds and their application technologies. No reliable standard testing method is available to evaluate the effectiveness of curing, especially of field concrete curing. The present research investigates the effects of curing compound materials and application technologies on concrete properties, especially the properties of surface concrete. This report presents a literature review of curing technology, with an emphasis on curing compounds, and the experimental results from the first part of this research, the lab investigation. In the lab investigation, three curing compounds were selected and applied to mortar specimens at three different times after casting. Two application methods, single- and double-layer applications, were employed. Moisture content, conductivity, sorptivity, and degree of hydration were measured at different depths of the specimens. Flexural and compressive strength of the specimens were also tested. Statistical analysis was conducted to examine the relationships between these material properties. The research results indicate that application of a curing compound significantly increased the moisture content and degree of cement hydration and reduced the sorptivity of the near-surface concrete. For given concrete materials and mix proportions, the optimal application time of curing compounds depended primarily upon the weather conditions. If a sufficient amount of a high-efficiency-index curing compound was uniformly applied, no double-layer application was necessary. Among all test methods applied, the sorptivity test was the most sensitive, providing a good indication of the subtle changes in the microstructure of the near-surface concrete caused by different curing materials and application methods. Sorptivity is closely related to moisture content and degree of hydration. The research results have established a baseline for and provided insight into the further development of testing procedures for evaluating curing compounds in the field. Recommendations are provided for further field study.
Abstract:
A systolic array to implement lattice-reduction-aided linear detection is proposed for a MIMO receiver. The lattice reduction algorithm and the ensuing linear detections are operated in the same array, which can be hardware-efficient. The all-swap lattice reduction algorithm (ASLR) is considered for the systolic design. ASLR is a variant of the LLL algorithm which processes all lattice basis vectors within one iteration. Lattice-reduction-aided linear detection based on the ASLR and LLL algorithms has very similar bit-error-rate performance, while ASLR is more time-efficient in the systolic array, especially for systems with a large number of antennas.
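As background for the technique the paper builds on, the sketch below shows generic lattice-reduction-aided zero-forcing detection using a plain sequential textbook LLL reduction rather than the all-swap variant, and in software rather than a systolic array. The real-valued model, the delta parameter, and the function names are our assumptions, not the paper's design.

    # Illustrative lattice-reduction-aided ZF detector (not the paper's systolic/ASLR design).
    import numpy as np

    def lll_reduce(B, delta=0.75):
        """LLL-reduce the columns of B; return (B_reduced, T) with B_reduced = B @ T
        and T unimodular. Slow textbook version that recomputes Gram-Schmidt each step."""
        B = B.astype(float).copy()
        m = B.shape[1]
        T = np.eye(m, dtype=int)

        def gram_schmidt(B):
            Q = np.zeros_like(B)
            mu = np.zeros((m, m))
            for i in range(m):
                Q[:, i] = B[:, i]
                for j in range(i):
                    mu[i, j] = (B[:, i] @ Q[:, j]) / (Q[:, j] @ Q[:, j])
                    Q[:, i] = Q[:, i] - mu[i, j] * Q[:, j]
            return Q, mu

        k = 1
        while k < m:
            for j in range(k - 1, -1, -1):           # size-reduce column k against column j
                _, mu = gram_schmidt(B)
                q = int(round(mu[k, j]))
                if q:
                    B[:, k] -= q * B[:, j]
                    T[:, k] -= q * T[:, j]
            Q, mu = gram_schmidt(B)
            if Q[:, k] @ Q[:, k] >= (delta - mu[k, k - 1] ** 2) * (Q[:, k - 1] @ Q[:, k - 1]):
                k += 1                                # Lovász condition satisfied
            else:
                B[:, [k - 1, k]] = B[:, [k, k - 1]]   # swap neighbouring columns, step back
                T[:, [k - 1, k]] = T[:, [k, k - 1]]
                k = max(k - 1, 1)
        return B, T

    def lr_zf_detect(H, y):
        """Zero-forcing in the LLL-reduced basis, quantize, then map back."""
        Hr, T = lll_reduce(H)
        z_hat = np.round(np.linalg.pinv(Hr) @ y).astype(int)
        return T @ z_hat

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        H = rng.normal(size=(4, 4))                  # real-valued 4x4 channel
        x = rng.integers(-3, 4, size=4)              # integer symbol vector
        y = H @ x + 0.01 * rng.normal(size=4)
        print("sent:", x, "detected:", lr_zf_detect(H, y))

The ASLR variant would replace the sequential swap/step-back loop with a pass that tests and swaps all neighbouring column pairs in parallel, which is what makes it attractive for a systolic array.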
Abstract:
Standards for the construction of full-depth patches in portland cement concrete pavement usually require replacement of all deteriorated base materials with crushed stone, up to the bottom of the existing pavement layer. In an effort to reduce patch construction time and costs, the Iowa Department of Transportation and the Department of Civil, Construction and Environmental Engineering at Iowa State University studied the use of extra concrete depth as an option for base construction. This report compares the impact of additional concrete patching material depth on rate of strength gain, potential for early opening to traffic, patching costs, and long-term patch performance. It also compares those characteristics for early-setting and standard concrete mixes. The results have the potential to change the method of portland cement concrete pavement patch construction in Iowa.
Abstract:
STATEMENT OF PROBLEM: Wear of methacrylate artificial teeth resulting in vertical loss is a problem for both dentists and patients. PURPOSE: The purpose of this study was to quantify wear of artificial teeth in vivo and to relate it to subject and tooth variables. MATERIAL AND METHODS: Twenty-eight subjects treated with complete dentures received 2 artificial tooth materials (polymethyl methacrylate (PMMA)/double cross-linked PMMA fillers: 35%/59% for SR Antaris DCL and SR Postaris DCL; 48%/46% for the experimental material). At baseline and after 12 months, impressions of the dentures were poured with improved stone. After laser scanning, the casts were superimposed and matched. Maximal vertical loss (mm) and volumetric loss (mm³) were calculated for each tooth and log-transformed to reduce variability. Volumetric loss was related to the occlusally active surface area. Linear mixed models were used to study the influence of the factors jaw, tooth, and material on adjusted (residual) wear values (alpha=.05). RESULTS: Due to dropouts (n=5) and unmatchable casts (n=3), 69% of all teeth were analyzed. Volumetric loss had a strong linear relationship to surface area (P<.001); this was less pronounced for vertical loss (P=.004). The factor showing the highest influence was the subject. Wear was tooth dependent (increasing from incisors to molars). However, these differences diminished once the wear rates were adjusted for occlusal area, and only a few remained significant (anterior versus posterior maxillary teeth). Another influencing factor was the age of the subject. CONCLUSIONS: Clinical wear of artificial teeth is higher than previously measured or expected. The presented method of analyzing wear of artificial teeth using a laser-scanning device seemed suitable.
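A hypothetical sketch of the kind of linear mixed model described above (log-transformed wear with fixed effects for jaw, tooth, and material and a random intercept per subject), fitted here with statsmodels on synthetic data; the column names, factor levels, and data are ours, not the study's, and the authors may have used different software.

    # Hypothetical mixed-model sketch (not the authors' analysis).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    teeth = ["incisor", "canine", "premolar", "molar"]
    rows = []
    for subj in range(28):                       # 28 subjects, as in the study
        subj_effect = rng.normal(0, 0.2)         # between-subject variability
        for jaw in ("maxilla", "mandible"):
            for i, tooth in enumerate(teeth):
                rows.append({
                    "subject": subj,
                    "jaw": jaw,
                    "tooth": tooth,
                    "material": "A" if subj % 2 == 0 else "B",
                    "log_wear": 0.1 * i + subj_effect + rng.normal(0, 0.1),
                })
    df = pd.DataFrame(rows)

    # Fixed effects for jaw, tooth and material; random intercept per subject.
    model = smf.mixedlm("log_wear ~ jaw + tooth + material", df, groups=df["subject"])
    print(model.fit().summary())

Treating the subject as a random effect is what lets the analysis report that the subject was the factor with the highest influence while still estimating the tooth and material effects.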
Abstract:
An important statistical development of the last 30 years has been the advance in regression analysis provided by generalized linear models (GLMs) and generalized additive models (GAMs). Here we introduce a series of papers prepared within the framework of an international workshop entitled: Advances in GLMs/GAMs modeling: from species distribution to environmental management, held in Riederalp, Switzerland, 6-11 August 2001. We first discuss some general uses of statistical models in ecology, as well as provide a short review of several key examples of the use of GLMs and GAMs in ecological modeling efforts. We next present an overview of GLMs and GAMs, and discuss some of their related statistics used for predictor selection, model diagnostics, and evaluation. Included is a discussion of several new approaches applicable to GLMs and GAMs, such as ridge regression, an alternative to stepwise selection of predictors, and methods for the identification of interactions by a combined use of regression trees and several other approaches. We close with an overview of the papers and how we feel they advance our understanding of their application to ecological modeling.
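A minimal sketch of the kind of GLM discussed above: a binomial GLM for species presence/absence against environmental predictors, fitted with statsmodels on synthetic data (predictor names and coefficients are invented for illustration and are not from the workshop papers); GAMs extend the same idea by replacing the linear terms with smooth functions of the predictors.

    # Binomial GLM for synthetic presence/absence data (illustrative only).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    n = 500
    elevation = rng.uniform(200, 2500, n)          # m a.s.l. (hypothetical predictor)
    precipitation = rng.uniform(400, 2000, n)      # mm / year (hypothetical predictor)
    # Simulate presence with a logistic response to both predictors.
    eta = -4.0 + 0.002 * elevation + 0.001 * precipitation
    presence = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))

    X = sm.add_constant(np.column_stack([elevation, precipitation]))
    glm = sm.GLM(presence, X, family=sm.families.Binomial())
    res = glm.fit()
    print(res.params)      # intercept and slopes on the logit scale
    print(res.deviance)    # deviance, a common GLM diagnostic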
Abstract:
Pavement settlement occurring in and around utility cuts is a common problem, resulting in uneven pavement surfaces, annoyance to drivers, and ultimately, further maintenance. A survey of municipal authorities and field and laboratory investigations were conducted to identify the factors contributing to the settlement of utility cut restorations in pavement sections. Survey responses were received from seven cities across Iowa and indicate that utility cut restorations often last less than two years. Observations made during site inspections showed that backfill material varies from one city to another, backfill lift thickness often exceeds 12 inches, and the backfill material is often placed at bulking moisture contents with no quality control/quality assurance. Laboratory investigation of the backfill materials indicates that at the field moisture contents encountered, the backfill materials have collapse potentials up to 35%. Falling Weight Deflectometer (FWD) deflection data and elevation shots indicate that the maximum deflection in the pavement occurs in the area around the utility cut restoration. The FWD data indicate a zone of influence around the perimeter of the restoration extending two to three feet beyond the trench perimeter. The research team proposes moisture control, the use of granular fill compacted to 65% relative density, and removing and compacting the native material near the ground surface around the trench. Test sections with geogrid reinforcement were also incorporated. The performance of inspected and proposed utility cuts needs to be monitored for at least two more years.
Abstract:
BACKGROUND: Analysis of the first reported complete genome sequence of Bifidobacterium longum NCC2705, an actinobacterium colonizing the gastrointestinal tract, uncovered its proteomic relatedness to Streptomyces coelicolor and Mycobacterium tuberculosis. However, a rapid scrutiny by genometric methods revealed a genome organization totally different from all so far sequenced high-GC Gram-positive chromosomes. RESULTS: Generally, the cumulative GC- and ORF orientation skew curves of prokaryotic genomes consist of two linear segments of opposite slope: the minimum and the maximum of the curves correspond to the origin and the terminus of chromosome replication, respectively. However, analyses of the B. longum NCC2705 chromosome yielded six, instead of two, linear segments, while its dnaA locus, usually associated with the origin of replication, was not located at the minimum of the curves. Furthermore, the coorientation of gene transcription with replication was very low. Comparison with closely related actinobacteria strongly suggested that the chromosome of B. longum was misassembled, and the identification of two pairs of relatively long homologous DNA sequences offers the possibility of an alternative genome assembly, proposed below. By genometric criteria, this configuration displays all of the characters common to bacteria, in particular to related high-GC Gram-positives. In addition, it is compatible with the partially sequenced genome of B. longum strain DJO10A. Recently, a corrected sequence of B. longum NCC2705, with a configuration similar to the one proposed here, has been deposited in GenBank, confirming our predictions. CONCLUSION: Genometric analyses, in conjunction with standard bioinformatic tools and knowledge of bacterial chromosome architecture, represent fast and straightforward methods for the evaluation of chromosome assembly.
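A minimal sketch of the cumulative GC-skew computation underlying such genometric analyses (our illustration, not the paper's software; window size and names are assumptions). In most bacterial chromosomes the running sum of the per-window skew (G - C)/(G + C) produces the two linear segments described above, with the minimum and maximum marking the origin and terminus of replication.

    # Illustrative cumulative GC-skew curve (not the paper's genometric tools).
    import numpy as np

    def cumulative_gc_skew(sequence, window=1000):
        """Cumulative sum of the GC skew (G - C)/(G + C) per non-overlapping window."""
        seq = sequence.upper()
        skews = []
        for i in range(0, len(seq) - window + 1, window):
            w = seq[i:i + window]
            g, c = w.count("G"), w.count("C")
            skews.append((g - c) / (g + c) if g + c else 0.0)
        return np.cumsum(skews)

    if __name__ == "__main__":
        # Toy chromosome: a G-rich first half and a C-rich second half, so the
        # cumulative skew rises and then falls again.
        toy = "GGGATC" * 5000 + "CCCATG" * 5000
        cum = cumulative_gc_skew(toy)
        print("minimum (origin-like) near window", int(np.argmin(cum)))
        print("maximum (terminus-like) near window", int(np.argmax(cum)))

A chromosome assembly that yields six segments instead of two, as reported for NCC2705, is the kind of anomaly this simple curve makes immediately visible.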
Abstract:
PURPOSE: To compare examination time and radiologist time and to measure radiation dose of computed tomographic (CT) fluoroscopy, conventional CT, and conventional fluoroscopy as guiding modalities for shoulder CT arthrography. MATERIALS AND METHODS: Glenohumeral injection of contrast material for CT arthrography was performed in 64 consecutive patients (mean age, 32 years; age range, 16-74 years) and was guided with CT fluoroscopy (n = 28), conventional CT (n = 14), or conventional fluoroscopy (n = 22). Room times (arthrography, room change, CT, and total examination times) and radiologist times (time the radiologist spent in the fluoroscopy or CT room) were measured. One-way analysis of variance and Bonferroni-Dunn post hoc tests were performed for comparison of mean times. Mean effective radiation dose was calculated for each method with examination data, phantom measurements, and standard software. RESULTS: Mean total examination time was 28.0 minutes for CT fluoroscopy, 28.6 minutes for conventional CT, and 29.4 minutes for conventional fluoroscopy; mean radiologist time was 9.9 minutes, 10.5 minutes, and 9.0 minutes, respectively. These differences were not statistically significant. Mean effective radiation dose was 0.0015 mSv for conventional fluoroscopy (mean, nine sections), 0.22 mSv for CT fluoroscopy (120 kV; 50 mA; mean, 15 sections), and 0.96 mSv for conventional CT (140 kV; 240 mA; mean, six sections). Effective radiation dose can be reduced to 0.18 mSv for conventional CT by changing imaging parameters to 120 kV and 100 mA. Mean effective radiation dose of the diagnostic CT arthrographic examination (140 kV; 240 mA; mean, 25 sections) was 2.4 mSv. CONCLUSION: CT fluoroscopy and conventional CT are valuable alternative modalities for glenohumeral CT arthrography, as examination and radiologist times are not significantly different. CT guidance requires a greater radiation dose than does conventional fluoroscopy, but with adequate parameters CT guidance constitutes approximately 8% of the radiation dose.
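The conclusion's figure of approximately 8% can be checked against the reported doses; the abstract does not state which guidance dose the authors used relative to the 2.4 mSv diagnostic examination, so both ratios are shown (our arithmetic):

    \[
      \frac{0.18\ \text{mSv (optimized conventional CT guidance)}}{2.4\ \text{mSv (diagnostic CT arthrography)}} \approx 7.5\%,
      \qquad
      \frac{0.22\ \text{mSv (CT fluoroscopy guidance)}}{2.4\ \text{mSv}} \approx 9.2\% .
    \]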