916 results for automatic test case generation
Abstract:
Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations, and it is a focus of research in fields as diverse as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system, and multiple studies suggest that observers use facial information to infer personality characteristics. Interactive computer systems are trying to take advantage of these findings to make interaction more natural and to improve system performance. Here, we experimentally test whether the automatic prediction of facial trait judgments (e.g. dominance) can be made using the full appearance information of the face, or whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information, and a structural model constructed from the relations among salient facial points. State-of-the-art machine learning methods are applied to a) derive a facial trait judgment model from training data and b) predict a facial trait value for any face. Furthermore, we address whether there are specific structural relations among facial points that predict the perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that a) prediction of the perception of facial traits is learnable by both holistic and structural approaches; b) the most reliable predictions of facial trait judgments are obtained by a certain type of holistic description of facial appearance; and c) for some traits, such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions.
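As a rough illustration of the two representations compared here, the sketch below (not the authors' implementation; the data shapes, landmark count and SVM classifier are assumptions) trains the same classifier on a holistic appearance vector and on pairwise distances between salient facial points.

```python
# Minimal sketch of the two feature strategies: holistic appearance vector
# versus structural distances between salient points.  Synthetic data only.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_faces = 200

# Holistic representation: flattened grey-level appearance of aligned faces.
X_holistic = rng.random((n_faces, 32 * 32))

# Structural representation: pairwise distances between salient points.
landmarks = rng.random((n_faces, 20, 2))            # 20 (x, y) points per face
idx_a, idx_b = np.triu_indices(20, k=1)
X_structural = np.linalg.norm(
    landmarks[:, idx_a, :] - landmarks[:, idx_b, :], axis=2)

# Binary trait labels (e.g. "judged dominant" vs "not"); random here.
y = rng.integers(0, 2, n_faces)

for name, X in [("holistic", X_holistic), ("structural", X_structural)]:
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name:10s} cross-validated accuracy: {acc:.2f}")
```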
Abstract:
For the last two decades, supertree reconstruction has been an active field of research and has seen the development of a large number of major algorithms. Because of the growing popularity of supertree methods, it has become necessary to evaluate the performance of these algorithms to determine which are the best options (especially with regard to the widely used supermatrix approach). In this study, seven of the most commonly used supertree methods are investigated using a large empirical data set (in terms of number of taxa and molecular markers) from the worldwide flowering plant family Sapindaceae. Supertree methods were evaluated using several criteria: similarity of the supertrees with the input trees, similarity between the supertrees and the total evidence tree, level of resolution of the supertree, and computational time required by the algorithm. Additional analyses were also conducted on a reduced data set to test whether performance was affected by the heuristic searches rather than by the algorithms themselves. Based on our results, two main groups of supertree methods were identified: on the one hand, the matrix representation with parsimony (MRP), MinFlip, and MinCut methods performed well according to our criteria, whereas the average consensus, split fit, and most similar supertree methods showed poorer performance or at least did not behave in the same way as the total evidence tree. Results for the super distance matrix, the most recent approach tested here, were promising, with at least one derived method performing as well as MRP, MinFlip, and MinCut. The output of each method was only slightly improved when applied to the reduced data set, suggesting correct behavior of the heuristic searches and a relatively low sensitivity of the algorithms to data set size and missing data. Results also showed that the MRP analyses could reach a high level of quality even when using a simple heuristic search strategy, with the exception of MRP with the Purvis coding scheme and reversible parsimony. The future of supertrees lies in the implementation of a standardized heuristic search for all methods and in increased computing power to handle large data sets. The latter would prove particularly useful for promising approaches such as the maximum quartet fit method, which still requires substantial computing power.
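For readers unfamiliar with matrix representation with parsimony (MRP), the sketch below shows the coding step on toy trees (illustrative data, not the Sapindaceae set): each internal clade of each input tree becomes a binary character, with a question mark for taxa missing from that tree; the resulting matrix is then analysed with parsimony.

```python
# MRP coding sketch: 1 = taxon in the clade, 0 = in the tree but outside
# the clade, ? = taxon absent from that input tree.
def mrp_matrix(input_trees, all_taxa):
    """input_trees: list of lists of clades; a clade is a frozenset of taxa."""
    columns = []
    for tree_clades in input_trees:
        tree_taxa = set().union(*tree_clades)
        for clade in tree_clades:
            if 1 < len(clade) < len(tree_taxa):      # skip trivial clades
                col = {t: ("1" if t in clade else "0") if t in tree_taxa else "?"
                       for t in all_taxa}
                columns.append(col)
    return {t: "".join(col[t] for col in columns) for t in all_taxa}

trees = [
    [frozenset("AB"), frozenset("ABC"), frozenset("ABCD")],   # (((A,B),C),D)
    [frozenset("BC"), frozenset("BCE"), frozenset("BCDE")],   # (((B,C),E),D)
]
for taxon, row in mrp_matrix(trees, "ABCDE").items():
    print(taxon, row)
```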
Abstract:
Résumé: The Wnt signalling pathway is highly conserved throughout evolution. Wnt proteins are secreted molecules that bind to the Frizzled family of receptors. This interaction leads to the stabilization of β-catenin, which accumulates in the cytoplasm and then migrates into the nucleus, where it can heterodimerize with transcription factors of the TCF/LEF family. This signalling pathway has been shown to play an important role during lymphopoiesis, and recent results suggest a key role for it in the self-renewal of haematopoietic stem cells (HSCs). Studies based on protein overexpression systems clearly show that the Wnt pathway can influence haematopoiesis. However, the role of β-catenin in the haematopoietic system has never been tested directly. This thesis project investigates the function of β-catenin through its inducible deletion via the Cre-loxP system. Surprisingly, we were able to show that β-catenin-deficient bone marrow progenitors show no alteration in their capacity to self-renew and/or to reconstitute all haematopoietic lineages (myeloid, erythroid and lymphoid) in chimeric mice. Moreover, thymocyte development and survival, as well as antigen-induced proliferation of peripheral T cells, are independent of β-catenin. These results suggest either that β-catenin does not play an essential role in the haematopoietic system, or that its absence may be compensated for by another protein. A prime candidate to substitute for β-catenin is plakoglobin, also known as γ-catenin; indeed, these two proteins share multiple structural features. To demonstrate that γ-catenin can compensate for the absence of β-catenin, we generated mice whose haematopoietic system is deficient for both proteins. This combined deficiency of β-catenin and γ-catenin does not perturb the self-renewal capacity of long-term haematopoietic stem cells (LT-HSCs), but it does affect an early, already differentiated bone marrow progenitor. These results show that γ-catenin is able to compensate for the absence of β-catenin in the haematopoietic system. This work therefore contributes to a better understanding of the Wnt cascade in haematopoiesis. Summary: The canonical Wnt signal transduction pathway is developmentally highly conserved. Wnts are secreted molecules which bind to the family of Frizzled receptors in a complex with the low-density lipoprotein receptor-related protein (LRP-5/6). This initial activation step leads to the stabilization and accumulation of β-catenin, first in the cytoplasm and subsequently in the nucleus, where it forms heterodimers with TCF/LEF transcription factor family members. Wnt signalling has been shown to be important during early lymphopoiesis and has, more recently, been suggested to be a key player in the self-renewal of haematopoietic stem cells (HSCs). Although mostly gain-of-function studies indicate that components of the Wnt signalling pathway can influence the haematopoietic system, the role of β-catenin has never been directly investigated.
The aim of this thesis project is to investigate the putatively critical role of β-catenin in vivo using the Cre-loxP mediated conditional loss-of-function approach. Surprisingly, β-catenin-deficient bone marrow (BM) progenitors are not impaired in their ability to self-renew and/or to reconstitute all haematopoietic lineages (myeloid, erythroid and lymphoid) in both mixed and straight bone marrow chimeras. In addition, both thymocyte development and survival, and antigen-induced proliferation of peripheral T cells, are β-catenin independent. Our results do not necessarily exclude the possibility of an important function for β-catenin-mediated Wnt signalling in the haematopoietic system; rather, they raise the possibility that β-catenin is compensated for by another protein. A prime candidate that may take over the function of β-catenin in its absence is the close relative plakoglobin, also known as γ-catenin. This protein shares multiple structural features with β-catenin. In order to investigate whether γ-catenin can compensate for the loss of β-catenin, we have generated mice in which the haematopoietic compartment is deficient for both proteins. Combined deficiency of β-catenin and γ-catenin does not perturb long-term haematopoietic stem cell (LT-HSC) self-renewal, but affects an already lineage-committed progenitor population within the BM. Our results demonstrate that γ-catenin can indeed compensate for the loss of β-catenin within the haematopoietic system.
Abstract:
There is a debate on whether the influence of biotic interactions on species distributions can be detected at macro-scale levels. Whereas the influence of biotic interactions on spatial arrangements is beginning to be studied at local scales, similar studies at macro-scale levels are scarce. There is no example disentangling the influence of predator-prey interactions on species distributions at macro-scale levels from other similarities with related species. In this study we aimed to disentangle predator-prey interactions from species distribution data following an experimental approach with a factorial design. As a case study we selected the short-toed eagle because of its known specialization on certain prey reptiles. We used presence-absence data at a 100 km² spatial resolution to extract the explanatory capacity of different environmental predictors (five abiotic and two biotic predictors) for the short-toed eagle's distribution in peninsular Spain. Abiotic predictors were relevant climatic and topographic variables, and the biotic predictors were prey richness and forest density. In addition to the short-toed eagle, we also obtained the predictors' explanatory capacities for i) species of the same family, Accipitridae (as references), ii) other birds of different families (as controls), and iii) species with randomly selected presences (as null models). We ran 650 models to test for similarities of the short-toed eagle, controls and null models with the reference species, assessed by regressions of explanatory capacities. We found higher similarities between the short-toed eagle and other species of the family Accipitridae than for the other two groups. Once corrected for the family effect, our analyses revealed a signal of predator-prey interaction embedded in the species distribution data. This result was corroborated by additional analyses testing for differences in the concordance between the distributions of different bird categories and the distributions of either prey or non-prey species of the short-toed eagle. Our analyses were useful for disentangling a signal of predator-prey interactions from species distribution data at a macro-scale. This study highlights the importance of disentangling specific features from the variation shared with a given taxonomic level.
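A minimal sketch of the kind of presence-absence modelling involved, assuming synthetic grid-cell data and a logistic-regression model (the study's 650-model factorial design and its specific predictors are not reproduced): it compares the explanatory capacity of an abiotic-only model with one that adds the biotic predictors.

```python
# Compare explanatory capacity (here, test-set AUC) of abiotic-only versus
# abiotic + biotic predictors on synthetic presence-absence data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_cells = 2000                                   # 10 x 10 km grid cells

abiotic = rng.normal(size=(n_cells, 5))          # climate + topography
prey_richness = rng.normal(size=(n_cells, 1))
forest_density = rng.normal(size=(n_cells, 1))
biotic = np.hstack([prey_richness, forest_density])

# Synthetic presences influenced by both an abiotic and a biotic predictor.
logit = 0.8 * abiotic[:, 0] + 1.2 * prey_richness[:, 0] - 0.5
presence = rng.random(n_cells) < 1 / (1 + np.exp(-logit))

def explanatory_capacity(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

auc_abiotic = explanatory_capacity(abiotic, presence)
auc_full = explanatory_capacity(np.hstack([abiotic, biotic]), presence)
print(f"abiotic only AUC: {auc_abiotic:.2f}, abiotic + biotic AUC: {auc_full:.2f}")
```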
Abstract:
An automatic system was designed to concurrently measure stage and discharge for the purpose of developing stage-discharge ratings and high-flow hydrographs on small streams. Stage, or gage height, is recorded by an analog-to-digital recorder, and discharge is determined by the constant-rate tracer-dilution method. The system measures flow above a base stage set by the user. To test the effectiveness of the system and its components, eight systems, with a variety of equipment, were installed at crest-stage gaging stations across Iowa. A fluorescent dye, rhodamine WT, was used as the tracer. Tracer-dilution discharge measurements were made during 14 flow periods at six stations during the 1986 through 1988 water years. Ratings were developed at three stations with the aid of these measurements. A loop rating was identified at one station during rapidly changing flow conditions. Incomplete mixing and dye loss to sediment apparently were problems at some stations. Stage hydrographs were recorded for 38 flows at seven stations. Limited data on background fluorescence during high flows were also obtained.
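A back-of-the-envelope sketch of the constant-rate tracer-dilution computation such a system relies on (the numbers are made up, not from the report): at the downstream plateau, mass balance gives Q = q(C1 − C2)/(C2 − C0), with q the injection rate, C1 the injected dye concentration, C2 the plateau concentration and C0 the background concentration.

```python
# Constant-rate injection dilution gaging: Q = q * (C1 - C2) / (C2 - C0).
def tracer_dilution_discharge(q_inj, c_injected, c_plateau, c_background=0.0):
    """Return stream discharge in the same volumetric units as q_inj."""
    return q_inj * (c_injected - c_plateau) / (c_plateau - c_background)

q_inj = 2.0e-5                    # injection rate, m^3/s (about 1.2 L/min)
c_injected = 2.0e7                # injected rhodamine-WT concentration, ug/L
c_plateau = 100.0                 # measured plateau concentration, ug/L
c_background = 0.1                # background fluorescence, ug/L

q = tracer_dilution_discharge(q_inj, c_injected, c_plateau, c_background)
print(f"Discharge ~ {q:.1f} m^3/s")
```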
Abstract:
This research project looked at the economic benefits and costs associated with alternative strategies for abandoning low-volume rural highways and bridges. Three test counties in Iowa were studied, each 100 square miles in size: Hamilton County, with a high agricultural tax base, a high percentage of paved roads, and few bridges; Shelby County, with a relatively low agricultural tax base, hilly terrain, a low percentage of paved roads, and many bridges; and Linn County, with a high agricultural tax base, a high percentage of paved roads, and a large number of non-farm households. A questionnaire survey was undertaken to develop estimates of farm and household travel patterns. Benefits and costs associated with the abandonment of various segments of rural highway and bridge mileage in each county were calculated. "Benefits" were reduced future reconstruction and maintenance costs, whereas "costs" were the added cost of travel resulting from the reduced mileage. The findings suggest limited cost savings from the abandonment of county roads with no property access in areas with a large non-farm rural population; relatively high cost savings from the abandonment of roads with no property access in areas with a small rural population; and the largest savings from the conversion of public dead-end gravel roads with property or residence access to private drives.
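A toy version of this benefit-cost comparison, with purely illustrative numbers and no discounting: benefits are the avoided reconstruction and maintenance costs of an abandoned segment, and costs are the added travel it imposes on the farms and households that used it.

```python
# Illustrative net-benefit arithmetic for abandoning one road segment.
def net_benefit(avoided_reconstruction, annual_maintenance, years,
                extra_vehicle_miles_per_year, cost_per_vehicle_mile):
    benefits = avoided_reconstruction + annual_maintenance * years
    costs = extra_vehicle_miles_per_year * cost_per_vehicle_mile * years
    return benefits - costs

# Hypothetical one-mile dead-end gravel segment evaluated over 20 years.
print(net_benefit(avoided_reconstruction=60_000, annual_maintenance=3_000,
                  years=20, extra_vehicle_miles_per_year=4_000,
                  cost_per_vehicle_mile=0.60))
```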
Abstract:
The present study explores the statistical properties of a randomization test based on the random assignment of the intervention point in a two-phase (AB) single-case design. The focus is on randomization distributions constructed from the values of the test statistic for all possible random assignments and used to obtain p-values. The shape of these distributions is investigated for each specific data division defined by the moment at which the intervention is introduced. Another aim of the study was to test for the detection of nonexistent effects (i.e., the production of false alarms) in autocorrelated data series, in which the assumption of exchangeability between observations may be untenable. In this way, it was possible to compare nominal and empirical Type I error rates in order to obtain evidence on the statistical validity of the randomization test for each individual data division. The results suggest that when either of the two phases has considerably fewer measurement times, Type I errors may be too likely and, hence, the decision-making process carried out by applied researchers may be jeopardized.
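A minimal sketch of such a randomization test (not the study's code; the minimum phase length and the mean-difference statistic are assumptions): every admissible intervention point defines one data division, and the p-value is the proportion of divisions whose statistic is at least as extreme as the observed one.

```python
# Randomization test for an AB single-case design based on the random
# assignment of the intervention point.
import numpy as np

def ab_randomization_test(series, actual_start, min_phase_len=3):
    """series: 1-D measurements; actual_start: index where phase B began."""
    series = np.asarray(series, dtype=float)
    n = len(series)

    def stat(start):
        # Test statistic: difference between phase B and phase A means.
        return series[start:].mean() - series[:start].mean()

    observed = stat(actual_start)
    candidate_starts = range(min_phase_len, n - min_phase_len + 1)
    stats = np.array([stat(s) for s in candidate_starts])
    p_value = np.mean(np.abs(stats) >= abs(observed))
    return observed, p_value

data = [3, 4, 3, 5, 4, 7, 8, 7, 9, 8]      # fabricated AB series
obs, p = ab_randomization_test(data, actual_start=5)
print(f"observed effect = {obs:.2f}, randomization p = {p:.3f}")
```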
Abstract:
The Iowa State Highway Commission purchased a Conrad automatic freeze-and-thaw machine and placed it in operation during October 1961. There were a few problems, but considering the many electrical and mechanical devices used in the automatic system, it has always functioned quite well. Rapid freezing and thawing of 4"x4"x18" concrete beams has been conducted primarily in accordance with ASTM C-291 (now ASTM C-666, Procedure B) at the rate of one beam per day. Over 4,000 beams have been tested since 1961, with determination of the resulting durability factors. Various methods of curing were used, and a standard 90-day moist cure was selected. This cure seemed to yield durability factors that correlated very well with ratings of coarse aggregates based on service records. Some concrete beams had been made using the same coarse aggregate, and their durability factors compared relatively well with previous tests. Durability factors seemed to yield reasonable results until large variations were noted among beams of identical concrete mix proportions in research projects R-234 and R-247. This raises the question, "How reliable is the durability factor as determined by ASTM C-666?" The question became increasingly important when a specification requiring a minimum durability factor for P.C. concrete made from coarse aggregates was incorporated into the 1972 Standard Specification for coarse aggregates for concrete.
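For orientation, the durability-factor arithmetic behind ASTM C-666 can be sketched as follows (a simplified reconstruction; the standard itself is normative): the relative dynamic modulus is estimated from the change in the beam's fundamental transverse frequency, and the durability factor scales it by the fraction of the planned cycles completed.

```python
# Relative dynamic modulus after c cycles: Pc = 100 * (n_c / n_0)**2,
# where n is the beam's fundamental transverse frequency.
# Durability factor: DF = P * N / M, with N the cycles at termination and
# M the planned number of cycles (commonly 300).
def relative_dynamic_modulus(freq_now_hz, freq_initial_hz):
    return 100.0 * (freq_now_hz / freq_initial_hz) ** 2

def durability_factor(p_at_termination, cycles_at_termination, planned_cycles=300):
    return p_at_termination * cycles_at_termination / planned_cycles

# Example: frequency fell from 2000 Hz to 1600 Hz when the test ended at 300 cycles.
p = relative_dynamic_modulus(1600, 2000)          # 64 %
print(f"P = {p:.0f} %, DF = {durability_factor(p, 300):.0f}")
```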
Abstract:
This paper demonstrates a novel distributed architecture to facilitate the acquisition of Language Resources. We build a factory that automates the stages involved in the acquisition, production, updating and maintenance of these resources. The factory is designed as a platform where functionalities are deployed as web services, which can be combined into complex acquisition chains using workflows. We present a case study that acquires a Translation Memory for a given language pair and domain using web services for crawling, sentence alignment and conversion to TMX.
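An illustrative sketch of such an acquisition chain, with three stand-in services (crawl, align, convert) composed into a workflow that ends in TMX; the endpoints and interfaces are hypothetical, not the paper's actual web services.

```python
# Toy workflow: crawl -> sentence alignment -> TMX serialization.
import xml.etree.ElementTree as ET

def crawl(domain_url, lang_pair):
    # Stand-in for a hypothetical crawling service; returns parallel segments.
    return [("Hola món.", "Hello world.")]

def align(segment_pairs):
    # Stand-in for a hypothetical sentence-alignment service.
    return list(segment_pairs)

def to_tmx(aligned_pairs, src_lang, tgt_lang):
    tmx = ET.Element("tmx", {"version": "1.4"})
    ET.SubElement(tmx, "header", {
        "srclang": src_lang, "datatype": "plaintext", "segtype": "sentence",
        "adminlang": "en", "o-tmf": "none",
        "creationtool": "sketch", "creationtoolversion": "0.1"})
    body = ET.SubElement(tmx, "body")
    for src, tgt in aligned_pairs:
        tu = ET.SubElement(body, "tu")
        for lang, text in ((src_lang, src), (tgt_lang, tgt)):
            tuv = ET.SubElement(tu, "tuv", {"xml:lang": lang})
            ET.SubElement(tuv, "seg").text = text
    return ET.tostring(tmx, encoding="unicode")

pairs = align(crawl("http://example.org/legal", ("ca", "en")))
print(to_tmx(pairs, "ca", "en"))
```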
Abstract:
Arterial Spin Labeling (ASL) is a method to measure perfusion using magnetically labeled blood water as an endogenous tracer. Being fully non-invasive, the technique is attractive for longitudinal studies of cerebral blood flow in healthy and diseased individuals, or as a surrogate marker of metabolism. So far, ASL has been restricted mostly to specialist centers due to the generally low SNR of the method and potential issues with the user-dependent analysis needed to obtain quantitative measurements of cerebral blood flow (CBF). Here, we evaluated a particular implementation of ASL (called Quantitative STAR labeling of Arterial Regions, or QUASAR), a method providing user-independent quantification of CBF, in a large test-retest study across sites from around the world, dubbed "The QUASAR reproducibility study". Altogether, 28 sites located in Asia, Europe and North America participated, and a total of 284 healthy volunteers were scanned. Minimal operator dependence was assured by using an automatic planning tool, whose accuracy and potential usefulness in multi-center trials was also evaluated. Accurate repositioning between sessions was achieved with the automatic planning tool, showing mean displacements of 1.87±0.95 mm and rotations of 1.56±0.66 degrees. Mean gray matter CBF was 47.4±7.5 ml/100 g/min, with a between-subject standard deviation SD(b) = 5.5 ml/100 g/min and a within-subject standard deviation SD(w) = 4.7 ml/100 g/min. The corresponding repeatability was 13.0 ml/100 g/min, within the range of previous studies.
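A sketch of how the quoted test-retest summary statistics can be computed from a subjects-by-sessions matrix of gray-matter CBF values (synthetic data below; 284 subjects and two sessions assumed): the within-subject SD is pooled across subjects, and repeatability is conventionally 1.96·√2·SD(w) ≈ 2.77·SD(w), which reproduces the reported 13.0 ml/100 g/min for SD(w) = 4.7.

```python
# Test-retest decomposition: between-subject SD, within-subject SD,
# and repeatability, on a synthetic subjects x sessions CBF matrix.
import numpy as np

rng = np.random.default_rng(2)
n_subjects, n_sessions = 284, 2
true_cbf = rng.normal(47.4, 5.5, size=(n_subjects, 1))        # between-subject spread
cbf = true_cbf + rng.normal(0.0, 4.7, size=(n_subjects, n_sessions))

subject_means = cbf.mean(axis=1)
sd_w = np.sqrt(cbf.var(axis=1, ddof=1).mean())                # within-subject SD
sd_b = np.sqrt(max(subject_means.var(ddof=1) - sd_w**2 / n_sessions, 0.0))
repeatability = 1.96 * np.sqrt(2) * sd_w

print(f"mean CBF {cbf.mean():.1f}, SD_b {sd_b:.1f}, SD_w {sd_w:.1f}, "
      f"repeatability {repeatability:.1f} ml/100 g/min")
```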
Abstract:
The coupling between topography, waves and currents in the surf zone may self-organize to produce the formation of shore-transverse or shore-oblique sand bars on an otherwise alongshore-uniform beach. In the absence of shore-parallel bars, this has been shown by previous studies of linear stability analysis, and it is now extended to the finite-amplitude regime. To this end, a nonlinear model coupling wave transformation and breaking, a shallow-water equations solver, sediment transport and bed updating is developed. The sediment flux consists of a stirring factor multiplied by the depth-averaged current plus a downslope correction. It is found that the cross-shore profile of the ratio of stirring factor to water depth, together with the wave incidence angle, primarily determines the shape and the type of bars, either transverse or oblique to the shore. In the latter case, they can open an acute angle against the current (up-current oriented) or with the current (down-current oriented). At the initial stages of development, both the intensity of the instability responsible for the formation of the bars and the damping due to downslope transport grow at a similar rate with bar amplitude, the former being somewhat stronger. As the bars keep growing, their finite-amplitude shape either enhances downslope transport or weakens the instability mechanism, so that an equilibrium between the two opposing tendencies occurs, leading to a final saturated amplitude. The overall shape of the saturated bars in plan view is similar to that of the small-amplitude ones. However, the final spacings may be up to a factor of 2 larger, and final celerities can also be about a factor of 2 smaller or larger. In the case of alongshore-migrating bars, the asymmetry of the longshore sections, the lee being steeper than the stoss, is well reproduced. Complex dynamics with merging and splitting of individual bars sometimes occur. Finally, in the case of shore-normal incidence, the rip currents in the troughs between the bars are jet-like, while the onshore return flow is wider and weaker, as is observed in nature.
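Schematically, the transport and bed-updating ingredients described above can be written as below; the symbols and the exact form of the downslope term are an illustrative reconstruction rather than the paper's formulation (α is the stirring factor, v the depth-averaged current, γ the downslope coefficient, z_b the bed level, p the bed porosity).

```latex
% Schematic sediment-flux and bed-evolution (sediment mass conservation) relations.
\[
  \vec{q} \;=\; \alpha\,\bigl(\vec{v} \;-\; \gamma\,\nabla z_b\bigr),
  \qquad
  (1 - p)\,\frac{\partial z_b}{\partial t} \;+\; \nabla\!\cdot\vec{q} \;=\; 0 .
\]
% The cross-shore profile of alpha/D (stirring factor over water depth D),
% together with the wave incidence angle, then controls which bar patterns grow.
```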
Abstract:
Alzheimer’s disease (AD) is the most prevalent form of progressive degenerative dementia, and it has a high socio-economic impact in Western countries; it is therefore one of the most active research areas today. Its diagnosis is sometimes made by excluding other dementias, and definitive confirmation requires a post-mortem study of the patient's brain tissue. The purpose of this paper is to contribute to the improvement of early diagnosis of AD and the assessment of its degree of severity through automatic analysis performed by non-invasive intelligent methods. The methods selected in this case are Automatic Spontaneous Speech Analysis (ASSA) and Emotional Temperature (ET), which have the great advantages of being non-invasive, low cost and free of side effects.
Abstract:
On three occasions, unusually high trough plasma concentrations of venlafaxine were measured in a patient phenotyped and genotyped as an extensive CYP2D6 metabolizer who was receiving 450 mg/day of venlafaxine and multiple comedications. Values of 1.54 and 0.60 mg/l of venlafaxine and O-desmethylvenlafaxine, respectively, were determined in the first blood sample, giving an unusually high venlafaxine to O-desmethylvenlafaxine ratio. This suggests impaired metabolism of venlafaxine to O-desmethylvenlafaxine, most likely due to metabolic interactions with mianserin (240 mg/day) and propranolol (40 mg/day). The concentration of (S)-venlafaxine measured in this blood sample was almost twice as high as that of (R)-venlafaxine ((S)/(R) ratio: 1.94). At the second blood sampling, after the addition of thioridazine (260 mg/day), a strong CYP2D6 inhibitor, concentrations of venlafaxine had further increased (2.76 mg/l) and concentrations of O-desmethylvenlafaxine had decreased (0.22 mg/l). A decrease of the (S)/(R)-venlafaxine ratio (-20%) suggests a possible stereoselectivity towards the (R)-enantiomer of the enzyme(s) involved in venlafaxine O-demethylation at these high venlafaxine concentrations. At the third blood sampling, after interruption of thioridazine, concentrations of venlafaxine and O-desmethylvenlafaxine were similar to those measured in the first blood sample. This case report shows the importance of performing studies on the effects of either genetically determined or acquired deficiencies of metabolism on the kinetics of venlafaxine.
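The metabolic ratios discussed in this report follow directly from the quoted concentrations; a quick computation (values taken from the text above):

```python
# Venlafaxine / O-desmethylvenlafaxine (VEN/ODV) metabolic ratio before and
# after adding the CYP2D6 inhibitor thioridazine (concentrations in mg/l).
samples = {
    "sample 1 (mianserin + propranolol)": (1.54, 0.60),
    "sample 2 (+ thioridazine)":          (2.76, 0.22),
}
for label, (ven, odv) in samples.items():
    print(f"{label}: VEN/ODV metabolic ratio = {ven / odv:.1f}")
```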
Abstract:
Seasonal variations in ground temperature and moisture content influence the load carrying capacity of pavement subgrade layers. To improve pavement performance, pavement design guidelines require knowledge of environmental factors and subgrade stiffness relationships. As part of this study, in-ground instrumentation was installed in the pavement foundation layers of a newly constructed section along US Highway 20 near Fort Dodge, Iowa, to monitor the seasonal variations in temperature, frost depth, groundwater levels, and moisture regime. Dynamic cone penetrometer (DCP), nuclear gauge, and Clegg hammer tests were performed at 64 test points in a 6-ft x 6-ft grid pattern to characterize the subgrade stiffness properties (i.e., resilient modulus) prior to paving. The purpose of this paper is to present the field instrumentation results and the observed changes in soil properties due to seasonal environmental effects.
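Subgrade stiffness estimates are often derived from DCP results through empirical correlations; the sketch below uses two widely cited ones (the ASTM D6951 relation from DCP penetration index to CBR, and the MEPDG default from CBR to resilient modulus), which are not necessarily the relationships used in this study.

```python
# Common empirical chain: DCP penetration index (mm/blow) -> CBR -> Mr.
def cbr_from_dcp(dcp_index_mm_per_blow):
    return 292.0 / dcp_index_mm_per_blow ** 1.12        # ASTM D6951 correlation

def resilient_modulus_psi(cbr):
    return 2555.0 * cbr ** 0.64                         # MEPDG default correlation

dcpi = 25.0                                             # mm/blow, illustrative value
cbr = cbr_from_dcp(dcpi)
print(f"DCPI {dcpi} mm/blow -> CBR {cbr:.1f} -> Mr {resilient_modulus_psi(cbr):.0f} psi")
```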
Abstract:
The value of earmarks as an efficient means of personal identification is still subject to debate. It has been argued that the field lacks a firm, systematic and structured data basis to help practitioners form their conclusions. Typically, there is a paucity of research providing guidance as to the selectivity of the features used in the comparison process between an earmark and reference earprints taken from an individual. This study proposes a system for the automatic comparison of earprints and earmarks, operating without any manual extraction of key points or manual annotations. For each donor, a model is created using multiple reference prints, hence capturing the donor's within-source variability. For each comparison between a mark and a model, images are automatically aligned and a proximity score, based on a normalized 2D correlation coefficient, is calculated. Appropriate use of this score allows deriving a likelihood ratio that can be explored under known states of affairs (both in cases where it is known that the mark was left by the donor who provided the model and, conversely, in cases where it is established that the mark originates from a different source). To assess system performance, a first dataset containing 1229 donors, compiled during the FearID research project, was used. Based on these data, for mark-to-print comparisons the system performed with an equal error rate (EER) of 2.3%, and about 88% of marks were found in the first 3 positions of the hit list. For print-to-print comparisons, results show an equal error rate of 0.5%. The system was then tested using real-case data obtained from police forces.
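A minimal sketch of the proximity score described above, assuming the mark and reference prints are already aligned (the alignment and likelihood-ratio steps of the actual system are not reproduced): a normalized 2-D correlation coefficient is computed against each print in the donor model, and the maximum is taken as the mark-to-model score.

```python
# Normalized 2-D correlation between a mark image and each reference print
# of a donor model; the best match is the mark-to-model proximity score.
import numpy as np

def normalized_correlation(a, b):
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def mark_to_model_score(mark, model_prints):
    return max(normalized_correlation(mark, print_) for print_ in model_prints)

rng = np.random.default_rng(3)
donor_prints = [rng.random((64, 64)) for _ in range(3)]        # reference prints
mark = donor_prints[0] + 0.3 * rng.random((64, 64))            # noisy mark from donor
stranger_mark = rng.random((64, 64))                           # mark from another source

print("same-source score:     ", round(mark_to_model_score(mark, donor_prints), 3))
print("different-source score:", round(mark_to_model_score(stranger_mark, donor_prints), 3))
```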