941 results for Algorithms, Properties, the KCube Graphs
Abstract:
This Master's thesis describes the development of the design and testing environment for the user interface software of Nokia Mobile Phones' mobile phones. Two software modules were added to the environment to assist simulation and version control. With the visualization tool, the behaviour of a mobile phone can be traced onto the design diagrams as state transitions, while the comparison application shows the differences between diagrams graphically. The developed applications improve the user interface design process by making error tracking, optimization, and version control more efficient. The benefits of the visualization tool are significant, because the behaviour of the user interface applications can be observed in the design diagrams during real-time simulation, so errors can be located immediately. In addition, the tool can be used when optimizing the diagrams, which reduces application size and memory requirements. The graphical comparison tool benefits concurrent software development: the differences between different versions of design diagrams can be seen directly in the diagram instead of through manual comparison. Both tools were successfully taken into use at NMP at the beginning of 2001.
Abstract:
Among instruments measuring spiritual well-being, the Functional Assessment of Chronic Illness Therapy-Spiritual well-being (FACIT-Sp-12) is the most widely used in research. It has been validated in patients suffering from cancer or HIV/AIDS, but has rarely been used in elderly patients. The objectives of this study were to determine the psychometric properties and suitability of the FACIT-Sp to assess spiritual well-being in hospitalized elderly patients. This cross-sectional study uses a mixed-method approach. Subjects were patients (N = 208), aged 65 years and older, consecutively admitted to post-acute rehabilitation. Psychometric properties of the FACIT-Sp were investigated. The internal structure of the FACIT-Sp (factor structure and internal consistency) was assessed. Convergent validity of the FACIT-Sp was assessed using the Spiritual Distress Assessment Tool (SDAT), the question "Are you at peace?", and the Geriatric Depression Scale (GDS). Predictive validity was assessed using length of stay (LOS) and discharge destination. Understanding and interpretation of FACIT-Sp items were consecutively assessed in a sub-sample of 135 patients. Results show that FACIT-Sp scores ranged from 7 to 46 (mean 29.6 ± 7.8); 23.1% of the patients had high spiritual well-being. Cronbach's α was good (0.85). Item-to-total correlations were all significant (0.34 to 0.73). Principal component analyses performed with 2 or 3 factors were only moderately consistent with previous work. FACIT-Sp scores correlated with SDAT, "Are you at peace?", and GDS (Rho = −0.45, P < 0.001; 0.51, P < 0.001; and −0.38, P < 0.001, respectively). No association was found with LOS or discharge destination. Spontaneous comments about one or more FACIT-Sp items were made by 97/135 patients (71.9%). Specifically, items that address purpose and meaning in life were frequently found difficult to answer. Analyses suggest that the FACIT-Sp may underestimate spiritual well-being in older patients. In conclusion, despite having acceptable psychometric properties, the FACIT-Sp presents limitations for the measurement of spiritual well-being in hospitalized elderly patients.
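For reference, the internal-consistency figure quoted above follows the standard Cronbach's α formula for a k-item scale (generic formula, not specific to these data):

\[
\alpha \;=\; \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{i}}{\sigma^{2}_{\mathrm{total}}}\right),
\]

where σ²ᵢ is the variance of item i and σ²_total the variance of the summed FACIT-Sp-12 score (here k = 12); a value of 0.85 is conventionally read as good internal consistency.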
Abstract:
Concentration gradients provide spatial information for tissue patterning and cell organization, and their robustness under natural fluctuations is an evolutionary advantage. In rod-shaped Schizosaccharomyces pombe cells, gradients of the DYRK-family kinase Pom1 control cell division timing and placement. Upon dephosphorylation by a Tea4-phosphatase complex, Pom1 associates with the plasma membrane at cell poles, where it diffuses and detaches upon auto-phosphorylation. Here, we demonstrate that Pom1 auto-phosphorylates intermolecularly, both in vitro and in vivo, which confers robustness to the gradient. Quantitative imaging reveals this robustness through two system properties: the Pom1 gradient amplitude is inversely correlated with its decay length and is buffered against fluctuations in Tea4 levels. A theoretical model of Pom1 gradient formation through intermolecular auto-phosphorylation predicts both properties qualitatively and quantitatively. This provides a telling example in which gradient robustness through super-linear decay, a principle hypothesized a decade ago, is achieved through autocatalysis. Concentration-dependent autocatalysis may be a widely used, simple feedback to buffer biological activities.
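As an illustrative minimal model (ours, not the authors' full model) of how super-linear decay buffers a gradient: if membrane-bound Pom1 diffuses with coefficient D and detaches at a rate proportional to its local concentration squared (intermolecular auto-phosphorylation), the steady-state profile c(x) away from the pole satisfies

\[
D\,\frac{d^{2}c}{dx^{2}} = k\,c^{2}
\quad\Longrightarrow\quad
c(x) = \frac{6D}{k\,(x+x_{0})^{2}},
\]

so far from the source the profile approaches 6D/(k x²) independently of the boundary flux: the amplitude only sets the offset x₀, consistent with the reported inverse correlation between amplitude and decay length and with buffering against fluctuations in Tea4 levels.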
Abstract:
BACKGROUND: Lung clearance index (LCI), a marker of ventilation inhomogeneity, is elevated early in children with cystic fibrosis (CF). However, in infants with CF, LCI values are found to be normal, although structural lung abnormalities are often detectable. We hypothesized that this discrepancy is due to inadequate algorithms in the available software package. AIM: Our aim was to challenge the validity of these software algorithms. METHODS: We compared multiple breath washout (MBW) results of the current software algorithms (automatic modus) to refined algorithms (manual modus) in 17 asymptomatic infants with CF and 24 matched healthy term-born infants. The main difference between these two analysis methods lies in the calculation of the molar mass differences that the system uses to define the completion of the measurement. RESULTS: In infants with CF, the refined manual modus revealed clearly elevated LCI above 9 in 8 out of 35 measurements (23%), all of which showed LCI values below 8.3 using the automatic modus (paired t-test comparing the means, P < 0.001). Healthy infants showed normal LCI values with both analysis methods (n = 47, paired t-test, P = 0.79). The most relevant reason for falsely normal LCI values in infants with CF using the automatic modus was that the end-of-test was incorrectly recognized too early during the washout. CONCLUSION: We recommend the use of the manual modus for the analysis of MBW outcomes in infants in order to obtain more accurate results. This will allow appropriate use of infant lung function results for clinical and scientific purposes. Pediatr Pulmonol. 2015; 50:970-977. © 2015 Wiley Periodicals, Inc.
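For context (conventional MBW definition, stated here for orientation rather than taken from this paper), LCI is the number of functional residual capacity (FRC) turnovers needed to clear the tracer gas:

\[
\mathrm{LCI} = \frac{\mathrm{CEV}}{\mathrm{FRC}},
\]

where CEV is the cumulative expired volume up to the point where the end-tidal tracer concentration has fallen to 1/40 of its starting value. Recognizing the end-of-test too early therefore truncates CEV and biases LCI toward falsely normal values, which is the failure mode described above.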
Abstract:
Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we face many problems, ranging from the prospection of new resources to the sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem considered, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of geostatistical realizations. The main limitation of these approaches is the computational cost of performing complex flow simulations for each realization.

In the first part of the thesis, this issue is explored in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods use approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run only for this subset, on the basis of which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine-learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the remaining approximate responses to predict the "expected" responses of the exact model. The proposed methodology uses all the available information without perceptible additional computational cost and makes the uncertainty propagation more accurate and more robust.

The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between the proxy and exact flow responses. In the second part of the thesis, this strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, its dimensionality must be reduced. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows the quality of the error model to be diagnosed in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. The individual correction of each proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications.

The concept of a functional error model is useful not only for uncertainty propagation, but also, and perhaps even more so, for Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, in which the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy coupled to an error model provides the preliminary evaluation for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of 1.5 to 3 with respect to a classical one-stage MCMC implementation. An open question remains: how to choose the size of the learning set and how to identify the realizations that optimize the construction of the error model. This requires an iterative strategy so that, as new flow simulations are performed, the error model is improved by incorporating the new information. This is developed in the fourth part of the thesis, where the methodology is applied to a saline-intrusion problem in a coastal aquifer.
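A minimal sketch of the kind of error model described above (our illustration, using plain PCA in place of the thesis's FPCA machinery and hypothetical array shapes): on the training subset where both proxy and exact curves are known, learn a regression from proxy principal-component scores to exact scores, then correct the remaining proxy curves.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def fit_error_model(proxy_train, exact_train, n_components=5):
    """Learn a score-to-score regression between proxy and exact response curves.

    proxy_train, exact_train: arrays of shape (n_train, n_times) holding the
    discretized response curves of the training realizations (hypothetical layout).
    """
    pca_proxy = PCA(n_components=n_components).fit(proxy_train)
    pca_exact = PCA(n_components=n_components).fit(exact_train)
    reg = LinearRegression().fit(pca_proxy.transform(proxy_train),
                                 pca_exact.transform(exact_train))
    return pca_proxy, pca_exact, reg

def correct(proxy_new, pca_proxy, pca_exact, reg):
    """Predict 'exact-like' curves for realizations where only the proxy was run."""
    scores = reg.predict(pca_proxy.transform(proxy_new))
    return pca_exact.inverse_transform(scores)

# Usage sketch (shapes are placeholders):
# proxy_all: (1000, 200) proxy curves for the full ensemble
# proxy_sub, exact_sub: (50, 200) curves for the training subset
# pca_p, pca_e, reg = fit_error_model(proxy_sub, exact_sub)
# corrected = correct(proxy_all, pca_p, pca_e, reg)
```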
Abstract:
This article introduces EsPal: a Web-accessible repository containing a comprehensive set of properties of Spanish words. EsPal is based on an extensible set of data sources, beginning with a 300 million token written database and a 460 million token subtitle database. Properties available include word frequency, orthographic structure and neighborhoods, phonological structure and neighborhoods, and subjective ratings such as imageability. Subword structure properties are also available in terms of bigrams and trigrams, bi-phones, and bi-syllables. Lemma and part-of-speech information and their corresponding frequencies are also indexed. The website enables users to either upload a set of words to receive their properties, or to receive a set of words matching constraints on the properties. The properties themselves are easily extensible and will be added over time as they become available. It is freely available from the following website: http://www.bcbl.eu/databases/espal
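As a small illustration of the kind of sub-word properties listed above (bigram frequencies), here is a hedged sketch of how token-frequency-weighted bigram counts could be derived from a word frequency list; this is our toy example, not EsPal's actual pipeline or API.

```python
from collections import Counter

def bigram_frequencies(freq_list):
    """freq_list: iterable of (word, token_frequency) pairs.
    Returns a Counter mapping each bigram to its summed token frequency."""
    counts = Counter()
    for word, freq in freq_list:
        for a, b in zip(word, word[1:]):
            counts[a + b] += freq
    return counts

# Toy example with made-up frequencies:
# bigram_frequencies([("casa", 120.0), ("cosa", 95.0)])
# -> Counter({'sa': 215.0, 'ca': 120.0, 'as': 120.0, 'co': 95.0, 'os': 95.0})
```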
Abstract:
We present an algorithm for the computation of reducible invariant tori of discrete dynamical systems that is suitable for tori of dimensions larger than 1. It is based on a quadratically convergent scheme that approximates, at the same time, the Fourier series of the torus, its Floquet transformation, and its Floquet matrix. The Floquet matrix describes the linearization of the dynamics around the torus and, hence, its linear stability. The algorithm presents a high degree of parallelism, and the computational effort grows linearly with the number of Fourier modes needed to represent the solution. For these reasons it is a very good option to compute quasi-periodic solutions with several basic frequencies. The paper includes some examples (flows) to show the efficiency of the method in a parallel computer. In these flows we compute invariant tori of dimensions up to 5, by taking suitable sections.
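For orientation (standard formulation, notation ours): a d-dimensional invariant torus of a map F with rotation vector ω is a parameterization K satisfying the invariance equation, and reducibility asks for a Floquet change of variables P that makes the linearized dynamics around the torus constant,

\[
K(\theta + \omega) = F(K(\theta)), \qquad
P(\theta+\omega)^{-1}\, DF(K(\theta))\, P(\theta) = \Lambda ,
\]

where Λ is the constant Floquet matrix whose eigenvalues determine the linear stability of the torus. The algorithm described above solves simultaneously for the Fourier coefficients of K and P and for Λ with a quadratically convergent, Newton-like iteration.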
Abstract:
The CORNISH project is the highest resolution radio continuum survey of the Galactic plane to date. It is the 5 GHz radio continuum part of a series of multi-wavelength surveys that focus on the northern GLIMPSE region (10° < l < 65°), observed by the Spitzer satellite in the mid-infrared. Observations with the Very Large Array in B and BnA configurations have yielded a 1.''5 resolution Stokes I map with a root mean square noise level better than 0.4 mJy beam⁻¹. Here we describe the data-processing methods and data characteristics, and present a new, uniform catalog of compact radio emission. This includes an implementation of automatic deconvolution that provides much more reliable imaging than standard CLEANing. A rigorous investigation of the noise characteristics and reliability of source detection has been carried out. We show that the survey is optimized to detect emission on size scales up to 14'' and for unresolved sources the catalog is more than 90% complete at a flux density of 3.9 mJy. We have detected 3062 sources above a 7σ detection limit and present their ensemble properties. The catalog is highly reliable away from regions containing poorly sampled extended emission, which comprise less than 2% of the survey area. Imaging problems have been mitigated by down-weighting the shortest spacings and potential artifacts flagged via a rigorous manual inspection with reference to the Spitzer infrared data. We present images of the most common source types found: H II regions, planetary nebulae, and radio galaxies. The CORNISH data and catalog are available online at http://cornish.leeds.ac.uk.
Abstract:
UNLABELLED: Cleavage of influenza virus hemagglutinin (HA) by host cell proteases is necessary for viral activation and infectivity. In humans and mice, members of the type II transmembrane protease family (TTSP), e.g., TMPRSS2, TMPRSS4, and TMPRSS11d (HAT), have been shown to cleave influenza virus HA for viral activation and infectivity in vitro. Recently, we reported that inactivation of a single HA-activating protease gene, Tmprss2, in knockout mice inhibits the spread of H1N1 influenza viruses. However, after infection of Tmprss2 knockout mice with an H3N2 influenza virus, only a slight increase in survival was observed, and mice still lost body weight. In this study, we investigated an additional trypsin-like protease, TMPRSS4. Both TMPRSS2 and TMPRSS4 are expressed in the same cell types of the mouse lung. Deletion of Tmprss4 alone in knockout mice does not protect them from body weight loss and death upon infection with H3N2 influenza virus. In contrast, Tmprss2(-/-) Tmprss4(-/-) double-knockout mice showed a remarkably reduced virus spread and lung pathology, in addition to reduced body weight loss and mortality. Thus, our results identified TMPRSS4 as a second host cell protease that, in addition to TMPRSS2, is able to activate the HA of H3N2 influenza virus in vivo. IMPORTANCE: Influenza epidemics and recurring pandemics are responsible for significant global morbidity and mortality. Due to the high variability of the virus genome, resistance to available antiviral drugs is frequently observed, and new targets for treatment of influenza are needed. Host cell factors essential for processing of the virus hemagglutinin represent very suitable drug targets because the virus is dependent on these host factors for replication. We reported previously that Tmprss2-deficient mice are protected against H1N1 virus infections, but only marginal protection against H3N2 virus infections was observed. Here we show that deletion of two host protease genes, Tmprss2 and Tmprss4, strongly reduced viral spread as well as lung pathology and resulted in increased survival after H3N2 virus infection. Thus, TMPRSS4 represents another host cell factor that is involved in cleavage activation of H3N2 influenza viruses in vivo.
Abstract:
A new approach to mammographic mass detection is presented in this paper. Although different algorithms have been proposed for this task, most of them are application dependent. In contrast, our approach adapts a kindred topic in computer vision to our particular problem: we translate the eigenfaces approach for face detection/classification to mass detection. Two different databases were used to show the robustness of the approach. The first consisted of a set of 160 regions of interest (RoIs) extracted from the MIAS database, 40 of them containing confirmed masses and the rest normal tissue. The second set of RoIs was extracted from the DDSM database and contained 196 RoIs with masses and 392 with normal but suspicious tissue. Initial results demonstrate the feasibility of the approach, with performance comparable to other algorithms and the advantage of being more general, simple, and cost-effective.
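A minimal sketch of an eigenfaces-style pipeline of the kind described above (our illustration with hypothetical data layout, not the authors' exact implementation): project vectorized RoIs onto the leading principal components of the training set and classify in that reduced space.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

# rois: (n_rois, h*w) array of flattened, intensity-normalized RoIs
# labels: 1 = mass, 0 = normal tissue (hypothetical layout)
def train_eigen_detector(rois, labels, n_components=20):
    pca = PCA(n_components=n_components)   # "eigen-RoIs" play the role of eigenfaces
    features = pca.fit_transform(rois)     # projection onto the eigen-RoI space
    clf = KNeighborsClassifier(n_neighbors=3).fit(features, labels)
    return pca, clf

def predict(rois_new, pca, clf):
    """Classify new RoIs by projecting them into the learned eigen-RoI space."""
    return clf.predict(pca.transform(rois_new))
```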
Abstract:
Potentiometric amalgam electrodes of lead, cadmium, and zinc are proposed to study the complexation properties of commercial and river sediment humic acids. The copper complexation properties of both humic acids were studied in parallel using a solid-membrane copper ion-selective electrode (Cu-ISE). The complexing capacity and the averaged conditional stability constants were determined at pH 6.00 ± 0.05 in a 2×10⁻² mol L⁻¹ sodium nitrate medium, using the Scatchard method. The lead and cadmium amalgam electrodes showed Nernstian behavior from 1×10⁻⁵ to 1×10⁻³ mol L⁻¹ total metal concentration, allowing the complexation studies to be performed with humic acid concentrations of around 20 to 30 mg L⁻¹, which avoids colloidal aggregation. The zinc amalgam electrode showed a sub-Nernstian linear response in the same range of metal concentrations. The Scatchard plots for both humic acids suggested two classes of binding sites for lead and copper and one class of binding site for zinc and cadmium.
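For reference, the Scatchard treatment used above linearizes the binding data (standard form; symbols ours): with ν the amount of metal bound per unit of humic acid and [M] the free metal concentration,

\[
\frac{\nu}{[\mathrm{M}]} = K\,(n - \nu),
\]

so a plot of ν/[M] against ν gives a straight line (or, as here, two intersecting lines for two classes of sites) whose slope is −K and whose intercept on the ν axis is the complexing capacity n.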
Abstract:
Learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks. However, preference learning involves prediction of an ordering of the data points rather than prediction of a single numerical value, as in regression, or a class label, as in classification. Therefore, studying preference relations within a separate framework not only facilitates a better theoretical understanding of the problem, but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics, and natural language processing. For example, algorithms that learn to rank are frequently used in search engines for ordering documents retrieved by a query. Preference learning methods have also been applied to collaborative filtering problems for predicting individual customer choices from the vast amount of user-generated feedback. In this thesis we propose several algorithms for learning preference relations. These algorithms stem from the well-founded and robust class of regularized least-squares methods and have many attractive computational properties. In order to improve the performance of our methods, we introduce several non-linear kernel functions. Thus, the contribution of this thesis is twofold: kernel functions for structured data, used to take advantage of various non-vectorial data representations, and preference learning algorithms suitable for different tasks, namely efficient learning of preference relations, learning with large amounts of training data, and semi-supervised preference learning. The proposed kernel-based algorithms and kernels are applied to the parse ranking task in natural language processing, document ranking in information retrieval, and remote homology detection in bioinformatics. Training of kernel-based ranking algorithms can be infeasible when the training set is large. This problem is addressed by proposing a preference learning algorithm whose computational complexity scales linearly with the number of training data points. We also introduce a sparse approximation of the algorithm that can be efficiently trained with large amounts of data. For situations where a small amount of labeled data but a large amount of unlabeled data is available, we propose a co-regularized preference learning algorithm. To conclude, the methods presented in this thesis address not only the efficient training of the algorithms but also fast regularization parameter selection, multiple output prediction, and cross-validation. Furthermore, the proposed algorithms lead to notably better performance in many of the preference learning tasks considered.
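As a deliberately simplified illustration of the regularized least-squares flavour of preference learning described above (not the thesis's actual algorithms or kernels), one can regress on pairwise feature differences with a ridge model and rank new items by the learned score.

```python
import numpy as np
from sklearn.linear_model import Ridge

def fit_pairwise_ranker(X, preferences, alpha=1.0):
    """Least-squares ranking on pairwise differences (simplified linear sketch).

    X: (n_items, n_features) feature matrix.
    preferences: list of (i, j) pairs meaning "item i is preferred to item j".
    """
    diffs, targets = [], []
    for i, j in preferences:
        diffs.append(X[i] - X[j]); targets.append(1.0)
        diffs.append(X[j] - X[i]); targets.append(-1.0)
    model = Ridge(alpha=alpha, fit_intercept=False)
    model.fit(np.array(diffs), np.array(targets))
    return model

def rank(model, X_new):
    """Return item indices sorted from most to least preferred."""
    scores = X_new @ model.coef_
    return np.argsort(-scores)
```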
Abstract:
The purpose of this work was to study the effect of aspen and alder on birch cooking and the quality of the pulp produced. Three different birch kraft pulps were studied, with pure aspen and alder pulps included as references. The laboratory trials were done at the UPM Research Centre in Lappeenranta, Finland. The materials used were birch, aspen and alder mill chips collected around the area of South Karelia in Finland. The chips were pulped using a standard kraft process. The pulps containing birch fibres were ECF-bleached at laboratory scale to a target brightness of 85 %. The bleached pulps were beaten at low consistency with a laboratory Voith Sulzer refiner and tested for optical and physical properties. The theoretical part is a study of hardwoods that takes into account the differences between birch, aspen and alder. Major sub-areas were fibre and paper-technical properties as well as chemical composition and their influence on the different properties. The pulp properties of birch, aspen and alder found in previous studies were reported, and Russian hardwood forest resources were also investigated. The fundamentals of kraft pulping and bleaching are reviewed at the end of the theoretical part. The major effect of replacing birch with aspen and alder was a deterioration (lowering) of tensile and tear strengths. In other words, addition of aspen and alder to a birch furnish reduced strength properties. The reinforcement ability of the tested pulps was the following: 100 % birch > 80 % birch, 20 % aspen > 70 % birch, 20 % aspen, 10 % alder. The second finding was that blending birch together with aspen and alder gives better smoothness, optical properties and formation. It can be concluded that replacing more than 10 % of the birch with alder in cooking can negatively affect the paper-technical properties of birch pulp. Mixing pure birch and aspen pulps would be more beneficial when producing printing paper made from chemical pulp.
Abstract:
Signal processing methods based on the combined use of the continuous wavelet transform (CWT) and the zero-crossing technique were applied to the simultaneous spectrophotometric determination of perindopril (PER) and indapamide (IND) in tablets. These signal processing methods do not require any prior separation step. Initially, various wavelet families were tested to identify the optimum signal processing giving the best recovery results. From this procedure, the Haar and Biorthogonal 1.5 continuous wavelet transforms (HAAR-CWT and BIOR1.5-CWT, respectively) were found suitable for the analysis of the related compounds. After transformation of the absorbance vectors using HAAR-CWT and BIOR1.5-CWT, the CWT coefficients were plotted against wavelength to obtain the HAAR-CWT and BIOR1.5-CWT spectra. Calibration graphs for PER and IND were obtained by measuring the CWT amplitudes at 231.1 and 291.0 nm in the HAAR-CWT spectra and at 228.5 and 246.8 nm in the BIOR1.5-CWT spectra, respectively. To compare the performance of the HAAR-CWT and BIOR1.5-CWT approaches, a derivative spectrophotometric (DS) method and HPLC were applied to the PER-IND samples as comparison methods. In the DS method, first-derivative absorbance values at 221.6 nm for PER and 282.7 nm for IND were used to obtain the calibration graphs. The validation of the CWT and DS signal processing methods was carried out using recovery studies and the standard addition technique. These methods were then successfully applied to commercial tablets containing PER and IND, and good accuracy and precision were obtained with all proposed signal processing methods.
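A minimal sketch of the CWT-plus-calibration idea described above (our own direct convolution implementation of a Haar-shaped continuous wavelet; the scales, wavelength grid, and calibration points are placeholders, not the paper's values):

```python
import numpy as np

def haar_wavelet(n_points):
    """Discretized Haar-shaped mother wavelet: +1 on the first half, -1 on the second."""
    w = np.ones(n_points)
    w[n_points // 2:] = -1.0
    return w / np.sqrt(n_points)

def cwt_haar(signal, scales):
    """Continuous wavelet transform of a 1-D spectrum with a Haar-shaped wavelet."""
    coeffs = np.empty((len(scales), len(signal)))
    for k, s in enumerate(scales):
        coeffs[k] = np.convolve(signal, haar_wavelet(int(s)), mode="same")
    return coeffs

# Usage sketch: 'absorbance' is a mixture spectrum sampled on 'wavelengths' (nm).
# The CWT amplitude at a chosen scale and wavelength (e.g. near 231.1 nm for PER)
# would then be read off and regressed against concentration to build the
# calibration graph, in the spirit of the method described above.
# wavelengths = np.linspace(200, 320, 601)
# coeffs = cwt_haar(absorbance, scales=[8, 16, 32])
```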