928 results for Linear array
Abstract:
The choice network revenue management (RM) model incorporates customer purchase behavior as customers purchasing products with certain probabilities that are a function of the offered assortment of products, and is the appropriate model for airline and hotel network revenue management, dynamic sales of bundles, and dynamic assortment optimization. The underlying stochastic dynamic program is intractable, and even its certainty-equivalence approximation, in the form of a linear program called the Choice Deterministic Linear Program (CDLP), is difficult to solve in most cases. The separation problem for CDLP is NP-complete for MNL with just two segments when their consideration sets overlap; the affine approximation of the dynamic program is NP-complete for even a single-segment MNL. This is in contrast to the independent-class (perfect-segmentation) case, where even the piecewise-linear approximation has been shown to be tractable. In this paper we investigate the piecewise-linear approximation for network RM under a general discrete-choice model of demand. We show that the gap between the CDLP and the piecewise-linear bounds is within a factor of at most 2. We then show that the piecewise-linear approximation is polynomial-time solvable for a fixed consideration set size, bringing it into the realm of tractability for small consideration sets; small consideration sets are a reasonable modeling tradeoff in many practical applications. Our solution relies on showing that for any discrete-choice model the separation problem for the linear program of the piecewise-linear approximation can be solved exactly by a Lagrangian relaxation. We give modeling extensions and show by numerical experiments the improvements from using piecewise-linear approximation functions.
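The discrete-choice primitive referenced throughout is the multinomial logit (MNL), which assigns purchase probabilities as a function of the offered assortment. A minimal sketch (the product weights and no-purchase weight v0 are illustrative, not from the paper):

```python
def mnl_choice_probs(weights, offered, v0=1.0):
    # MNL: P(j | S) = v_j / (v0 + sum_{k in S} v_k), the probability that a
    # customer offered assortment S buys product j; v0 is the no-purchase weight.
    denom = v0 + sum(weights[j] for j in offered)
    return {j: weights[j] / denom for j in offered}
```

With two equally attractive products and v0 = 1, each product is bought with probability 1/3 and no purchase occurs with probability 1/3.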
Abstract:
Polynomial constraint solving plays a prominent role in several areas of hardware and software analysis and verification, e.g., termination proving, program invariant generation, and hybrid system verification, to name a few. In this paper we propose a new method for solving non-linear constraints based on encoding the problem into an SMT problem over linear arithmetic only. Unlike other existing methods, our method focuses on proving satisfiability of the constraints rather than on proving unsatisfiability, which is more relevant in several applications, as we illustrate with examples. Nevertheless, we also present new techniques, based on the analysis of unsatisfiable cores, that allow one to efficiently prove unsatisfiability as well for a broad class of problems. The power of our approach is demonstrated by means of extensive experiments comparing our prototype with state-of-the-art tools on benchmarks from both academia and industry.
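The flavor of instantiation-based linearization can be shown with a toy example (the constraint x*y + x >= 10 and the bounds are invented for illustration; this is not the paper's encoding): fixing one variable to a candidate value turns a polynomial constraint into a linear one that an arithmetic solver decides immediately.

```python
import math

def sat_by_instantiation(lo, hi):
    # Decide satisfiability of x*y + x >= 10 over integers with x in [lo, hi]:
    # for a fixed x = c > 0, the constraint becomes the LINEAR constraint
    # c*y >= 10 - c, whose smallest integer solution is y = ceil((10 - c) / c).
    # (Cases with c <= 0 are omitted for brevity.)
    for c in range(lo, hi + 1):
        if c > 0:
            y = math.ceil((10 - c) / c)
            return {'x': c, 'y': y}
    return None
```

Each case split replaces a non-linear constraint with a linear one, which is the core idea behind encodings into linear-arithmetic SMT.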
Abstract:
Methods for extracting features from physiological datasets are increasingly needed as clinical investigations of Alzheimer's disease (AD) in large, heterogeneous populations grow. General tools that allow diagnosis regardless of recording site, such as different hospitals, are essential and, if combined with inexpensive non-invasive methods, could critically improve mass screening of subjects with AD. In this study, we applied three state-of-the-art multiway array decomposition (MAD) methods to extract features from electroencephalograms (EEGs) of AD patients obtained from multiple sites. For comparison with MAD, spectral-spatial average filters (SSFs) of control and AD subjects were used, as well as a common blind source separation method, the algorithm for multiple unknown signal extraction (AMUSE). We trained a feed-forward multilayer perceptron (MLP) to validate and optimize AD classification from two independent databases. Using a third EEG dataset, we demonstrated that features extracted with MAD outperformed features obtained from SSFs and AMUSE in terms of root mean squared error (RMSE), reaching up to 100% accuracy in the test condition. We propose that MAD may be a useful tool for extracting features for AD diagnosis, offering good generalization across multi-site databases and opening doors to the discovery of new characterizations of the disease.
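For completeness, the comparison metric named in the abstract, RMSE, has the standard definition below (a generic sketch, not tied to the study's data):

```python
def rmse(y_true, y_pred):
    # Root mean squared error between target labels and network outputs.
    n = len(y_true)
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n) ** 0.5
```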
Abstract:
Biometric system performance can be improved by means of data fusion. Several kinds of information can be fused in order to obtain a more accurate classification (identification or verification) of an input sample. In this paper we present a method for computing the weights in a weighted-sum fusion of score combinations by means of a likelihood model. The maximum likelihood estimation is set up as a linear programming problem. The scores are derived from GMM classifiers, each working on a different feature extractor. Our experimental results assessed the robustness of the system against changes over time (different sessions) and against a change of microphone. The improvements obtained were significantly better (error bars of two standard deviations) than a uniformly weighted sum, a uniformly weighted product, or the best single classifier. The proposed method scales computationally with the number of scores to be fused as the simplex method for linear programming does.
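The weighted-sum fusion rule itself is simple; a sketch is given below (here the weights are taken as given, whereas the paper estimates them by solving a linear program):

```python
def fuse_scores(score_lists, weights):
    # Weighted-sum fusion: fused_i = sum_k w_k * s_i^(k), where s_i^(k) is
    # the score of sample i produced by classifier k.
    return [sum(w * scores[i] for w, scores in zip(weights, score_lists))
            for i in range(len(score_lists[0]))]
```

A uniformly weighted sum is the special case w_k = 1/K, which serves as the baseline the paper improves on.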
Abstract:
This paper deals with non-linear transformations for improving the performance of an entropy-based voice activity detector (VAD). The idea of using a non-linear transformation has already been applied in the field of speech linear prediction, or linear predictive coding (LPC), based on source separation techniques, where a score function is added to the classical equations in order to take into account the true distribution of the signal. We explore the possibility of estimating the entropy of frames after calculating their score function, instead of using the original frames. We observe that if the signal is clean, the estimated entropy is essentially the same; if the signal is noisy, however, frames transformed with the score function may yield entropy that differs between voiced and unvoiced frames. Experimental evidence shows that this enables voice activity detection under high noise, where the simple entropy method fails.
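An entropy-based VAD needs a per-frame entropy estimate; a common histogram-based sketch is shown below (bin count and the histogram approach are illustrative assumptions, not the paper's exact estimator):

```python
import math

def frame_entropy(frame, n_bins=16):
    # Histogram-based entropy estimate of one signal frame, in bits.
    lo, hi = min(frame), max(frame)
    width = (hi - lo) / n_bins or 1.0      # guard against a constant frame
    counts = [0] * n_bins
    for x in frame:
        counts[min(int((x - lo) / width), n_bins - 1)] += 1
    n = len(frame)
    return -sum(c / n * math.log2(c / n) for c in counts if c)
```

A constant (silent) frame has zero entropy, while a frame spread evenly over the bins approaches log2(n_bins) bits; the VAD decision thresholds this quantity.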
Abstract:
This special issue covers problems related to non-linear and non-conventional speech processing. The origin of this volume is the ISCA Tutorial and Research Workshop on Non-Linear Speech Processing, NOLISP'09, held at the Universitat de Vic (Catalonia, Spain) on June 25–27, 2009. The series of NOLISP workshops, started in 2003, has become a biennial event whose aim is to discuss alternative techniques for speech processing that, in a sense, do not fit into mainstream approaches. A selection of papers based on the presentations delivered at NOLISP'09 has given rise to this issue of Cognitive Computation.
Abstract:
The relationship between source separation and blind deconvolution is well known: if a filtered version of an unknown i.i.d. signal is observed, temporal independence between samples can be used to retrieve the original signal, in the same manner as spatial independence is used for source separation. In this paper we propose the use of a Genetic Algorithm (GA) to blindly invert linear channels. The use of a GA is justified in the case of a small number of samples, where gradient-like methods fail because of poor estimation of statistics.
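A minimal GA for channel inversion can be sketched as follows. Everything here is an illustrative assumption rather than the authors' implementation: the channel model, the fitness (distance of the deconvolved signal's kurtosis from the Gaussian value 3, since an i.i.d. non-Gaussian source has extremal kurtosis), and all population parameters.

```python
import random

def ga_invert(received, n_taps=3, pop_size=30, gens=40, seed=0):
    # Toy elitist GA searching for inverse-filter taps that maximize the
    # non-Gaussianity (|kurtosis - 3|) of the deconvolved signal.
    rng = random.Random(seed)

    def deconv(w):
        return [sum(w[k] * received[n - k] for k in range(n_taps))
                for n in range(n_taps - 1, len(received))]

    def fitness(w):
        y = deconv(w)
        m = sum(y) / len(y)
        var = sum((v - m) ** 2 for v in y) / len(y)
        if var == 0:
            return 0.0
        kurt = sum((v - m) ** 4 for v in y) / len(y) / var ** 2
        return abs(kurt - 3.0)             # distance from Gaussian kurtosis

    pop = [[rng.uniform(-1, 1) for _ in range(n_taps)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]       # keep the best half (elitism)
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = rng.sample(elite, 2)    # crossover: average two elites
            children.append([(x + y) / 2 + rng.gauss(0, 0.1)
                             for x, y in zip(a, b)])
        pop = elite + children
    return max(pop, key=fitness)
```

Because the search is population-based and derivative-free, it does not rely on accurate gradient estimates, which is the stated motivation for using a GA with few samples.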
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the regional scale represents a major, and as yet largely unresolved, challenge. To address this problem, we have developed an upscaling procedure based on a Bayesian sequential simulation approach. This method is then applied to the stochastic integration of low-resolution, regional-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities. Finally, the overall viability of this upscaling approach is tested and verified by performing and comparing flow and transport simulations through the original and the upscaled hydraulic conductivity fields. Our results indicate that the proposed procedure does indeed yield remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
Abstract:
A simple method using liquid chromatography-linear ion trap mass spectrometry for the simultaneous determination of testosterone glucuronide (TG), testosterone sulfate (TS), epitestosterone glucuronide (EG), and epitestosterone sulfate (ES) in urine samples was developed. For validation purposes, a urine containing no detectable amount of TG, TS, and EG was selected and fortified with steroid conjugate standards. Quantification was performed using deuterated testosterone conjugates to correct for ion suppression/enhancement during ESI. Assay validation was performed in terms of lower limit of detection (1–3 ng/mL), recovery (89–101%), intraday precision (2.0–6.8%), interday precision (3.4–9.6%), and accuracy (101–103%). Application of the method to short-term stability testing of urine samples at temperatures ranging from 4 to 37 °C over a storage time of one week led to the conclusion that addition of sodium azide (10 mg/mL) is required for preservation of the analytes.
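The validation figures quoted (recovery, intraday/interday precision) follow standard definitions: recovery as the percentage of the spiked amount recovered, and precision as the coefficient of variation of replicate measurements. A generic sketch (the numbers in the comments are illustrative):

```python
def recovery_pct(measured, spiked):
    # Recovery: measured concentration as a percentage of the spiked amount,
    # e.g. measuring 9.0 ng/mL after spiking 10.0 ng/mL gives 90% recovery.
    return 100.0 * measured / spiked

def cv_pct(replicates):
    # Precision as coefficient of variation (relative standard deviation, %),
    # using the sample standard deviation over replicate measurements.
    m = sum(replicates) / len(replicates)
    sd = (sum((x - m) ** 2 for x in replicates) / (len(replicates) - 1)) ** 0.5
    return 100.0 * sd / m
```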
Abstract:
The objective of this study was to adapt a nonlinear model (Wang and Engel, WE) for simulating the phenology of maize (Zea mays L.), and to evaluate this model and a linear one (thermal time) in predicting the developmental stages of a field-grown maize variety. A field experiment was conducted in Santa Maria, RS, Brazil, during the 2005/2006 and 2006/2007 growing seasons, with seven sowing dates each. Dates of emergence, silking, and physiological maturity of the maize variety BRS Missões were recorded in six replications for each sowing date. Data collected in the 2005/2006 growing season were used to estimate the coefficients of the two models, and data collected in the 2006/2007 growing season were used as an independent data set for model evaluation. The nonlinear WE model accurately predicted the dates of silking and physiological maturity, and had a lower root mean square error (RMSE) than the linear (thermal time) model. The overall RMSE for silking and physiological maturity was 2.7 and 4.8 days with the WE model, and 5.6 and 8.3 days with the thermal time model, respectively.
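The linear (thermal time) model accumulates growing degree days until a stage-specific total is reached. A sketch of the common daily-mean formulation (the base temperature here is an illustrative assumption, not necessarily the value used in the study):

```python
def thermal_time(tmin_series, tmax_series, t_base=10.0):
    # Accumulated growing degree days: sum over days of
    # max(0, (Tmax + Tmin) / 2 - Tbase), in degree-days.
    return sum(max(0.0, (tmx + tmn) / 2.0 - t_base)
               for tmn, tmx in zip(tmin_series, tmax_series))
```

A developmental stage is predicted to occur on the day this running sum first reaches the calibrated degree-day requirement for that stage.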
Abstract:
Polyphosphate (iPOP) is a linear polymer of orthophosphate units linked together by high-energy phosphoanhydride bonds. It is found in all organisms, localized in organelles called acidocalcisomes, and ranges from a few to a few hundred monomers in length. iPOP has been found to play a vast array of roles in all organisms, including phosphate and energy metabolism, regulation of enzymes, virulence, pathogenicity, bone remodelling, and blood clotting, among many others. Recently, it was found that iPOP levels are increased in myeloma cells. The growing interest in iPOP in human cell lines makes it an interesting molecule to study. However, not much is known about its metabolism in eukaryotes. Acidocalcisomes are electron-dense, acidic organelles that belong to the group of lysosome-related organelles (LROs). The conservation of acidocalcisomes among all kingdoms of life is suggestive of their important roles for the organisms. However, they are difficult to analyse because of the limited biochemical tools available for their investigation. Yeast vacuoles present remarkable similarities to acidocalcisomes in terms of their physiological and structural features, including the synthesis and storage of iPOP, which makes them an ideal candidate for studying biological processes shared between vacuoles and acidocalcisomes. The availability of tools for genetic manipulation and for the isolation of vacuoles makes yeast a candidate of choice for the characterization of iPOP synthesis in eukaryotes. Our group has identified the Vacuolar Transporter Chaperone (VTC) complex as the iPOP polymerase and identified its catalytic subunit (Vtc4). The goal of my study was to characterize the process of iPOP synthesis by isolated vacuoles and to reconstitute iPOP synthesis in liposomes. The first step was to develop a method for monitoring iPOP synthesis by isolated vacuoles over time and to compare it with previously known methods.
Next, a detailed characterization was performed to determine the modulators of the process, both for intact and for solubilized vacuoles. Finally, attempts were made to purify the VTC complex and reconstitute it in liposomes. A parallel line of study was the translocation and storage of synthesized iPOP in the lumen of the vacuoles. As a result of this study, it is possible to distinguish two pools of iPOP: inside and outside the vacuolar lumen. Additionally, I establish that the vacuolar lysate withstands the harsh steps of reconstitution in liposomes and retains iPOP-synthesizing activity. The next steps will be purification of the intact VTC complex and determination of its structure by cryo-electron microscopy. - Living organisms are composed of one or more cells responsible for elementary biological processes such as digestion, respiration, synthesis, and reproduction. Their internal environment is in equilibrium, and they carry out a very large number of chemical and biochemical reactions to maintain this equilibrium. Different cellular compartments, or organelles, are assigned specific tasks to keep cells alive. The study of these functions allows a better understanding of life and of living organisms. Many processes are well known and characterized, but others still require detailed investigation. One of these processes is polyphosphate metabolism. These molecules are linear polymers of inorganic phosphate whose size can vary from a few tens to a few hundred elementary units. They are present in all organisms, from bacteria to humans. They are localized mainly in cellular compartments called acidocalcisomes, acidic organelles that appear in electron microscopy as electron-dense structures.
Polyphosphates play an important role in energy storage and metabolism, the stress response, virulence, pathogenicity, and drug resistance. In humans, they are involved in blood clotting and bone remodelling. New biological functions of polyphosphates are still being discovered, which increases researchers' interest in these molecules. Although considerable progress has been made in understanding the function of polyphosphates in bacteria, little is known about their synthesis, storage, and degradation in eukaryotes. The vacuoles of the yeast Saccharomyces cerevisiae are similar to the acidocalcisomes of higher organisms in terms of structure and function. Acidocalcisomes are difficult to study because few genetic and biochemical tools exist for their characterization. In contrast, vacuoles can easily be isolated from living cells and manipulated genetically. Like acidocalcisomes, vacuoles synthesize and store polyphosphates. Thus, findings obtained with yeast vacuoles can be extrapolated to the acidocalcisomes of higher organisms. The goal of my project was to characterize polyphosphate synthesis by isolated vacuoles. During my thesis work, I developed a method for measuring polyphosphate synthesis by purified organelles. I then identified compounds that modulate the enzymatic reaction, both when it takes place in the vacuole and after solubilization of the organelle. I was thus able to reveal two distinct pools of polyphosphates in the system: those outside the vacuole and those inside the organelle.
This observation therefore strongly suggests that vacuoles not only synthesize polyphosphates but also transfer the synthesized molecules from the outside to the inside of the organelle. It is very likely that vacuoles regulate the turnover of the polyphosphates they store in response to cellular signals. Attempts to purify the polyphosphate-synthesizing enzyme and to reconstitute it in liposomes were also undertaken. My work thus presents new aspects of polyphosphate synthesis in eukaryotes, and the results should encourage the elucidation of similar mechanisms in higher organisms.
Genetic analysis of visual evaluation scores in cattle using Bayesian threshold and linear models
Abstract:
The objective of this work was to compare the estimates of genetic parameters obtained in single-trait and two-trait Bayesian analyses, under linear and threshold animal models, considering categorical morphological traits of Nellore cattle. Data on muscling, physical structure, and conformation were obtained between 2000 and 2005 from 3,864 animals on 13 farms participating in the Nelore Brasil Program. Single- and two-trait Bayesian analyses were carried out under threshold and linear models. In general, both the threshold and the linear model were efficient in estimating genetic parameters for visual scores in single-trait Bayesian analyses. In the two-trait analyses, it was observed that, when continuous and categorical data were used, the threshold model provided genetic correlation estimates of greater magnitude than those of the linear model, whereas with categorical data the heritability estimates were similar. The advantage of the linear model was the shorter processing time of the analyses. In the genetic evaluation of animals for visual scores, the use of the threshold or the linear model did not change the ranking of animals by predicted breeding values, indicating that both models can be used in breeding programs.
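The threshold (liability) model underlying the Bayesian threshold analysis maps a continuous latent liability onto ordered categorical scores via fixed cut points; a generic sketch (the cut points used below are illustrative):

```python
def score_from_liability(liability, thresholds):
    # Threshold model: a continuous latent liability falls into ordered
    # category k when it lies between consecutive cut points
    # (thresholds must be given in increasing order; scores are 1-based).
    k = 1
    for t in thresholds:
        if liability >= t:
            k += 1
    return k
```

The linear model, by contrast, treats the observed score itself as the continuous response, which is why its estimates can differ when the data are truly categorical.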
Abstract:
Rapid amplification of cDNA ends (RACE) is a widely used approach for transcript identification. Random clone selection from the RACE mixture, however, is an ineffective sampling strategy if the dynamic range of transcript abundances is large. To improve the sampling efficiency of human transcripts, we hybridized the products of the RACE reaction onto tiling arrays and used the detected exons to delineate a series of reverse-transcriptase (RT)-PCRs, through which the original RACE transcript population was segregated into simpler transcript populations. We independently cloned the products and sequenced randomly selected clones. This approach, RACEarray, is superior to direct cloning and sequencing of RACE products because it specifically targets new transcripts and often results in overall normalization of transcript abundance. We show theoretically and experimentally that this strategy indeed leads to efficient sampling of new transcripts, and we investigated multiplexing the strategy by pooling RACE reactions from multiple interrogated loci before hybridization.
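Why random clone picking undersamples rare transcripts can be seen from the expected number of distinct transcripts observed after n draws; a back-of-envelope sketch (the abundance values in the test are invented for illustration):

```python
def expected_distinct(abundances, n_clones):
    # E[# distinct transcripts observed] after n_clones independent random
    # clone picks, where transcript i is drawn with probability abundances[i]:
    # each transcript is seen at least once with probability 1 - (1 - p_i)^n.
    return sum(1.0 - (1.0 - p) ** n_clones for p in abundances)
```

With a skewed abundance profile, most picks hit the dominant transcript, so normalizing abundances (as RACEarray aims to do) raises the expected number of distinct transcripts recovered for the same sequencing effort.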
Abstract:
We report on two patients with de novo subtelomeric terminal deletions of chromosome 6p. Patient 1 is an 8-month-old female born with normal growth parameters, typical facial features of 6pter deletion, bilateral corectopia, and protruding tongue. She has severe developmental delay, profound bilateral neurosensory deafness, poor visual contact, and hypsarrhythmia since the age of 6 months. Patient 2 is a 5-year-old male born with normal growth parameters and unilateral hip dysplasia; he has a characteristic facial phenotype, bilateral embryotoxon, and moderate mental retardation. Further characterization of the deletions, using high-resolution array comparative genomic hybridization (array-CGH; Agilent Human Genome kit 244K), revealed that Patient 1 has an 8.1 Mb 6pter-6p24.3 deletion associated with a contiguous 5.8 Mb 6p24.3-6p24.1 duplication, and that Patient 2 has a 5.7 Mb 6pter-6p25.1 deletion partially overlapping with that of Patient 1. Complementary FISH and array analyses showed that the inv del dup(6) in Patient 1 originated de novo. Our results demonstrate that apparently simple rearrangements are often more complex than defined by standard techniques. We also discuss genotype-phenotype correlations, including previously reported cases of deletion 6p.
Abstract:
Global positioning systems (GPS) offer a cost-effective and efficient method to input and update transportation data. The spatial location of objects provided by GPS is easily integrated into geographic information systems (GIS). The storage, manipulation, and analysis of spatial data are also relatively simple in a GIS. However, many data storage and reporting methods at transportation agencies rely on linear referencing methods (LRMs); consequently, GPS data must be able to link with linear referencing. Unfortunately, the two systems are fundamentally incompatible in the way data are collected, integrated, and manipulated. In order for spatial data collected using GPS to be integrated into a linear referencing system or shared among LRMs, a number of issues need to be addressed. This report documents and evaluates several of those issues and offers recommendations. To evaluate the issues associated with integrating GPS data with an LRM, a pilot study was created. For the pilot study, point features, a linear datum, and a spatial representation of an LRM were created for six test roadway segments located within the boundaries of the pilot study conducted by the Iowa Department of Transportation linear referencing system project team. Various issues in integrating point features with an LRM, or between LRMs, are discussed and recommendations are provided. The accuracy of GPS is discussed, including issues such as point features mapping to the wrong segment. Another topic is the loss of spatial information that occurs when a three-dimensional or two-dimensional spatial point feature is converted to a one-dimensional representation on an LRM. Recommendations such as storing point features as spatial objects where necessary, or preserving information such as coordinates and elevation, are suggested. The lack of spatial accuracy characteristic of most cartography, on which LRMs are often based, is another topic discussed. The associated issues include linear and horizontal offset error. The final topic discussed covers issues in transferring point feature data between LRMs.
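The dimensional collapse described above, from a 2-D GPS point to a 1-D measure along a route, amounts to snapping the point onto a polyline and recording the distance along it. A generic sketch (not the Iowa DOT implementation); the perpendicular offset returned is exactly the information that is lost in a pure linear reference:

```python
import math

def linear_reference(point, polyline):
    # Snap a 2-D point onto a polyline and return (measure, offset):
    # the distance along the line to the closest projection, and the
    # perpendicular distance from the point to that projection.
    px, py = point
    best_off, best_measure = float('inf'), 0.0
    run = 0.0                                   # length traversed so far
    for (x1, y1), (x2, y2) in zip(polyline, polyline[1:]):
        dx, dy = x2 - x1, y2 - y1
        seg = math.hypot(dx, dy)
        t = 0.0
        if seg > 0:                             # clamp projection to segment
            t = max(0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) / seg ** 2))
        off = math.hypot(px - (x1 + t * dx), py - (y1 + t * dy))
        if off < best_off:
            best_off, best_measure = off, run + t * seg
        run += seg
    return best_measure, best_off
```

When the offset is large, the point may in fact belong to a different segment, which is the mis-snapping problem the report raises; storing the original coordinates alongside the measure preserves the ability to re-reference later.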