935 results for ElGamal, CZK, Multiple discrete logarithm assumption, Extended linear algebra
Abstract:
In this paper we propose a composite depth of penetration (DOP) approach to excluding bottom reflectance in mapping water quality parameters from Landsat thematic mapper (TM) data in the shallow coastal zone of Moreton Bay, Queensland, Australia. Three DOPs were calculated from TM1, TM2 and TM3, in conjunction with bathymetric data, at an accuracy ranging from +/-5% to +/-23%. These depths were used to segment the image into four DOP zones. Sixteen in situ water samples were collected concurrently with the recording of the satellite image. These samples were used to establish regression models for total suspended sediment (TSS) concentration and Secchi depth with respect to a particular DOP zone. The models contain identical bands and band transformations for both parameters; they are linear for TSS concentration and logarithmic for Secchi depth. Based on these models, TSS concentration and Secchi depth were mapped from the satellite image in the respective DOP zones. Their mapped patterns are consistent with the in situ observed ones. Spatially, overestimation and underestimation of the parameters are restricted to localised areas but related to the absolute value of the parameters. The mapping was accomplished more accurately using multiple DOP zones than using a single zone in shallower areas. The composite DOP approach enables the mapping to be extended to areas shallower than 3 m. (C) 2004 Elsevier Inc. All rights reserved.
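The zone-specific regression models described above (linear for TSS, logarithmic for Secchi depth) can be sketched with ordinary least squares. The band reflectances and sample values below are hypothetical stand-ins for the paper's sixteen in situ samples, not its data.

```python
import math

def linfit(x, y):
    """Ordinary least squares fit of y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Hypothetical TM band reflectances and in situ samples for one DOP zone
band   = [0.12, 0.15, 0.18, 0.22, 0.26]
tss    = [4.1, 5.0, 5.9, 7.2, 8.4]   # mg/L, roughly linear in the band value
secchi = [3.2, 2.8, 2.5, 2.1, 1.8]   # m, roughly logarithmic in the band value

a_t, b_t = linfit(band, tss)                            # linear TSS model
a_s, b_s = linfit([math.log(v) for v in band], secchi)  # logarithmic Secchi model

print(a_t + b_t * 0.20)            # TSS predicted at reflectance 0.20
print(a_s + b_s * math.log(0.20))  # Secchi depth predicted at the same pixel
```

In the paper each DOP zone gets its own pair of fitted models; the sketch shows one such pair.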
Abstract:
Conflicting findings regarding the ability of people with schizophrenia to maintain and update semantic contexts have been due, arguably, to vagaries within the experimental design employed (e.g. whether strongly or remotely associated prime-target pairs have been used, what delay between the prime and the target was employed, and what proportion of related prime-target pairs appeared) or to characteristics of the participant cohort (e.g. medication status, chronicity of illness). The aim of the present study was to examine how people with schizophrenia maintain and update contextual information over an extended temporal window by using multiple primes that were either remotely associated or unrelated to the target. Fourteen participants with schizophrenia and 12 healthy matched controls were compared across two stimulus onset asynchronies (SOAs) (short and long) and two relatedness proportions (RP) (high and low) in a crossed design. Analysis of variance statistics revealed significant two- and three-way interactions between Group and SOA, Group and Condition, SOA and RP, and Group, SOA and RP. The participants with schizophrenia showed evidence of enhanced remote priming at the short SOA and low RP, combined with a reduction in the time course over which context could be maintained. There was some sensitivity to biasing contextual information at the short SOA, although the mechanism over which context served to update information appeared to be different from that in the controls. The participants with schizophrenia showed marked performance decrements at the long SOA (both low and high RP). Indices of remote priming at the short (but not the long) SOA correlated with both clinical ratings of thought disorder and with increasing length of illness. The results support and extend the hypothesis that schizophrenia is associated with concurrent increases in tonic dopamine activity and decreases in phasic dopamine activity. (C) 2004 Elsevier Ireland Ltd. All rights reserved.
Abstract:
Objective: To examine the relationship between the auditory brain-stem response (ABR) and its reconstructed waveforms following discrete wavelet transformation (DWT), and to comment on the resulting implications for ABR DWT time-frequency analysis. Methods: ABR waveforms were recorded from 120 normal hearing subjects at 90, 70, 50, 30, 10 and 0 dBnHL, decomposed using a 6 level discrete wavelet transformation (DWT), and reconstructed at individual wavelet scales (frequency ranges) A6, D6, D5 and D4. These waveforms were then compared for general correlations, and for patterns of change due to stimulus level, and subject age, gender and test ear. Results: The reconstructed ABR DWT waveforms showed 3 primary components: a large-amplitude waveform in the low-frequency A6 scale (0-266.6 Hz) with its single peak corresponding in latency with ABR waves III and V; a mid-amplitude waveform in the mid-frequency D6 scale (266.6-533.3 Hz) with its first 5 waves corresponding in latency to ABR waves I, III, V, VI and VII; and a small-amplitude, multiple-peaked waveform in the high-frequency D5 scale (533.3-1066.6 Hz) with its first 7 waves corresponding in latency to ABR waves I, II, III, IV, V, VI and VII. Comparisons between ABR waves I, III and V and their corresponding reconstructed ABR DWT waves showed strong correlations and similar, reliable, and statistically robust changes due to stimulus level and subject age, gender and test ear groupings. Limiting these findings, however, was the unexplained absence of a small number (2%, or 117/6720) of reconstructed ABR DWT waves, despite their corresponding ABR waves being present. Conclusions: Reconstructed ABR DWT waveforms can be used as valid time-frequency representations of the normal ABR, but with some limitations. In particular, the unexplained absence of a small number of reconstructed ABR DWT waves in some subjects, probably resulting from 'shift invariance' inherent to the DWT process, needs to be addressed. Significance: This is the first report of the relationship between the ABR and its reconstructed ABR DWT waveforms in a large normative sample. (C) 2004 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
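The single-scale reconstructions described above (decompose, zero every other wavelet band, invert) can be illustrated with a minimal pure-Python Haar DWT. The study used a 6-level decomposition of real ABR recordings; this sketch uses 2 levels, a toy signal, and the Haar wavelet rather than the study's wavelet.

```python
def haar_dwt(x):
    """One Haar DWT level: averages (approximation) and differences (detail)."""
    a = [(x[2*i] + x[2*i + 1]) / 2 for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i + 1]) / 2 for i in range(len(x) // 2)]
    return a, d

def haar_idwt(a, d):
    """Invert one Haar level."""
    x = []
    for ai, di in zip(a, d):
        x += [ai + di, ai - di]
    return x

def reconstruct_scale(x, levels, keep):
    """Decompose, zero every wavelet band except `keep` ('A' for the final
    approximation, or a 1-based detail level), then reconstruct."""
    a, details = x, []
    for _ in range(levels):
        a, d = haar_dwt(a)
        details.append(d)
    if keep != 'A':
        a = [0.0] * len(a)
    for lvl in range(levels, 0, -1):
        d = details[lvl - 1] if keep == lvl else [0.0] * len(details[lvl - 1])
        a = haar_idwt(a, d)
    return a

sig = [0, 1, 2, 3, 3, 2, 1, 0]
parts = [reconstruct_scale(sig, 2, k) for k in ('A', 1, 2)]
# Because the DWT is linear, the per-scale reconstructions sum to the signal
recon = [sum(p) for p in zip(*parts)]
print(recon)
```

The A6/D6/D5 waveforms in the study are exactly such single-band reconstructions, at 6 levels instead of 2.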
Abstract:
In this paper, we investigate the effects of potential models on the description of equilibria of linear molecules (ethylene and ethane) adsorption on graphitized thermal carbon black. GCMC simulation is used as a tool to give adsorption isotherms, isosteric heat of adsorption and the microscopic configurations of these molecules. At the heart of the GCMC are the potential models, describing fluid-fluid interaction and solid-fluid interaction. Here we studied two potential models recently proposed in the literature, UA-TraPPE and AUA4. Their impact on the description of adsorption behavior of pure components will be discussed. Mixtures of these components with nitrogen and argon are also studied. Nitrogen is modeled as a two-site molecule with discrete charges, while argon is modeled as a spherical particle. GCMC simulation is also used for generating simulated mixture isotherms. It is found that co-operation between species occurs when the surface is fractionally covered, while competition is important when the surface is fully loaded.
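GCMC sampling of adsorption equilibria rests on grand canonical insertion and deletion moves accepted with a Metropolis rule. The sketch below applies that rule to a non-interacting lattice gas, where the exact coverage e^(beta*mu)/(1 + e^(beta*mu)) is known, rather than to the UA-TraPPE or AUA4 molecular models of the paper.

```python
import math, random

def gcmc_lattice_gas(n_sites, beta_mu, steps, seed=0):
    """Minimal grand canonical Monte Carlo for a non-interacting lattice
    gas: insertions/deletions accepted with the Metropolis rule (a toy
    stand-in for the molecular GCMC used in the paper)."""
    rng = random.Random(seed)
    occ = [0] * n_sites
    filled = 0
    samples = []
    for step in range(steps):
        i = rng.randrange(n_sites)
        if occ[i] == 0:                      # attempt an insertion
            if rng.random() < min(1.0, math.exp(beta_mu)):
                occ[i] = 1
                filled += 1
        else:                                # attempt a deletion
            if rng.random() < min(1.0, math.exp(-beta_mu)):
                occ[i] = 0
                filled -= 1
        if step >= steps // 2:               # sample after equilibration
            samples.append(filled / n_sites)
    return sum(samples) / len(samples)

# At beta*mu = 0 the exact coverage is 0.5; the sampled value fluctuates
# around it, mimicking how GCMC isotherm points are estimated.
theta = gcmc_lattice_gas(100, 0.0, 40000)
print(theta)
```

Real adsorption GCMC replaces the trivial energy here with the fluid-fluid and solid-fluid potentials the abstract discusses.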
Abstract:
In this paper, a new control design method is proposed for stable processes which can be described using Hammerstein-Wiener models. The internal model control (IMC) framework is extended to accommodate multiple IMC controllers, one for each subsystem. The concept of passive systems is used to construct the IMC controllers which approximate the inverses of the subsystems to achieve dynamic control performance. The Passivity Theorem is used to ensure the closed-loop stability. (c) 2005 Elsevier Ltd. All rights reserved.
Abstract:
In this paper, we present a novel indexing technique called Multi-scale Similarity Indexing (MSI) to index an image's multiple features in a single one-dimensional structure. For both the text and visual feature spaces, the similarity between a point and a local partition's center in the individual space is used as the indexing key, where similarity values from different features are distinguished by different scales. A single indexing tree can then be built on these keys. Based on the property that relevant images have similar similarity values from the center of the same local partition in any feature space, a certain number of irrelevant images can be quickly pruned using the triangle inequality on the indexing keys. To address the curse of dimensionality in high-dimensional structures, we propose a new technique called Local Bit Stream (LBS). LBS transforms an image's text and visual feature representations into simple, uniform and effective bit stream (BS) representations based on the local partition's center. Such BS representations are small in size and fast to compare since only bit operations are involved. By comparing the common bits of two BSs, most irrelevant images can be immediately filtered out. To effectively integrate multiple features, we also investigated the following evidence combination techniques: Certainty Factor, Dempster-Shafer Theory, Compound Probability, and Linear Combination. Our extensive experiments showed that a single one-dimensional index on multiple features greatly outperforms multiple indices on multiple features. Our LBS method outperforms sequential scan on high-dimensional spaces by an order of magnitude, and Certainty Factor and Dempster-Shafer Theory perform best in combining multiple similarities from the corresponding multiple features.
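The bit-stream filtering idea (encode features relative to a local partition center, then compare candidates using bit operations only) can be sketched as follows. The encoding rule, the threshold, and the toy vectors are illustrative assumptions, not the paper's exact LBS construction.

```python
def to_bitstream(vec, centre):
    """Encode a feature vector as a bit stream relative to a local
    partition centre: bit i is set when the vector exceeds the centre
    in dimension i (an illustrative encoding)."""
    bs = 0
    for i, (v, c) in enumerate(zip(vec, centre)):
        if v > c:
            bs |= 1 << i
    return bs

def common_bits(a, b, nbits):
    """Count positions where two bit streams agree, using only bit ops."""
    return nbits - bin(a ^ b).count('1')

centre = [0.5, 0.5, 0.5, 0.5]
query = to_bitstream([0.9, 0.1, 0.7, 0.2], centre)
images = {
    'img1': to_bitstream([0.8, 0.2, 0.6, 0.1], centre),  # similar to query
    'img2': to_bitstream([0.1, 0.9, 0.2, 0.8], centre),  # dissimilar
}
# Filter: keep only candidates sharing at least 3 of 4 bits with the query
survivors = [k for k, bs in images.items() if common_bits(query, bs, 4) >= 3]
print(survivors)
```

Because each comparison is a single XOR plus a popcount, this filtering step is far cheaper than comparing the high-dimensional vectors themselves, which is the point of LBS.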
Abstract:
A simple and effective method for purifying photoluminescent water-soluble surface passivated PbS nanocrystals has been developed. Centrifuging at high speeds removes PbS nanocrystals that exhibit strong red band edge photoluminescence from an original solution containing multiple nanocrystalline species with broad photoluminescence spectra. The ability to purify the PbS nanocrystals allowed two-photon photoluminescence spectroscopy to be performed on water-soluble PbS nanocrystals, with the emission attributed to band edge recombination. (c) 2006 Elsevier B.V. All rights reserved.
Abstract:
A set of DCT domain properties for shifting and scaling by real amounts, and for applying linear operations such as differentiation, is described. The DCT coefficients of a sampled signal are subjected to a linear transform, which returns the DCT coefficients of the shifted, scaled and/or differentiated signal. The properties are derived by considering the inverse discrete transform as a cosine series expansion of the original continuous signal, assuming sampling in accordance with the Nyquist criterion. This approach can be applied in the signal domain, to give, for example, DCT based interpolation or derivatives. The same approach can be taken in decoding from the DCT to give, for example, derivatives in the signal domain. The techniques may prove useful in compressed domain processing applications, and are interesting because they allow operations from the continuous domain such as differentiation to be implemented in the discrete domain. An image matching algorithm illustrates the use of the properties, with improvements in computation time and matching quality.
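Treating the inverse DCT as a cosine-series expansion of the underlying continuous signal, as described above, means the series can be evaluated at real-valued positions, which gives DCT-based interpolation (and, by differentiating the cosine terms, derivatives). A minimal sketch with an orthonormal DCT-II:

```python
import math

def dct(x):
    """Orthonormal DCT-II coefficients of a sampled signal."""
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        out.append(s * math.sqrt((1 if k == 0 else 2) / N))
    return out

def idct_at(X, t):
    """Evaluate the cosine-series expansion at a *real* position t,
    giving DCT-based interpolation/shifting of the original samples."""
    N = len(X)
    return sum(X[k] * math.sqrt((1 if k == 0 else 2) / N)
               * math.cos(math.pi * (t + 0.5) * k / N)
               for k in range(N))

x = [math.sin(2 * math.pi * n / 8) for n in range(8)]
X = dct(x)
# At integer positions the series reproduces the samples exactly...
print(abs(idct_at(X, 3) - x[3]) < 1e-9)
# ...and at fractional positions it interpolates between them.
print(idct_at(X, 2.5))
```

Shifting by a real amount amounts to evaluating at t + delta for every sample index t, which is the linear transform on the coefficients that the abstract refers to.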
Abstract:
The MFG test is a family-based association test that detects genetic effects contributing to disease in offspring, including offspring allelic effects, maternal allelic effects and MFG incompatibility effects. Like many other family-based association tests, it assumes that offspring survival and the offspring-parent genotypes are conditionally independent given that the offspring is affected. However, when the putative disease-increasing locus can affect another competing phenotype, for example offspring viability, the conditional independence assumption fails and these tests can lead to incorrect conclusions regarding the role of the gene in disease. We propose the v-MFG test to adjust for the genetic effects on one phenotype, e.g. viability, when testing the effects of that locus on another phenotype, e.g. disease. Using genotype data from nuclear families containing parents and at least one affected offspring, the v-MFG test models the distribution of family genotypes conditional on offspring phenotypes. It simultaneously estimates genetic effects on two phenotypes, viability and disease. Simulations show that the v-MFG test produces accurate genetic effect estimates on disease as well as on viability under several different scenarios. It yields accurate type-I error rates and provides adequate power with moderate sample sizes to detect genetic effects on disease risk when viability is reduced. We demonstrate the v-MFG test with HLA-DRB1 data from study participants with rheumatoid arthritis (RA) and their parents, showing that it successfully detects an MFG incompatibility effect on RA while simultaneously adjusting for a possible viability loss.
Abstract:
This thesis concerns the development and validation of new criteria for the multiaxial fatigue assessment of metallic structural components. In particular, the newly formulated criteria are applicable to metallic components subjected to a wide range of loading configurations: time-varying multiaxial loadings, both cyclic and random, in the high- and low/medium-cycle fatigue regimes. These criteria are a useful tool for evaluating the fatigue strength and fatigue life of metallic structural elements, being simple to implement and requiring rather modest computation times. The first Chapter presents the issues related to multiaxial fatigue, introducing some theoretical aspects useful for describing the fatigue damage mechanism (crack propagation and final fracture) of metallic structural components subjected to time-varying loads. The different approaches available in the literature for the multiaxial fatigue assessment of such components are then presented, with particular attention to the critical plane approach. Finally, the engineering quantities related to the critical plane are defined, as used in fatigue design under cyclic multiaxial loading for high and low/medium numbers of loading cycles. The second Chapter is devoted to the development of a new criterion for evaluating the fatigue strength of metallic structural elements subjected to cyclic multiaxial loading at a high number of cycles. The criterion is based on the critical plane approach and is formulated in terms of stresses. It is developed by substantially revising an earlier formulation proposed by Carpinteri and co-workers in 2011.
In particular, the first modification concerns the determination of the critical plane orientation: new expressions for the angle linking the orientation of the critical plane to that of the fracture plane are implemented in the criterion's algorithm. The second modification concerns the definition of the shear stress amplitude: a new method, known as the Prismatic Hull (PH) method (due to Araújo and co-workers), is implemented in the algorithm. The reliability of the criterion is then verified against numerous experimental data available in the literature. The third Chapter proposes a newly formulated criterion for evaluating the fatigue life of metallic structural elements subjected to cyclic multiaxial loading at a low/medium number of cycles. The criterion is based on the critical plane approach and is formulated in terms of strains. In particular, the proposed formulation takes its general structure from the high-cycle multiaxial fatigue criterion discussed in the second Chapter. Since significant plastic strains (such as those characterising low/medium-cycle fatigue) require knowledge of the material's effective Poisson coefficient, three different strategies are employed: the coefficient is computed analytically, computed numerically, and taken as a constant value frequently adopted in the literature. Numerous experimental data available in the literature are then used to validate the criterion's reliability, with numerical results obtained for the different values of the effective Poisson coefficient. Moreover, in order to account for the significant stress gradients arising at geometrical discontinuities such as notches, the criterion is also extended to notched structural components.
The criterion, reformulated by implementing the control volume concept proposed by Lazzarin and co-workers, is used to estimate the fatigue life of specimens with a severe V-notch, made of grade 5 titanium alloy. The fourth Chapter is devoted to the development of a new criterion for evaluating the fatigue damage of metallic structural elements subjected to random multiaxial loading at a high number of cycles. The criterion is based on the critical plane approach and is formulated in the frequency domain. It is developed by substantially revising an earlier formulation proposed by Carpinteri and co-workers in 2014. In particular, the modification concerns the determination of the critical plane orientation, and new expressions for the angle linking the orientation of the critical plane to that of the fracture plane are implemented in the criterion's algorithm. Finally, the reliability of the criterion is verified against numerous experimental data available in the literature.
Abstract:
The standard GTM (generative topographic mapping) algorithm assumes that the data on which it is trained consists of independent, identically distributed (iid) vectors. For time series, however, the iid assumption is a poor approximation. In this paper we show how the GTM algorithm can be extended to model time series by incorporating it as the emission density in a hidden Markov model. Since GTM has discrete hidden states we are able to find a tractable EM algorithm, based on the forward-backward algorithm, to train the model. We illustrate the performance of GTM through time using flight recorder data from a helicopter.
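Because GTM through time has discrete hidden states, training reduces to the forward-backward recursions of a standard HMM. The forward pass alone is sketched below, with made-up emission likelihoods standing in for the GTM mixture densities:

```python
def forward(obs_lik, trans, init):
    """Forward pass of the forward-backward algorithm: obs_lik[t][i] is
    the emission density of observation t under hidden state i (in GTM
    through time, a GTM component density). Returns the total
    likelihood of the observation sequence."""
    alpha = [p * l for p, l in zip(init, obs_lik[0])]
    for t in range(1, len(obs_lik)):
        alpha = [obs_lik[t][j] * sum(alpha[i] * trans[i][j]
                                     for i in range(len(init)))
                 for j in range(len(init))]
    return sum(alpha)

# Toy example: 2 latent states, 3 observations (likelihoods are made up)
init = [0.5, 0.5]
trans = [[0.9, 0.1], [0.1, 0.9]]
obs_lik = [[0.8, 0.2], [0.7, 0.3], [0.1, 0.9]]
print(forward(obs_lik, trans, init))  # total likelihood of the sequence
```

In the EM training described above, these forward (and matching backward) quantities supply the state posteriors used to re-estimate both the transition matrix and the GTM emission parameters.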
Abstract:
An efficient new Bayesian inference technique is employed for studying critical properties of the Ising linear perceptron and for signal detection in code division multiple access (CDMA). The approach is based on a recently introduced message passing technique for densely connected systems. Here we study both critical and non-critical regimes. Results obtained in the non-critical regime give rise to a highly efficient signal detection algorithm in the context of CDMA, while in the critical regime one observes a first-order transition line that ends in a continuous phase transition point. Finite size effects are also studied. © 2006 Elsevier B.V. All rights reserved.
Abstract:
A sieve plate distillation column has been constructed and interfaced to a minicomputer with the necessary instrumentation for dynamic, estimation and control studies with special bearing on low-cost and noise-free instrumentation. A dynamic simulation of the column with a binary liquid system has been compiled using deterministic models that include fluid dynamics via Brambilla's equation for tray liquid holdup calculations. The simulation predictions have been tested experimentally under steady-state and transient conditions. The simulator's predictions of the tray temperatures have shown reasonably close agreement with the measured values under steady-state conditions and in the face of a step change in the feed rate. A method of extending linear filtering theory to highly nonlinear systems with very nonlinear measurement functional relationships has been proposed and tested by simulation on binary distillation. The simulation results have proved that the proposed methodology can overcome the typical instability problems associated with the Kalman filters. Three extended Kalman filters have been formulated and tested by simulation. The filters have been used to refine a much simplified model sequentially and to estimate parameters such as the unmeasured feed composition using information from the column simulation. It is first assumed that corrupted tray composition measurements are made available to the filter and then corrupted tray temperature measurements are accessed instead. The simulation results have demonstrated the powerful capability of the Kalman filters to overcome the typical hardware problems associated with the operation of on-line analyzers in relation to distillation dynamics and control by, in effect, replacing them. A method of implementing estimator-aided feedforward (EAFF) control schemes has been proposed and tested by simulation on binary distillation.
The results have shown that the EAFF scheme provides much better control and energy conservation than the conventional feedback temperature control in the face of a sustained step change in the feed rate or multiple changes in the feed rate, composition and temperature. Further extensions of this work are recommended as regards simulation, estimation and EAFF control.
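The parameter-estimation role of the filters described above (inferring an unmeasured quantity such as feed composition from corrupted measurements) can be illustrated with a scalar Kalman filter tracking a constant state. The measurement sequence below is a trivial deterministic stand-in for the distillation simulator, and the "feed mole fraction" is a hypothetical value.

```python
import math

def kalman_constant(measurements, r, q=0.0, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a constant state observed in noise; the
    thesis applies the same idea (via extended Kalman filters) to infer
    unmeasured feed composition from corrupted tray measurements."""
    x, p = x0, p0
    for z in measurements:
        p += q               # predict: constant state (q models any drift)
        k = p / (p + r)      # Kalman gain
        x += k * (z - x)     # correct with the measurement innovation
        p *= 1 - k
    return x

true_comp = 0.42   # hypothetical feed mole fraction
# Deterministic corruption standing in for measurement noise
noisy = [true_comp + 0.05 * math.sin(i) for i in range(200)]
est = kalman_constant(noisy, r=0.05 ** 2)
print(est)         # converges close to 0.42
```

The extended Kalman filters in the thesis replace this trivial linear model with a linearisation of the nonlinear column model at each step, but the predict/gain/correct cycle is the same.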
Abstract:
Discusses the necessity for the conscious recognition of the phenomenon known as the extended enterprise; this demands that product, process and supply chain design are all considered simultaneously. Structure must be given to the extended enterprise in order to understand and manage it efficaciously. The authors discuss multiple perspectives for doing this, and employ the notions of “3-dimensional concurrent engineering” and “holonic thinking” for conceiving what the structure may look like. Describes a current “action research” project that is investigating potential lead-time reductions within an extended enterprise’s product introduction process. This aims to produce process visualisations, a framework for structuring and synchronising phases and stage-gates within the extended enterprise, and a new simulation tool which will provide a synthetic distributed hypermedia network. These deliverables will be used to play strategic “games” to explore problem issues within the product introduction process that belongs to the extended enterprise, develop teamwork across autonomous companies, and ultimately, contribute to the design of future extended enterprise supply chains.