989 results for Vector representation
Abstract:
Background: In 2011, a variant of West Nile virus Kunjin strain (WNVKUN) caused an unprecedented epidemic of neurological disease in horses in southeast Australia, resulting in almost 1,000 cases and a 9% fatality rate. We investigated whether increased fitness of the virus in the primary vector, Culex annulirostris, and another potential vector, Culex australicus, contributed to the widespread nature of the outbreak. Methods: Mosquitoes were exposed to infectious blood meals containing either the virus strain responsible for the outbreak, designated WNVKUN2011, or WNVKUN2009, a strain of low virulence that is typical of historical strains of this virus. WNVKUN infection in mosquito samples was detected using a fixed cell culture enzyme immunoassay and a WNVKUN-specific monoclonal antibody. Probit analysis was used to determine mosquito susceptibility to infection. Infection, dissemination and transmission rates for selected days post-exposure were compared using Fisher's exact test. Virus titers in bodies and saliva expectorates were compared using t-tests. Results: There were few significant differences between the two virus strains in the susceptibility of Cx. annulirostris to infection, the kinetics of virus replication and the ability of this mosquito species to transmit either strain. Both strains were first transmitted by Cx. annulirostris on day 5 post-exposure. The highest transmission rates (proportion of mosquitoes with virus detected in saliva) observed were 68% for WNVKUN2011 on day 12 and 72% for WNVKUN2009 on day 14. On days 12 and 14 post-exposure, significantly more WNVKUN2011 than WNVKUN2009 was expectorated by infected mosquitoes. Infection, dissemination and transmission rates of the two strains were not significantly different in Culex australicus. However, transmission rates and the amount of virus expectorated were significantly lower in Cx. australicus than Cx. annulirostris.
Conclusions: The higher amount of WNVKUN2011 expectorated by infected mosquitoes may indicate that this virus strain is transmitted more efficiently by Cx. annulirostris than other WNVKUN strains. Combined with other factors, such as a convergence of abundant mosquito and wading bird populations, and mammalian and avian feeding behaviour by Cx. annulirostris, this may have contributed to the scale of the 2011 equine epidemic.
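The rate comparisons described in the Methods rest on Fisher's exact test for 2x2 tables. As a minimal, self-contained sketch (pure Python; the counts in the usage note are hypothetical, not the study's data), a two-sided test can be written as:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Returns the p-value: the total hypergeometric probability of all tables
    with the same margins whose probability does not exceed the observed one.
    """
    n = a + b + c + d
    row1 = a + b          # size of group 1 (e.g. mosquitoes exposed to strain 1)
    col1 = a + c          # total "successes" (e.g. mosquitoes with virus in saliva)

    def prob(x):
        # Hypergeometric probability of observing x successes in group 1
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = prob(a)
    lo = max(0, row1 - (n - col1))   # smallest feasible x given the margins
    hi = min(row1, col1)             # largest feasible x given the margins
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)
```

For example, comparing transmission in 12 of 15 mosquitoes for one strain against 5 of 14 for the other corresponds to the table [[12, 3], [5, 9]].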
Abstract:
Species identification based on short sequences of DNA markers, that is, DNA barcoding, has emerged as an integral part of modern taxonomy. However, software for the analysis of large and multilocus barcoding data sets is scarce. The Basic Local Alignment Search Tool (BLAST) is currently the fastest tool capable of handling large databases (e.g. >5000 sequences), but its accuracy is a concern and it has been criticized for its local optimization. More accurate current software, however, requires sequence alignment or complex calculations, which are time-consuming for large data sets during data preprocessing or during the search stage. It is therefore imperative to develop a practical program for accurate and scalable species identification in DNA barcoding. In this context, we present VIP Barcoding: user-friendly software with a graphical user interface for rapid DNA barcoding. It adopts a hybrid, two-stage algorithm. First, an alignment-free composition vector (CV) method is used to reduce the search space by screening a reference database. The alignment-based K2P-distance nearest-neighbour method is then employed to analyse the smaller data set generated in the first stage. In comparison with other software, we demonstrate that VIP Barcoding has (i) higher accuracy than Blastn and several alignment-free methods and (ii) higher scalability than alignment-based distance methods and character-based methods. These results suggest that the platform can handle both large-scale and multilocus barcoding data with accuracy and can contribute to DNA barcoding for modern taxonomy. VIP Barcoding is free and available at http://msl.sls.cuhk.edu.hk/vipbarcoding/.
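The second-stage assignment described above can be sketched as follows. This is a minimal illustration of the alignment-based K2P-distance nearest-neighbour step, assuming pre-aligned sequences; it is not the VIP Barcoding code, and the species labels are hypothetical.

```python
import math

# Transition pairs (purine<->purine, pyrimidine<->pyrimidine)
TRANSITIONS = {("A", "G"), ("G", "A"), ("C", "T"), ("T", "C")}

def k2p_distance(seq1, seq2):
    """Kimura two-parameter distance between two aligned DNA sequences.

    Assumes the sequences are aligned and overlap in at least one valid site.
    """
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a in "ACGT" and b in "ACGT"]
    n = len(pairs)
    p = sum(1 for a, b in pairs if (a, b) in TRANSITIONS) / n          # transitions
    q = sum(1 for a, b in pairs
            if a != b and (a, b) not in TRANSITIONS) / n               # transversions
    return -0.5 * math.log(1 - 2 * p - q) - 0.25 * math.log(1 - 2 * q)

def nearest_neighbour(query, refs):
    """Assign the query to the species of the closest reference sequence.

    refs is a list of (species_label, aligned_sequence) pairs.
    """
    return min(refs, key=lambda r: k2p_distance(query, r[1]))[0]
```

In the full two-stage scheme, `refs` would be the small candidate set returned by the alignment-free CV screening, not the whole database.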
Abstract:
Background: Although lentiviral vectors have been widely used for in vitro and in vivo gene therapy research, few studies have systematically examined the conditions that may affect the determination of the number of viable vector particles in a vector preparation and the use of multiplicity of infection (MOI) as a parameter for predicting gene transfer events. Methods: Lentiviral vectors encoding a marker gene were packaged and the supernatants concentrated. The number of viable vector particles was determined by in vitro transduction followed by fluorescence microscopy and FACS analyses. Various factors that may affect the transduction process, such as vector inoculum volume, target cell number and type, vector decay, and variable vector-target cell contact and adsorption periods, were studied. MOIs between 0 and 32 were assessed on commonly used cell lines as well as a new cell line. Results: We demonstrated that the resulting values of lentiviral vector titre varied with conditions in the transduction process, including the inoculum volume of the vector, the type and number of target cells, vector stability and the length of the period of vector adsorption to target cells. Vector inoculum and the number of target cells determined the frequency of gene transfer events, although not proportionally. Vector exposure time to target cells also influenced transduction results. Varying these parameters resulted in greater than 50-fold differences in the vector titre obtained from the same vector stock. Commonly used cell lines in vector titration were less sensitive to lentiviral vector-mediated gene transfer than a new cell line, FRL 19. Within the MOI range of 0-32 used to transduce four different cell lines, the higher the MOI applied, the higher the efficiency of gene transfer obtained.
Conclusion: Several variables in the transduction process affected in vitro vector titration and produced vastly different values from the same vector stock, thus complicating the use of MOI for predicting gene transfer events. Commonly used target cell lines underestimated vector titre. However, within a certain range of MOI, lentiviral vector-mediated gene transfer events could be predicted if strictly controlled conditions are observed in the vector titration process, including the use of a sensitive cell line such as FRL 19. © 2004 Zhang et al; licensee BioMed Central Ltd.
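The arithmetic relating functional titre, inoculum volume, target cell number and MOI that these experiments probe can be made explicit in a short sketch (the numbers in the test are illustrative, not the paper's data):

```python
def titre_tu_per_ml(cell_count, fraction_positive, inoculum_ml, dilution=1.0):
    """Functional titre (transducing units per ml) from a transduction assay.

    cell_count: number of target cells at transduction.
    fraction_positive: fraction of cells scoring marker-positive.
    inoculum_ml: volume of vector stock applied, in ml.
    Assumes one transducing unit per positive cell (valid at low MOI).
    """
    return cell_count * fraction_positive * dilution / inoculum_ml

def moi(titre, inoculum_ml, cell_count):
    """Multiplicity of infection: transducing units applied per target cell."""
    return titre * inoculum_ml / cell_count
```

Because the measured `fraction_positive` itself varies with inoculum volume, cell line sensitivity and adsorption time, the same stock can yield very different titres, which is exactly the complication the abstract describes.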
Abstract:
Frogs have received increasing attention because of their effectiveness as indicators of environmental change, so it is important to monitor and assess frog populations. With the development of sensor techniques, large volumes of audio data (including frog calls) have been collected and need to be analysed. After transforming the audio data into a spectrogram representation using the short-time Fourier transform, visual inspection of this representation motivates the use of image processing techniques for analysing audio data. Applying an acoustic event detection (AED) method to the spectrograms, acoustic events are first detected, and ridges are then extracted from them. Three feature sets, Mel-frequency cepstral coefficients (MFCCs), the AED feature set and the ridge feature set, are then used for frog call classification with a support vector machine classifier. Fifteen frog species widespread in Queensland, Australia, are selected to evaluate the proposed method. The experimental results show that the ridge feature set achieves an average classification accuracy of 74.73%, outperforming the MFCCs (38.99%) and the AED feature set (67.78%).
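The first step above, turning audio into a spectrogram via the short-time Fourier transform, can be sketched with NumPy. The frame and hop sizes here are illustrative choices, not those used in the paper:

```python
import numpy as np

def spectrogram(signal, frame_len=512, hop=256):
    """Magnitude spectrogram via the short-time Fourier transform.

    Slices the signal into overlapping Hann-windowed frames and takes the
    magnitude of the real FFT of each frame.
    Returns an array of shape (frequency bins, frames).
    """
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop: i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1)).T
```

A call at a given frequency shows up as a bright horizontal ridge in this image, which is what the AED and ridge-extraction stages then operate on.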
Abstract:
There is a well-founded ethical concern at present regarding the question 'How can we include everybody's voice equally in the framing of reviews?' This paper is a response to the complexities that inhere in that question. It is not about Review of Educational Research (RER) as a specific site but about the systems of reasoning that construct the opening question about reviews and that suggest possible answers, including the response: 'What is voice?'
Abstract:
In this paper, downscaling models are developed using a support vector machine (SVM) for obtaining projections of monthly mean maximum and minimum temperatures (T-max and T-min) at the river-basin scale. The effectiveness of the model is demonstrated through application to downscaling the predictands for the catchment of the Malaprabha reservoir in India, which is considered a climatically sensitive region. The probable predictor variables are extracted from (1) the National Centers for Environmental Prediction (NCEP) reanalysis dataset for the period 1978-2000, and (2) simulations from the third-generation Canadian Coupled Global Climate Model (CGCM3) for emission scenarios A1B, A2, B1 and COMMIT for the period 1978-2100. The predictor variables are classified into three groups, namely A, B and C. Large-scale atmospheric variables such as air temperature and zonal and meridional wind velocities at 925 mb, which are often used for downscaling temperature, are considered as predictors in Group A. Surface flux variables such as latent heat (LH), sensible heat, shortwave radiation and longwave radiation fluxes, which control the temperature of the Earth's surface, are tried as plausible predictors in Group B. Group C comprises all the predictor variables in Groups A and B. Scatter plots and cross-correlations are used to verify the reliability of the CGCM3 simulation of the predictor variables and to study the predictor-predictand relationships. The impact of trends in the predictor variables on downscaled temperature was studied. The predictor air temperature at 925 mb showed an increasing trend, while the rest of the predictors showed no trend. The performance of the SVM models that were developed, one for each combination of predictor group, predictand, calibration period and location-based stratification (land; land and ocean) of climate variables, was evaluated. 
In general, the models that use predictor variables pertaining to the land surface improved the performance of the SVM models for downscaling T-max and T-min.
Abstract:
While frame-invariant solutions for arbitrarily large rotational deformations have been reported through the orthogonal matrix parametrization, derivation of such solutions purely through a rotation vector parametrization, which uses only three parameters and provides a parsimonious storage of rotations, is novel and constitutes the subject of this paper. In particular, we employ interpolations of relative rotations and a new rotation vector update for a strain-objective finite element formulation in the material framework. We show that the update provides either the desired rotation vector or its complement. This rules out an additive interpolation of total rotation vectors at the nodes. Hence, interpolations of relative rotation vectors are used. Through numerical examples, we show that combining the proposed update with interpolations of relative rotations yields frame-invariant and path-independent numerical solutions. Advantages of the present approach vis-a-vis the updated Lagrangian formulation are also analyzed.
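A rotation vector encodes an axis and an angle in three parameters. A minimal NumPy sketch of the map from a rotation vector to a rotation matrix (the Rodrigues formula) and of the complement rotation vector mentioned above is given below; this illustrates the parametrization only, not the paper's finite element update:

```python
import numpy as np

def rotation_matrix(theta_vec):
    """Rotation matrix from a rotation vector via the Rodrigues formula.

    The direction of theta_vec is the rotation axis, its norm the angle.
    """
    angle = np.linalg.norm(theta_vec)
    if angle < 1e-12:
        return np.eye(3)
    k = theta_vec / angle                         # unit axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])            # skew-symmetric cross-product matrix
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def complement(theta_vec):
    """Complement rotation vector: angle 2*pi - |theta| about the opposite axis.

    Represents the same rotation; assumes a nonzero rotation vector.
    """
    angle = np.linalg.norm(theta_vec)
    return -(2 * np.pi - angle) * theta_vec / angle
```

Because a rotation vector and its complement map to the same rotation matrix, naively averaging or additively interpolating total rotation vectors at different nodes is ambiguous, which is why the paper interpolates relative rotation vectors instead.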
Abstract:
This study investigates the potential of a Relevance Vector Machine (RVM)-based approach to predict the ultimate capacity of laterally loaded piles in clay. RVM is a sparse approximate Bayesian kernel method; it can be seen as a probabilistic version of the support vector machine. It provides much sparser regressors without compromising performance, and kernel bases give a small but worthwhile improvement in performance. The RVM model outperforms the two other models considered on the root-mean-square error (RMSE) and mean absolute error (MAE) performance criteria. It also estimates the prediction variance. The results presented in this paper clearly highlight that the RVM is a robust tool for prediction of the ultimate capacity of laterally loaded piles in clay.
Abstract:
In this paper, we present a new approach for velocity vector imaging and time-resolved measurements of strain rates in the wall of human arteries using MRI and we prove its feasibility on two examples: in vitro on a phantom and in vivo on the carotid artery of a human subject. Results point out the promising potential of this approach for investigating the mechanics of arterial tissues in vivo.
Abstract:
A forest of quadtrees is a refinement of a quadtree data structure that is used to represent planar regions. A forest of quadtrees provides space savings over regular quadtrees by concentrating vital information. The paper presents some of the properties of a forest of quadtrees and studies the storage requirements for the case in which a single 2m × 2m region is equally likely to occur in any position within a 2n × 2n image. Space and time efficiency are investigated for the forest-of-quadtrees representation as compared with the quadtree representation for various cases.
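The base representation that a forest of quadtrees refines can be sketched as a recursive four-way split of a square binary image. This is an illustration of the plain quadtree only; the forest variant (not shown) would store selected subtrees independently to concentrate the vital information:

```python
def build_quadtree(image, x=0, y=0, size=None):
    """Build a quadtree for a square binary image whose side is a power of 2.

    A uniform block collapses to a leaf (0 or 1); a mixed block becomes an
    internal node, represented as a list [NW, NE, SW, SE] of four subtrees.
    """
    if size is None:
        size = len(image)
    first = image[y][x]
    if size == 1 or all(image[y + dy][x + dx] == first
                        for dy in range(size) for dx in range(size)):
        return first                      # uniform block -> leaf
    half = size // 2
    return [build_quadtree(image, x, y, half),            # NW quadrant
            build_quadtree(image, x + half, y, half),     # NE quadrant
            build_quadtree(image, x, y + half, half),     # SW quadrant
            build_quadtree(image, x + half, y + half, half)]  # SE quadrant
```

A single 2^m-by-2^m region embedded in a 2^n-by-2^n image, as in the storage analysis above, collapses to a handful of leaves whose number depends on how the region aligns with the quadrant grid.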
Abstract:
Following the method of Ioffe and Smilga, the propagation of the baryon current in an external constant axial-vector field is considered. The close similarity of the operator-product expansion with and without an external field is shown to arise from the chiral invariance of gauge interactions in perturbation theory. Several sum rules corresponding to various invariants, both for the nucleon and the hyperons, are derived. The analysis of the sum rules is carried out by two independent methods, one called the ratio method and the other the continuum method, paying special attention to the nondiagonal transitions induced by the external field between the ground state and excited states. Up to operators of dimension six, two new external-field-induced vacuum expectation values enter the calculations. Previous work determining these expectation values from PCAC (partial conservation of axial-vector current) is utilized. Our determination from the sum rules of the nucleon axial-vector renormalization constant GA, as well as the Cabibbo coupling constants in the SU3-symmetric limit (ms=0), is in reasonable accord with the experimental values. Uncertainties in the analysis are pointed out. The case of broken flavor SU3 symmetry is also considered. While in the ratio method the results are stable under variation of the fiducial interval of the Borel mass parameter over which the left-hand side and the right-hand side of the sum rules are matched, in the continuum method the results are less stable. Another set of sum rules determines the value of the linear combination 7F-5D to be ≊0, or D/(F+D)≊7/12.
Abstract:
To detect errors in decision tables one needs to decide whether a given set of constraints is feasible. This paper describes an algorithm to do so when the constraints are linear in variables that take only integer values. Decision tables with such constraints occur frequently in business data processing and in nonnumeric applications. The aim of the algorithm is to exploit the abundance of very simple constraints that occur in typical decision table contexts. Essentially, the algorithm is a backtrack procedure in which the solution space is pruned using the set of simple constraints. After some simplifications, the simple constraints are captured in an acyclic directed graph with weighted edges. Further, only those partial vectors are considered for extension which can be extended to assignments that at least satisfy the simple constraints; this is how pruning of the solution space is achieved. For every partial assignment considered, the graph representation of the simple constraints provides a lower bound for each variable that has not yet been assigned a value. These lower bounds play a vital role in the algorithm, and they are obtained efficiently by updating older lower bounds. The present algorithm also incorporates a check of whether or not an (m - 2)-ary vector can be extended to a solution vector of m components, thereby reducing backtracking by one component.
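The pruning idea can be illustrated with a small sketch: a backtracking feasibility test over bounded integer domains that discards any partial vector whose optimistic completion cannot satisfy the constraints. This illustrates the general strategy only, not the paper's algorithm, which derives its bounds from a weighted graph of the simple constraints:

```python
def feasible(domains, constraints):
    """Backtracking feasibility test for linear integer constraints.

    domains: one iterable of candidate integer values per variable.
    constraints: list of (coeffs, bound) pairs meaning sum(c * x) >= bound.
    A partial assignment is pruned as soon as even the most optimistic
    completion (each unassigned variable at its best domain value) fails
    some constraint.
    """
    def extend(partial):
        i = len(partial)
        if i == len(domains):
            return all(sum(c * x for c, x in zip(cs, partial)) >= b
                       for cs, b in constraints)
        for v in domains[i]:
            candidate = partial + [v]
            # Optimistic bound: assigned part plus the best each free variable can add.
            ok = all(sum(c * x for c, x in zip(cs, candidate))
                     + sum(max(c * d for d in domains[j])
                           for j, c in enumerate(cs) if j >= len(candidate)) >= b
                     for cs, b in constraints)
            if ok and extend(candidate):
                return True
        return False

    return extend([])
```

Difference constraints of the form x0 - x1 >= 1 (the "simple" kind the paper exploits) are expressed here as coefficient vectors like ([1, -1], 1).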
Abstract:
We address the issue of complexity in vector quantization (VQ) of wide-band speech LSF (line spectrum frequency) parameters. The recently proposed switched split VQ (SSVQ) method provides better rate-distortion (R/D) performance than the traditional split VQ (SVQ) method, even with lower computational complexity, but at the expense of much higher memory. We develop the two-stage SVQ (TsSVQ) method, by which we gain both memory and computational advantages and still retain good R/D performance. The proposed TsSVQ method uses a full-dimensional quantizer in its first stage, exploiting all the higher-dimensional coding advantages, and then uses an SVQ method for quantizing the residual vector in the second stage so as to reduce complexity. We also develop a transform-domain residual coding method within this two-stage architecture that further reduces computational complexity. To design an effective residual codebook in the second stage, variance normalization of Voronoi regions is carried out, which leads to the design of two new methods, referred to as normalized two-stage SVQ (NTsSVQ) and normalized two-stage transform-domain SVQ (NTsTrSVQ). These two new methods have complementary strengths and are therefore combined in a switched VQ mode, which leads to further improvement in R/D performance while retaining the low complexity requirement. We evaluate the performance of the new methods for wide-band speech LSF parameter quantization and show their advantages over the established SVQ and SSVQ methods.
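The two-stage structure described above can be sketched with NumPy: a full-dimensional first-stage quantizer followed by split VQ of the residual. The codebooks in the test are toy values; real codebooks would be trained (e.g. with the LBG algorithm) on LSF data:

```python
import numpy as np

def nearest(codebook, v):
    """Index of the nearest codevector under squared Euclidean distance."""
    return int(np.argmin(((codebook - v) ** 2).sum(axis=1)))

def two_stage_split_vq(v, stage1_codebook, split_codebooks):
    """Two-stage split VQ: full-vector quantization, then split VQ of the residual.

    stage1_codebook: (N, d) full-dimensional first-stage codebook.
    split_codebooks: list of (Ni, di) codebooks, one per sub-vector, sum(di) == d.
    Returns the chosen indices and the reconstructed vector.
    """
    i1 = nearest(stage1_codebook, v)
    residual = v - stage1_codebook[i1]
    indices = [i1]
    reconstruction = stage1_codebook[i1].copy()
    start = 0
    for cb in split_codebooks:
        d = cb.shape[1]
        j = nearest(cb, residual[start:start + d])   # quantize this sub-vector
        indices.append(j)
        reconstruction[start:start + d] += cb[j]
        start += d
    return indices, reconstruction
```

Splitting the residual keeps the second-stage search cost proportional to the sum of the sub-codebook sizes rather than their product, which is the complexity saving the abstract refers to.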
Abstract:
Over the past three decades, literary and cultural studies have become increasingly aware of the complex nature of the relationship between science and art. Today the study of these two cultures forms a field of its own, in which their relationship is examined above all as a dynamic interaction that reflects the language, values and ideological content of our culture. Unlike earlier views, which treat science and art as more or less opposed endeavours, current research proceeds from the assumption that they are culturally constructed discourses which often face similar problems in modelling reality, even though the methods they employ differ. Among the aspects of this relationship, my dissertation concentrates on the language used by popularized science writing (by, among others, Paul Davies, James Gleick and Richard Dawkins) and the devices employed by fiction that draws its ideas from the natural sciences (by, among others, Jeanette Winterson, Tom Stoppard and Richard Powers), building on a textual analysis of style and themes in a corpus of more than 30 works. With regard to popular science writing, my aim is to show that its language relies to a considerable degree on structures that make it possible to present arguments about reality as persuasively as possible. Many figures defined by classical rhetoric play an important role in this task, because they help to bind together the content and the form of what is said: the use of rhetorical figures is thus not merely a stylistic device, but often also crystallizes the philosophical assumptions behind the arguments and helps to establish the logic of the argumentation. 
Because many earlier studies have concentrated solely on the role of metaphor in scientific arguments, this dissertation seeks to broaden the field by analysing the use of other kinds of figures as well. I also show that the use of rhetorical figures forms a point of contact with fiction that draws on scientific ideas. Whereas popularized science uses rhetoric to strengthen both its argumentative and its literary qualities, such fiction depicts science in ways that often mirror the linguistic structures of science writing. Conversely, it is also possible to see how literary devices are reflected in the narrative strategies and language of popularized science, testifying to the dynamic interaction of the two cultures. Comparing the rhetorical elements of contemporary popular science with the devices of fiction also shows how science and art take part in a conversation about the meaning of certain fundamental concepts of our culture, such as identity, knowledge and time. In this way it becomes possible to see that both are fundamental parts of the meaning-making process through which scientific ideas, as well as the great questions of human life, acquire their culturally constructed meanings.
Abstract:
This paper proposes a multilevel inverter configuration which produces a hexagonal voltage space vector structure in the lower modulation region and a 12-sided polygonal space vector structure in the overmodulation region. A conventional multilevel inverter produces 6n ± 1 (n = odd) harmonics in the phase voltage during overmodulation and in the extreme square-wave mode of operation. However, this inverter produces a 12-sided polygonal space vector location, leading to the elimination of 6n ± 1 (n = odd) harmonics in the overmodulation region, extending to a final 12-step mode of operation with a smooth transition. The benefits of this arrangement are lower losses and reduced torque pulsation in an induction motor drive fed from this converter at higher modulation indices. The inverter is fabricated by using three conventional cascaded two-level inverters with asymmetric dc-bus voltages. A comparative simulation study of the harmonic distortion in the phase voltage and the associated losses in conventional multilevel inverters and the proposed inverter is presented. Experimental validation on a prototype shows that the proposed converter is suitable for high-power applications because of its low harmonic distortion and low losses.