957 results for region-based algorithms


Relevance: 40.00%

Publisher:

Abstract:

Bird sex determination using molecular methods has proved to be a valuable tool in different studies. Although it is possible to sex most birds by coupling the CHD assay with other available methods, no sex-determining gene like the mammalian SRY has been identified in birds. The male hypermethylated (MHM) region on the Z chromosome has been found to be hypermethylated in males and hypomethylated in females in birds of the order Galliformes. We analyzed DNA from the feathers of 50 adult chickens to verify the methylation pattern of the MHM region by PCR with the restriction enzyme HpaII (a method named the MHM assay). The results, visualized on agarose gel, were compared with PCR amplification of the CHD-Z and CHD-W genes (polyacrylamide gel) and with the birds' phenotype. All 25 males showed hypermethylation of the MHM region, and all 25 females showed hypomethylation. Sexing by the MHM assay was in accordance with the phenotype and with CHD sexing. To our knowledge, this is the first study to use the MHM region for sexing birds. Although the precise role of the MHM region in sex determination is still unclear, it could be a universal marker for sexing birds and may be involved in sex determination through its influence on transcriptional processes. The MHM assay could be a good alternative to the CHD assay in developmental studies.

Relevance: 40.00%

Publisher:

Abstract:

Rabies virus (RABV) isolates from two species of canids and three species of bats were analyzed by comparing the C-terminal region of the G gene and the G-L intergenic region of the virus genome. Intercluster identities for the genetic sequences of the isolates showed both regions to be poorly conserved. Phylogenetic trees were generated by the neighbor-joining and maximum parsimony methods, and the results were found to agree between the two methods for both regions. Putative amino acid sequences obtained from the G gene were also analyzed, and genetic markers were identified. Our results suggest that different genetic lineages of RABV are adapted to different animal species in Brazil.
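
A minimal sketch of the neighbor-joining step, using Biopython's distance-based tree constructor; the taxon names and pairwise distances below are invented placeholders, not the isolates or data of this study.

```python
# Sketch: neighbor-joining tree from pairwise genetic distances with
# Biopython. Taxon names and distance values are invented placeholders.
from Bio import Phylo
from Bio.Phylo.TreeConstruction import DistanceMatrix, DistanceTreeConstructor

names = ["canid_1", "canid_2", "bat_1", "bat_2"]
# Lower-triangular matrix of pairwise distances (diagonal included).
matrix = [
    [0.0],
    [0.02, 0.0],
    [0.15, 0.14, 0.0],
    [0.16, 0.15, 0.05, 0.0],
]
dm = DistanceMatrix(names, matrix)

tree = DistanceTreeConstructor().nj(dm)  # neighbor-joining
Phylo.draw_ascii(tree)
```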

Relevance: 40.00%

Publisher:

Abstract:

Short-nosed bandicoots, Isoodon, have undergone marked range contractions since European colonisation of Australia and are currently divided into many subspecies, the validity of which is debated. Discriminant function analysis of morphology and a phylogeny of Isoodon based on mtDNA control region sequences indicate a clear split between two of the three recognised species, I. macrourus and I. obesulus/auratus. However, while all previously recognised taxa within the I. obesulus/auratus group are morphologically distinct, I. auratus and I. obesulus are not phylogenetically distinct for mtDNA. The genetic divergence between I. obesulus and I. auratus (2.6%) is similar to that found among geographic isolates of the former (I. o. obesulus and I. o. peninsulae: 2.7%). Further, the divergence between geographically close populations of the two species (I. o. obesulus from Western Australia and I. a. barrowensis: 1.2%) is smaller than that among subspecies within I. auratus (I. a. barrowensis and I. auratus from northern Western Australia: 1.7%). A newly discovered population of Isoodon in the Lamb Range, far north Queensland, sympatric with a population of I. m. torosus, is shown to represent a 350 km range extension of I. o. peninsulae. It seems plausible that what are currently considered two species, I. obesulus and I. auratus, were once a single continuous species, now represented by isolated populations that have diverged morphologically through adaptation to the diverse environments across their range. The taxonomy of these populations is discussed in relation to their morphological distinctiveness and genetic similarity.
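
The divergence figures quoted above are pairwise genetic distances; a minimal sketch of the simplest such measure, the uncorrected p-distance, is given below. The sequences are invented, and the study may well have used a model-corrected distance instead.

```python
# Sketch: uncorrected p-distance, the proportion of differing sites
# between two aligned sequences. The fragments below are invented.
def p_distance(seq_a: str, seq_b: str) -> float:
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if "-" not in (a, b)]
    diffs = sum(1 for a, b in pairs if a != b)
    return diffs / len(pairs)

a = "ACGTACGTACGTACGTACGT"
b = "ACGTACGAACGTACGTACGT"          # one substitution in 20 sites
print(f"{p_distance(a, b):.1%}")    # -> 5.0%
```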

Relevance: 40.00%

Publisher:

Abstract:

This paper delineates the development of a prototype hybrid knowledge-based system for the optimum design of liquid retaining structures, coupling a blackboard architecture, the expert system shell VISUAL RULE STUDIO, and a genetic algorithm (GA). Through custom-built interactive graphical user interfaces in a user-friendly environment, the user is guided through the design process, which includes preliminary design, load specification, model generation, finite element analysis, code compliance checking, and member sizing optimization. For structural optimization, the GA is applied to the minimum-cost design of structural systems with discrete reinforced concrete sections. The design of a typical liquid retaining structure is illustrated as an example. The results demonstrate remarkably fast convergence: near-optimal solutions are obtained after exploring only a small portion of the search space. The system can act as a consultant, assisting novice designers in the design of liquid retaining structures.
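
A minimal sketch of the GA step under stated assumptions: chromosomes index a table of discrete sections, and constraint violations are handled by a cost penalty. The section table, member demands, and GA settings are invented placeholders, not the paper's design data.

```python
# Sketch of GA-based minimum-cost selection of discrete reinforced-
# concrete sections under a capacity constraint. All data are invented.
import random

SECTIONS = [  # (cost per metre, moment capacity) of each discrete section
    (120, 50), (150, 70), (185, 95), (230, 130), (290, 175),
]
DEMANDS = [60, 90, 45, 120]   # required capacity of each member
PENALTY = 1000                # cost penalty per unit of capacity shortfall

def fitness(chromo):
    """Total cost plus penalty for members whose section is too weak."""
    cost = sum(SECTIONS[g][0] for g in chromo)
    shortfall = sum(max(0, d - SECTIONS[g][1]) for g, d in zip(chromo, DEMANDS))
    return cost + PENALTY * shortfall

def evolve(pop_size=40, generations=100, p_mut=0.1):
    pop = [[random.randrange(len(SECTIONS)) for _ in DEMANDS]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(DEMANDS))  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < p_mut:              # mutation
                child[random.randrange(len(child))] = random.randrange(len(SECTIONS))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```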

Relevance: 40.00%

Publisher:

Abstract:

Multiprocessors, particularly in the form of multicores, are becoming standard building blocks for executing reliable software, but their use for applications with hard real-time requirements is non-trivial. Well-known real-time scheduling algorithms from the uniprocessor context (Rate-Monotonic [1] or Earliest-Deadline-First [1]) do not perform well on multiprocessors. For this reason, the real-time systems community has produced new algorithms specifically for multiprocessors. Meanwhile, a proposal [2] exists for extending the Ada language with new basic constructs that can be used to implement new real-time scheduling algorithms; the family of task-splitting algorithms is one of those emphasized in the proposal [2]. Consequently, assessing whether existing task-splitting multiprocessor scheduling algorithms can be implemented with these constructs is paramount. In this paper we present a list of state-of-the-art task-splitting multiprocessor scheduling algorithms and, for each of them, present detailed Ada code that uses the new constructs.
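
A minimal sketch of the idea behind task splitting, under simplifying assumptions: tasks are packed onto processors by utilization, and a task that does not fit entirely on the current processor is split, with the leftover share spilled onto the next. This illustrates the concept only; the algorithms surveyed in the paper are expressed in Ada with the proposed constructs.

```python
# Sketch: next-fit packing with task splitting. Each task fills the
# current CPU and spills any leftover utilization onto the next one.
def split_tasks(utilizations, num_cpus, capacity=1.0):
    assignment = [[] for _ in range(num_cpus)]   # (task_id, share) per CPU
    cpu, used = 0, 0.0
    for tid, u in enumerate(utilizations):
        remaining = u
        while remaining > 1e-12:
            if capacity - used < 1e-12:          # current CPU is full
                cpu += 1
                used = 0.0
                if cpu >= num_cpus:
                    raise ValueError("task set does not fit")
            share = min(remaining, capacity - used)
            assignment[cpu].append((tid, share))
            used += share
            remaining -= share
    return assignment

print(split_tasks([0.6, 0.6, 0.5, 0.3], num_cpus=2))
# task 1 is split: 0.4 of it runs on CPU 0 and 0.2 on CPU 1
```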

Relevance: 40.00%

Publisher:

Abstract:

Thesis presented in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the subject of Electrical and Computer Engineering

Relevance: 40.00%

Publisher:

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then to decompose a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate.

Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, however, the number of substances and their reflectances are not known, and hyperspectral unmixing falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among the sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is attained only when the sources are independent. This is no longer true for dependent abundance fractions; nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].

Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution.

Aiming at a lower computational complexity, some algorithms, such as the pixel purity index (PPI) [35] and N-FINDR [40], still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The remaining pixels are rejected when their spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace, and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].

In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra, termed vertex component analysis (VCA). First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices, the latter based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex, and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet its computational complexity is between one and two orders of magnitude lower than that of N-FINDR.

The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
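
A simplified sketch of the projection idea behind VCA: repeatedly project the data onto a direction orthogonal to the subspace spanned by the endmembers found so far, and take the most extreme pixel as the next endmember. Dimensionality reduction, noise estimation, and the SNR-dependent projections of the full algorithm are omitted here.

```python
# Simplified sketch of VCA-style endmember extraction by iterative
# orthogonal projection. Not the full algorithm: no MNF/PCA step,
# no noise estimation, no SNR-dependent projective projection.
import numpy as np

def vca_sketch(R, p, seed=0):
    """R: (bands, pixels) data matrix; p: number of endmembers."""
    rng = np.random.default_rng(seed)
    bands, _ = R.shape
    E = np.zeros((bands, p))                 # endmember signatures
    idx = []
    for i in range(p):
        if i == 0:
            f = rng.standard_normal(bands)   # random initial direction
        else:
            A = E[:, :i]
            proj = A @ np.linalg.pinv(A)     # projector onto span(A)
            # direction orthogonal to the endmembers found so far
            f = (np.eye(bands) - proj) @ rng.standard_normal(bands)
        v = f @ R                            # project all pixels onto f
        j = int(np.argmax(np.abs(v)))        # most extreme pixel
        E[:, i] = R[:, j]
        idx.append(j)
    return E, idx

# Toy mixture: 3 random endmembers, convex abundances, 1000 pixels.
rng = np.random.default_rng(1)
M = rng.random((50, 3))
a = rng.dirichlet(np.ones(3), size=1000).T
R = M @ a
E, idx = vca_sketch(R, p=3)
print("indices of extracted pure pixels:", idx)
```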

Relevance: 40.00%

Publisher:

Abstract:

The genomic sequences of the envelope/non-structural protein 1 junction region (E/NS1) of 84 DEN-1 and 22 DEN-2 isolates from Brazil were determined. Most of these strains were isolated between 1995 and 2001, in endemic regions and in regions of recent dengue transmission in São Paulo State. The sequence data for DEN-1 and DEN-2 used in the phylogenetic and split decomposition analyses also include sequences deposited in GenBank from different regions of Brazil and of the world. Phylogenetic analyses were performed using both maximum likelihood and Bayesian approaches. The results for both the DEN-1 and DEN-2 data are ambiguous, and support for most tree bipartitions is generally poor, suggesting that the E/NS1 region does not contain enough information to recover the phylogenetic relationships among the DEN-1 and DEN-2 sequences used in this study. The network graph generated in the split decomposition analysis of DEN-1 shows no evidence of grouping sequences by country, region, or clade. While the network for DEN-2 also shows ambiguities among DEN-2 sequences, it suggests that the Brazilian sequences may belong to distinct subtypes of genotype III.

Relevance: 40.00%

Publisher:

Abstract:

This paper addresses the challenging task of computing multiple roots of a system of nonlinear equations. A repulsion algorithm that invokes the Nelder-Mead (N-M) local search method and uses a penalty-type merit function based on the error function, known as 'erf', is presented. In the N-M algorithm context, different strategies are proposed to enhance the quality of the solutions and improve the overall efficiency. The main goal of this paper is to use a two-level factorial design of experiments to analyze the statistical significance of the observed differences in selected performance criteria produced when testing different strategies in the N-M based repulsion algorithm.
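
A minimal sketch of the repulsion idea under stated assumptions: the merit function is the squared residual of the system plus an erf-shaped penalty around each root already found, minimized from random starts with Nelder-Mead. The penalty form and the constants beta and rho are illustrative choices, not the paper's exact merit function.

```python
# Sketch: multistart Nelder-Mead with an erf-based repulsion penalty
# around previously found roots. Penalty form and constants are
# illustrative assumptions, not the paper's merit function.
import numpy as np
from scipy.optimize import minimize
from scipy.special import erf

def F(x):  # example system with the four roots (+-1, +-1)
    return np.array([x[0] ** 2 - 1.0, x[1] ** 2 - 1.0])

def merit(x, found, beta=100.0, rho=3.0):
    base = float(np.sum(F(x) ** 2))
    # 1 - erf(rho*d) is ~1 near a found root and ~0 far from it
    repulsion = sum(1.0 - erf(rho * np.linalg.norm(x - r)) for r in found)
    return base + beta * repulsion

found = []
rng = np.random.default_rng(0)
for _ in range(20):                      # multistart with repulsion
    x0 = rng.uniform(-2, 2, size=2)
    res = minimize(merit, x0, args=(found,), method="Nelder-Mead")
    if np.sum(F(res.x) ** 2) < 1e-6 and all(
        np.linalg.norm(res.x - r) > 1e-3 for r in found
    ):
        found.append(res.x)

print(np.round(np.array(found), 3))      # expect the four roots (+-1, +-1)
```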

Relevance: 40.00%

Publisher:

Abstract:

INTRODUCTION: The precise identification of the genetic variants of the dengue virus is important for understanding its dispersion and virulence patterns and for identifying the strains responsible for epidemic outbreaks. This study investigated the genetic variants of the capsid-premembrane junction region fragment in dengue virus serotypes 1 and 2 (DENV-1 and DENV-2). METHODS: Samples from 11 municipalities in the State of Paraná, Brazil, were provided by the Central Laboratory of Paraná. They were isolated in the C6/36 (Aedes albopictus) cell line and were positive by indirect immunofluorescence. Ribonucleic acid (RNA) extracted from these samples was submitted to reverse transcription polymerase chain reaction (RT-PCR) and nested PCR. RESULTS: RT-PCR revealed that 4 of the samples were co-infected with both serotypes. The isolated DENV-1 sequences were 95-100% similar to the sequences of other serotype 1 strains deposited in GenBank; similarly, the isolated DENV-2 sequences were 98-100% similar to other serotype 2 sequences in GenBank. According to our neighbor-joining tree, all strains obtained in this study belonged to genotype V of DENV-1. The DENV-2 strains, by contrast, belonged to the American/Asian genotype. CONCLUSIONS: The monitoring of circulating strains is an important tool for detecting the migration of virus subtypes involved in dengue epidemics.

Relevance: 40.00%

Publisher:

Abstract:

Clayish earth-based mortars can be considered eco-efficient products for indoor plastering, since they can help improve important aspects of building performance and sustainability. Apart from being products with low embodied energy compared to other types of mortars used for interior plastering, mainly due to the use of raw clay as a natural binder, earth-based plasters may contribute significantly to the health and comfort of inhabitants. Owing to the high hygroscopicity of clay minerals, earth-based mortars present a high adsorption and desorption capacity, particularly when compared to other types of mortars for interior plastering. This capacity allows earth-based plasters to act as a moisture buffer, balancing the relative humidity of the indoor environment and, simultaneously, acting as a passive removal material that improves air quality. Earth-based plasters may therefore also passively promote the energy efficiency of buildings, since they may reduce the need for mechanical ventilation and air conditioning. This study is part of ongoing research on earth-based plasters and focuses on mortars specifically formulated with soils extracted from the Portuguese 'Barrocal' region, in the Algarve sedimentary basin. This region presents high potential for interior plastering due to its geomorphology, which promotes the occurrence of illitic soils characterized by a high adsorption capacity and low expansibility. More specifically, this study aims to assess how the clayish earth to sand ratio of the mortar formulation influences the physical and mechanical properties of the plasters. For this assessment, four mortars were formulated with different volumetric proportions of clayish earth and siliceous sand. The results of the physical and mechanical characterization confirmed the significantly low linear shrinkage of all four mortars, as well as their remarkable adsorption-desorption capacity. These results presented a positive correlation with the mortars' clayish earth content and are consistent with the mineralogical analysis, which confirmed illite as the prevalent clay mineral in the clayish earth used in this study. Regarding mechanical resistance, despite the promising results of the adhesion test, the flexural and compressive strength results suggest that the mechanical resistance of these mortars should be slightly improved. Considering the present results, this improvement may be achieved by formulating mortars with higher clayish earth content or, alternatively, by adding natural fibers to the mortar formulation, a common practice for this type of mortar. Both options will be investigated in future research.

Relevance: 40.00%

Publisher:

Abstract:

Immune systems have inspired approaches to several computational problems in recent years. This paper focuses on enhancing the accuracy of behavioural biometric authentication algorithms by applying them more than once, with different thresholds, so as to first simulate the protection provided by the skin and then look for known outside entities, as lymphocytes do. The paper describes the principles that support applying this approach to keystroke dynamics, a biometric authentication technology that decides on the legitimacy of a user based on the typing pattern captured as the username and/or password is entered. As a proof of concept, the accuracy of one keystroke dynamics algorithm, applied to five legitimate users of a system, is calculated for both the traditional and the immune-inspired approaches, and the results are compared.
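
A minimal sketch of the double check under stated assumptions: the same keystroke-timing distance is tested against a permissive 'skin' threshold and then a stricter 'lymphocyte' threshold. The distance measure and threshold values are invented for illustration, not the paper's calibrated settings.

```python
# Sketch: two-layer, immune-inspired keystroke check. The same score
# passes a permissive "skin" threshold, then a stricter "lymphocyte"
# threshold. Distance measure and thresholds are invented examples.
import statistics

def distance(sample, template):
    """Mean absolute deviation between a typing sample and the stored
    template of per-key timings (milliseconds)."""
    return statistics.fmean(abs(s - t) for s, t in zip(sample, template))

def authenticate(sample, template, skin=40.0, lymphocyte=25.0):
    d = distance(sample, template)
    if d > skin:            # outer layer: reject clear outsiders
        return False
    return d <= lymphocyte  # inner layer: stricter scrutiny of the rest

template = [105, 98, 130, 88, 115, 92]
print(authenticate([110, 95, 128, 90, 118, 96], template))     # True-ish
print(authenticate([160, 150, 190, 140, 170, 150], template))  # False
```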

Relevance: 40.00%

Publisher:

Abstract:

The chemical composition of propolis is affected by environmental factors and harvest season, making it difficult to standardize its extracts for medicinal usage. By detecting a typical chemical profile associated with propolis from a specific production region or season, certain types of propolis may be selected to obtain a specific pharmacological activity. In this study, propolis from three agroecological regions (plain, plateau, and highlands) of southern Brazil, collected over the four seasons of 2010, was investigated through a novel NMR-based metabolomics data analysis workflow. Chemometrics and machine learning algorithms (PLS-DA and RF), including methods to estimate variable importance in classification, were used. The machine learning and feature selection methods permitted the construction of classification models with high accuracy (>75%, reaching 90% in the best case), discriminating samples better by collection season than by harvest region. PLS-DA and RF allowed the identification of biomarkers for sample discrimination, expanding the set of discriminating features and adding relevant information for the identification of the class-determining metabolites. The NMR-based metabolomics analytical platform, coupled with bioinformatics tools, allowed the characterization and classification of Brazilian propolis samples by the metabolite signatures of important compounds, i.e., chemical fingerprint, harvest season, and production region.
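
A minimal sketch of the two classifiers on a samples-by-buckets spectral matrix, assuming PLS-DA is implemented as PLS regression on one-hot class labels (a common convention) and using random-forest feature importances as biomarker candidates. The data are synthetic placeholders, not the propolis spectra.

```python
# Sketch: PLS-DA (PLS regression on one-hot labels) and a random
# forest on a samples x NMR-buckets matrix. Synthetic data only.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import LabelBinarizer

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 200))            # 60 samples x 200 buckets
y = np.repeat(["summer", "autumn", "winter", "spring"], 15)
X[y == "winter", 10] += 2.0                   # planted season marker

lb = LabelBinarizer()
plsda = PLSRegression(n_components=2).fit(X, lb.fit_transform(y))
pred = lb.classes_[plsda.predict(X).argmax(axis=1)]
print("PLS-DA training accuracy:", (pred == y).mean())

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:5]
print("top discriminating buckets:", top)     # bucket 10 should rank first
```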

Relevance: 40.00%

Publisher:

Abstract:

The algorithmic approach to data modelling has developed rapidly in recent years; in particular, methods based on data mining and machine learning have been used in a growing number of applications. These methods follow a data-driven methodology, aiming to provide the best possible generalization and predictive ability rather than concentrating on the properties of the data model. One of the most successful groups of such methods is known as Support Vector algorithms. Following fruitful developments in applying Support Vector algorithms to spatial data, this paper introduces a new extension of the traditional support vector regression (SVR) algorithm. This extension allows the simultaneous modelling of environmental data at several spatial scales. The joint influence of environmental processes presenting different patterns at different scales is learned automatically from the data, providing the optimal mixture of short- and large-scale models. The method is adaptive to the spatial scale of the data. With this advantage, it can provide an efficient means of modelling the local anomalies that typically arise in the early phase of an environmental emergency. However, the proposed approach still requires some prior knowledge of the possible existence of such short-scale patterns, which is a possible limitation of the method for implementation in early warning systems. The purpose of this paper is to present the multi-scale SVR model and to illustrate its use with an application to the mapping of Cs-137 activity from measurements taken in the Briansk region following the Chernobyl accident.
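
A minimal sketch of the multi-scale idea, assuming the mixture takes the form of a weighted sum of two RBF kernels, one short-scale and one large-scale, plugged into a standard SVR. The kernel widths and mixing weight are fixed by hand here, whereas the paper learns the optimal mixture from data.

```python
# Sketch: SVR with a two-scale kernel, a weighted sum of a short-scale
# and a large-scale RBF. Widths and mixing weight are hand-picked
# illustrative values, not learned as in the paper.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVR

def two_scale_kernel(A, B, gamma_short=25.0, gamma_long=0.5, w=0.4):
    return w * rbf_kernel(A, B, gamma=gamma_short) + \
           (1 - w) * rbf_kernel(A, B, gamma=gamma_long)

# Toy 1-D field: a smooth large-scale trend plus one local anomaly.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=200).reshape(-1, 1)
y = np.sin(0.5 * x[:, 0]) + 2.0 * np.exp(-20 * (x[:, 0] - 4.0) ** 2)
y += 0.05 * rng.standard_normal(len(y))

model = SVR(kernel=two_scale_kernel, C=10.0, epsilon=0.01).fit(x, y)
grid = np.linspace(0, 10, 5).reshape(-1, 1)
print(np.round(model.predict(grid), 2))
```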