957 results for region-based algorithms
Abstract:
Machine learning is widely adopted to decode multi-variate neural time series, including electroencephalographic (EEG) and single-cell recordings. Recent solutions based on deep learning (DL) outperformed traditional decoders by automatically extracting relevant discriminative features from raw or minimally pre-processed signals. Convolutional Neural Networks (CNNs) have been successfully applied to EEG and are the most common DL-based EEG decoders in the state-of-the-art (SOA). However, current research is affected by several limitations. SOA CNNs for EEG decoding usually exploit deep and heavy structures, with the risk of overfitting small datasets, and their architectures are often defined empirically. Furthermore, CNNs are mainly validated by designing within-subject decoders. Crucially, the automatically learned features mainly remain unexplored; conversely, interpreting these features may be of great value for using decoders also as analysis tools, highlighting neural signatures underlying the different decoded brain or behavioral states in a data-driven way. Lastly, the SOA DL-based algorithms used to decode single-cell recordings rely on networks that are more complex, slower to train, and less interpretable than CNNs, and the use of CNNs with these signals has not been investigated. This PhD research addresses the previous limitations, with reference to P300 and motor decoding from EEG, and motor decoding from single-neuron activity. CNNs were designed to be light, compact, and interpretable. Moreover, multiple training strategies were adopted, including transfer learning, which can reduce training times and promote the application of CNNs in practice. Furthermore, CNN-based EEG analyses were proposed to study neural features in the spatial, temporal, and frequency domains, and proved to highlight and enhance relevant neural features related to P300 and motor states better than canonical EEG analyses.
Remarkably, these analyses could, in the future, be used to design novel EEG biomarkers for neurological or neurodevelopmental disorders. Lastly, CNNs were developed to decode single-neuron activity, providing a better compromise between performance and model complexity.
Abstract:
In pursuit of the European Union's ambitious target of achieving a carbon-neutral economy by 2050, researchers, vehicle manufacturers, and original equipment manufacturers have been at the forefront of exploring cutting-edge technologies for internal combustion engines. The introduction of these technologies has significantly increased the effort required to calibrate the models implemented in engine control units. Consequently, the development of tools that reduce the costs and time required during the experimental phases has become imperative. Additionally, to comply with ever-stricter limits on CO2 emissions, it is crucial to develop advanced control systems that enhance traditional engine management systems in order to reduce fuel consumption. Furthermore, the introduction of new homologation cycles, such as the real driving emissions cycle, compels manufacturers to bridge the gap between engine operation in laboratory tests and real-world conditions. Within this context, this thesis showcases the performance and cost benefits achievable through the implementation of an auto-adaptive closed-loop control system, leveraging in-cylinder pressure sensors in a heavy-duty diesel engine designed for mining applications. Additionally, the thesis explores the promising prospect of real-time self-adaptive machine learning models, particularly neural networks, to develop an automatic system, based on in-cylinder pressure sensors, for the precise calibration of the target combustion phase and optimal spark advance in spark-ignition engines. To facilitate the application of these combustion-process feedback-based algorithms in production, the thesis discusses the results obtained from the development of a cost-effective sensor for indirect cylinder pressure measurement.
Finally, to ensure the quality control of the proposed affordable sensor, the thesis provides a comprehensive account of the design and validation process for a piezoelectric washer test system.
Abstract:
The focus of this thesis is the application of different attitude determination algorithms to data acquired with MEMS sensors on a board provided by the University of Bologna. MEMS sensors are a very cheap option for measuring acceleration and angular velocity. Magnetometers based on the Hall effect can provide further data. Their disadvantage is that they suffer from considerable noise and drift, which can affect the results. The algorithms that have been used are: pitch and roll from the accelerometer, yaw from the magnetometer, attitude from the gyroscope, TRIAD, QUEST, Madgwick, Mahony, the Extended Kalman filter, and a GPS-aided INS Kalman filter. In this work the algorithms have been rewritten to fit the data provided by the MEMS sensors. The data collected by the board are acceleration on the three axes, angular velocity on the three axes, magnetic field on the three axes, and latitude, longitude, and altitude from the GPS. Several tests and comparisons have been carried out by installing the board on different vehicles operating in the air and on the ground. The conclusion that can be drawn from this study is that the Madgwick filter offers the best trade-off between the computational capability required and the results obtained. If attitude angles are obtained from accelerometers, gyroscopes, and magnetometers alone, inconsistent data are obtained in cases where high vibration levels occur. On the other hand, Kalman-filter-based algorithms require a high computational burden, while the TRIAD and QUEST algorithms do not perform as well as the filters.
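As a minimal illustration of the first two approaches listed above (pitch and roll from the accelerometer, then a tilt-compensated yaw from the magnetometer), the following Python sketch shows one common formulation. The sign conventions and axis definitions are assumptions for the example, not those of the board described in the thesis:

```python
import math

def pitch_roll_from_accel(ax, ay, az):
    """Static pitch and roll (rad) from the gravity vector measured
    by the accelerometer (body-frame x, y, z components)."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.sqrt(ay ** 2 + az ** 2))
    return pitch, roll

def yaw_from_mag(mx, my, mz, pitch, roll):
    """Tilt-compensated heading (rad): rotate the magnetic field
    vector into the horizontal plane, then take its direction."""
    mx_h = mx * math.cos(pitch) + mz * math.sin(pitch)
    my_h = (mx * math.sin(roll) * math.sin(pitch)
            + my * math.cos(roll)
            - mz * math.sin(roll) * math.cos(pitch))
    return math.atan2(-my_h, mx_h)

# Level board (gravity on z only), field along body x: yaw ~ 0.
p, r = pitch_roll_from_accel(0.0, 0.0, 9.81)
yaw = yaw_from_mag(0.3, 0.0, 0.4, p, r)
```

As the abstract notes, estimates built this way degrade under vibration, which is why the fused (Madgwick, Mahony, Kalman) filters are preferred in practice.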
Abstract:
Many studies have shown the considerable potential for the application of remote-sensing-based methods for deriving estimates of lake water quality. However, the reliable application of these methods across time and space is complicated by the diversity of lake types, sensor configuration, and the multitude of different algorithms proposed. This study tested one operational and 46 empirical algorithms sourced from the peer-reviewed literature that have individually shown potential for estimating lake water quality properties in the form of chlorophyll-a (algal biomass) and Secchi disc depth (SDD) (water transparency) in independent studies. Nearly half (19) of the algorithms were unsuitable for use with the remote-sensing data available for this study. The remaining 28 were assessed using the Terra/Aqua satellite archive to identify the best performing algorithms in terms of accuracy and transferability within the period 2001–2004 in four test lakes, namely Vänern, Vättern, Geneva, and Balaton. These lakes represent the broad continuum of large European lake types, varying in terms of eco-region (latitude/longitude and altitude), morphology, mixing regime, and trophic status. All algorithms were tested for each lake separately and combined to assess the degree of their applicability in ecologically different sites. None of the algorithms assessed in this study exhibited promise when all four lakes were combined into a single data set and most algorithms performed poorly even for specific lake types. A chlorophyll-a retrieval algorithm originally developed for eutrophic lakes showed the most promising results (R2 = 0.59) in oligotrophic lakes. Two SDD retrieval algorithms, one originally developed for turbid lakes and the other for lakes with various characteristics, exhibited promising results in relatively less turbid lakes (R2 = 0.62 and 0.76, respectively). 
The results presented here highlight the complexity associated with remotely sensed lake water quality estimates and the high degree of uncertainty due to various limitations, including the lake water optical properties and the choice of methods.
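As an illustration of how such empirical algorithms are typically assessed, the sketch below applies a hypothetical linear NIR/red band-ratio chlorophyll-a model to synthetic matchup data and computes the R² used as the performance metric above. All coefficients and data here are invented for the example, not values from the study:

```python
import numpy as np

def r_squared(obs, pred):
    """Coefficient of determination between observations and predictions."""
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - np.mean(obs)) ** 2)
    return 1.0 - ss_res / ss_tot

def band_ratio_chl(red, nir, a, b):
    """Hypothetical empirical chl-a (mg m^-3) from a NIR/red reflectance ratio."""
    return a * (nir / red) + b

# Synthetic matchups: satellite reflectances plus in-situ chl-a
# generated from the same ratio relation with added noise.
rng = np.random.default_rng(0)
red = rng.uniform(0.01, 0.05, 50)
nir = red * rng.uniform(0.5, 3.0, 50)
chl = 20.0 * (nir / red) - 5.0 + rng.normal(0.0, 1.0, 50)

pred = band_ratio_chl(red, nir, 20.0, -5.0)
r2 = r_squared(chl, pred)
```

In the study itself, the same kind of R² comparison was repeated per lake and over the combined data set, which is where transferability broke down.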
Abstract:
The Argentine hake, Merluccius hubbsi, a demersal-pelagic species found from Rio de Janeiro, Brazil to Tierra del Fuego, Argentina, has become an important target of the Brazilian bottom-trawler fleet since 2001. Earlier studies focusing on the species have suggested that more than one stock might occur off the Brazilian coast, in accordance with environmental features. In order to evaluate this hypothesis, fish were collected from four different areas of the Brazilian waters in which the hake is distributed, during the summers and winters of 1996-2001 and 2004; the females were used to analyze and compare spatial-temporal variations in ovarian maturation. Gonad indexes were also applied for the same purpose. Results indicate a north-south spawning gradient, running from summer at around 21°S to winter near 34°S, leading to the identification of two distinct stocks: one located between 21°S and 29°S (Southeastern stock) and the other between 29°S and 34°S (Southern stock), the latter shared with Uruguay and Argentina. The Brazilian stocks present clear signs of overexploitation, a situation calling for an urgent solution.
Abstract:
Identification of animals that are decomposing, or that have been run over or burnt, and thus cannot be visually identified is a problem in the surveillance and control of infectious diseases. Many of these animals are wild and represent a valuable source of information for epidemiologic research, as they may be carriers of an infectious agent. This article discusses the results obtained using a method for identifying mammals genetically by sequencing the mitochondrial DNA control region. Fourteen species were analyzed and identified. These included the main reservoirs and transmitters of rabies virus, namely canids, chiropterans, and primates. The results prove that this method of genetic identification is both efficient and simple, and that it can be used in the surveillance of infectious diseases that include mammals in their epidemiologic cycle, such as rabies.
Abstract:
Objective: The biochemical alterations between inflammatory fibrous hyperplasia (IFH) and normal tissues of buccal mucosa were probed by using the FT-Raman spectroscopy technique. The aim was to find the minimal set of Raman bands that would furnish the best discrimination. Background: Raman-based optical biopsy is a widely recognized potential technique for noninvasive real-time diagnosis. However, few studies have been devoted to the discrimination of very common subtle or early pathologic states, such as the inflammatory processes that are always present, for example, on cancer lesion borders. Methods: Seventy spectra of IFH from 14 patients were compared with 30 spectra of normal tissues from six patients. The statistical analysis was performed with principal components analysis and soft independent modeling of class analogy, with cross-validated, leave-one-out methods. Results: Bands close to 574, 1,100, 1,250 to 1,350, and 1,500 cm(-1) (mainly amino acid and collagen bands) showed the main intragroup variations, which are due to the acanthosis process in the IFH epithelium. The 1,200 (C-C aromatic/DNA), 1,350 (CH(2) bending/collagen I), and 1,730 cm(-1) (collagen III) regions presented the main intergroup variations. This finding was interpreted as originating in an extracellular matrix degeneration process occurring in the inflammatory tissues. The statistical analysis indicated that the best discrimination capability (sensitivity of 95% and specificity of 100%) was obtained by using the 530-580 cm(-1) spectral region. Conclusions: The existence of this narrow spectral window enabling the diagnosis of normal versus inflammatory tissue also has useful implications for an in vivo dispersive Raman setup for clinical applications.
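The principal components analysis step can be sketched as follows. The spectra, class intensities, and band shape below are synthetic stand-ins restricted to the 530-580 cm(-1) window highlighted above, with a band near 574 cm(-1) whose intensity differs between the two groups (numpy-only PCA via SVD):

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project mean-centred spectra onto their leading principal components."""
    Xc = X - X.mean(axis=0)
    # SVD of the centred data matrix: rows of Vt are the principal axes.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Synthetic Raman spectra on a 530-580 cm^-1 axis: two classes that
# differ in the intensity of a Gaussian band near 574 cm^-1.
rng = np.random.default_rng(1)
wn = np.linspace(530.0, 580.0, 100)
band = np.exp(-((wn - 574.0) / 4.0) ** 2)
normal = 1.0 * band + rng.normal(0.0, 0.05, (30, 100))
lesion = 2.0 * band + rng.normal(0.0, 0.05, (70, 100))
X = np.vstack([normal, lesion])

scores = pca_scores(X)  # first PC separates the two groups in this toy case
```

A classifier such as SIMCA would then be built on these scores, one class model per group, as in the study.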
Abstract:
Intergenic spacers of chloroplast DNA (cpDNA) are very useful in phylogenetic and population genetic studies of plant species. The non-coding trnE-trnT intergenic spacer of cpDNA was analyzed to assess the nucleotide sequence polymorphism of 16 Solanaceae species and to estimate its ability to contribute to the resolution of phylogenetic studies of this group. Multiple alignments of DNA sequences of the trnE-trnT intergenic spacer made it possible to identify nucleotide variability in this region, and the phylogeny was estimated by maximum parsimony and rooted with Ipomoea batatas (Convolvulaceae), the most closely related family. In addition, this intergenic spacer was tested for its phylogenetic ability to differentiate taxonomic levels. For this purpose, species from four other families were analyzed and compared with the Solanaceae species. The results confirmed polymorphism in the trnE-trnT region at different taxonomic levels.
Abstract:
Background: Hepatitis C virus (HCV) genotyping is the most significant predictor of the response to antiviral therapy. The aim of this study was to develop and evaluate a novel real-time PCR method for HCV genotyping based on the NS5B region. Methodology/Principal Findings: Two triplex reaction sets were designed, one to detect genotypes 1a, 1b and 3a; and another to detect genotypes 2a, 2b, and 2c. This approach had an overall sensitivity of 97.0%, detecting 295 of the 304 tested samples. All samples genotyped by real-time PCR had the same type as that assigned using LiPA version 1 (Line Probe Assay). Although LiPA v. 1 was not able to subtype 68 of the 295 samples (23.0%) and rendered different subtype results from those assigned by real-time PCR for 12/295 samples (4.0%), NS5B sequencing and real-time PCR results agreed in all 146 tested cases. Analytical sensitivity of the real-time PCR assay was determined by end-point dilution of the 5000 IU/ml member of the OptiQuant HCV RNA panel. The lower limit of detection was estimated to be 125 IU/ml for genotype 3a, 250 IU/ml for genotypes 1b and 2b, and 500 IU/ml for genotype 1a. Conclusions/Significance: The total time required to perform this assay was two hours, compared to the four hours required for LiPA v. 1 after PCR amplification. Furthermore, the estimated reaction cost was nine times lower than that of the commercial methods available in Brazil. Thus, we have developed an efficient, feasible, and affordable method for HCV genotype identification.
Abstract:
The aim of this study was to investigate HIV-1 molecular diversity and the epidemiological profile of HIV-1-infected patients from Ribeirao Preto, Brazil. A nested PCR followed by sequencing of a 302-base pair fragment of the env gene (C2-V3 region) was performed on samples from HIV-1-positive patients. A total of 45 sequences were aligned with final manual adjustments. The phylogenetic analyses showed a higher prevalence of HIV-1 subtype B in the studied population (97.8%), with only one sample yielding an F1 subtype. The viral genotyping prediction showed that CCR5 tropism was the most prevalent in the studied cohort. Geno2pheno analysis showed that R5 and CXCR4 predictions were 69% and 31%, respectively. There was no statistically significant difference in either viral load or CD4(+) T cell count when the R5 and X4 prediction groups were compared. Moreover, the GPGR tetramer was the most common V3 loop core motif identified in the HIV-1 strains studied (34.1%), followed by GWGR, identified in 18.1% of the samples. The high prevalence of subtype B in this Brazilian population reinforces the nature of the HIV epidemic in Brazil and corroborates previous data obtained in the Brazilian HIV-infected population.
Abstract:
This paper presents a new statistical algorithm to estimate rainfall over the Amazon Basin region using the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI). The algorithm relies on empirical relationships derived for different raining-type systems between coincident measurements of surface rainfall rate and 85-GHz polarization-corrected brightness temperature as observed by the precipitation radar (PR) and TMI on board the TRMM satellite. The scheme includes rain/no-rain area delineation (screening) and system-type classification routines for rain retrieval. The algorithm is validated against independent measurements of the TRMM-PR and S-band dual-polarization Doppler radar (S-Pol) surface rainfall data for two different periods. Moreover, the performance of this rainfall estimation technique is evaluated against well-known methods, namely, the TRMM-2A12 [the Goddard profiling algorithm (GPROF)], the Goddard scattering algorithm (GSCAT), and the National Environmental Satellite, Data, and Information Service (NESDIS) algorithms. The proposed algorithm shows a normalized bias of approximately 23% for both PR and S-Pol ground truth datasets and a mean error of 0.244 mm h(-1) (PR) and -0.157 mm h(-1) (S-Pol). For rain volume estimates using PR as reference, a correlation coefficient of 0.939 and a normalized bias of 0.039 were found. With respect to rainfall distributions and rain area comparisons, the results showed that the formulation proposed is efficient and compatible with the physics and dynamics of the observed systems over the area of interest. The performance of the other algorithms showed that GSCAT presented low normalized bias for rain areas and rain volume [0.346 (PR) and 0.361 (S-Pol)], and GPROF showed rainfall distribution similar to that of the PR and S-Pol but with a bimodal distribution.
Last, the five algorithms were evaluated during the TRMM-Large-Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) 1999 field campaign to verify the precipitation characteristics observed during the easterly and westerly Amazon wind flow regimes. The proposed algorithm presented a cumulative rainfall distribution similar to the observations during the easterly regime, but it underestimated for the westerly period for rainfall rates above 5 mm h(-1). NESDIS(1) overestimated for both wind regimes but presented the best westerly representation. NESDIS(2), GSCAT, and GPROF underestimated in both regimes, but GPROF was closer to the observations during the easterly flow.
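The overall retrieval logic described above (screening by brightness temperature, then a system-type-dependent empirical relation) can be sketched as follows. The screening threshold and regression coefficients are illustrative placeholders, not the relationships derived in the study:

```python
def retrieve_rain_rate(tb85, system_type, screen_threshold=255.0):
    """Rain rate (mm/h) from 85-GHz polarization-corrected brightness
    temperature, via a per-system-type empirical linear relation."""
    # Screening: warm (high-Tb) pixels are flagged as non-raining.
    if tb85 >= screen_threshold:
        return 0.0
    # Hypothetical coefficients per raining-system class: colder
    # scattering signatures (lower Tb) map to higher rain rates.
    coeffs = {
        "convective": (-0.20, 55.0),
        "stratiform": (-0.05, 14.0),
    }
    slope, intercept = coeffs[system_type]
    return max(0.0, slope * tb85 + intercept)

rr = retrieve_rain_rate(200.0, "convective")   # cold pixel -> raining
dry = retrieve_rain_rate(270.0, "stratiform")  # warm pixel -> screened out
```

In the actual algorithm the classification itself is also derived from the observations, and the coefficients come from the coincident PR/TMI measurements.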
Abstract:
Voltage and current waveforms of a distribution or transmission power system are not pure sinusoids. There are distortions in these waveforms that can be represented as a combination of the fundamental frequency, harmonics, and high-frequency transients. This paper presents a novel approach to identifying harmonics in distorted power system waveforms. The proposed method is based on Genetic Algorithms, an optimization technique inspired by genetics and natural evolution. GOOAL, an intelligent algorithm specially designed for optimization problems, was successfully implemented and tested. Two kinds of chromosome representation are utilized: binary and real. The results show that the proposed method is more precise than the traditional Fourier Transform, especially when the real representation of the chromosomes is used.
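A minimal real-coded genetic algorithm for this kind of harmonic identification might look as follows. GOOAL's actual operators are not reproduced here; the truncation selection, arithmetic crossover, mutation rates, and the three-harmonic signal model are all assumptions for illustration:

```python
import math
import random

def waveform(t, params):
    """Distorted waveform: 50-Hz fundamental plus 3rd and 5th harmonics."""
    a1, a3, a5 = params
    w = 2.0 * math.pi * 50.0
    return (a1 * math.sin(w * t)
            + a3 * math.sin(3.0 * w * t)
            + a5 * math.sin(5.0 * w * t))

def fitness(params, samples):
    """Negative sum of squared errors against the measured samples."""
    return -sum((waveform(t, params) - v) ** 2 for t, v in samples)

def estimate_harmonics(samples, pop_size=40, generations=60, seed=0):
    rng = random.Random(seed)
    # Real-valued chromosomes: (fundamental, 3rd, 5th) amplitudes in [0, 2] pu.
    pop = [[rng.uniform(0.0, 2.0) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, samples), reverse=True)
        elite = pop[: pop_size // 2]                         # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2.0 for x, y in zip(a, b)]    # arithmetic crossover
            child[rng.randrange(3)] += rng.gauss(0.0, 0.05)  # gaussian mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda p: fitness(p, samples))

# One 50-Hz cycle sampled at 2 kHz, with 0.30 pu 3rd and 0.10 pu 5th harmonics.
true_amps = (1.0, 0.30, 0.10)
samples = [(k / 2000.0, waveform(k / 2000.0, true_amps)) for k in range(40)]
best = estimate_harmonics(samples)
```

The chromosome here uses the real representation; a binary representation would encode each amplitude as a fixed-length bit string and use bit-flip mutation instead.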
Abstract:
A technique based on the polymerase chain reaction (PCR) for the specific detection of Phytophthora medicaginis was developed using nucleotide sequence information from the ribosomal DNA (rDNA) regions. The complete IGS 2 region between the 5S gene of one rDNA repeat and the small subunit of the adjacent repeat was sequenced for P. medicaginis and related species. The entire nucleotide sequence length of the IGS 2 of P. medicaginis was 3566 bp. A pair of oligonucleotide primers (PPED04 and PPED05), which allowed amplification of a specific fragment (364 bp) within the IGS 2 of P. medicaginis using the PCR, was designed. Specific amplification of this fragment from P. medicaginis was highly sensitive, detecting template DNA as low as 4 ng and in a host-pathogen DNA ratio of 1000000:1. Specific PCR amplification using PPED04 and PPED05 was successful in detecting P. medicaginis in lucerne stems infected under glasshouse conditions and in field-infected lucerne roots. The procedures developed in this work can be applied to improve the identification and detection of a wide range of Phytophthora spp. in plants and soil.
Abstract:
Objective: In this study we assessed how often patients manifesting a myocardial infarction (MI) would not be considered candidates for intensive lipid-lowering therapy based on the current guidelines. Methods: In 355 consecutive patients manifesting ST elevation MI (STEMI), admission plasma C-reactive protein (CRP) was measured and the Framingham risk score (FRS), PROCAM risk score, Reynolds risk score, ASSIGN risk score, QRISK, and SCORE algorithms were applied. Cardiac computed tomography and carotid ultrasound were performed to assess the coronary artery calcium score (CAC), carotid intima-media thickness (cIMT), and the presence of carotid plaques. Results: Less than 50% of STEMI patients would have been identified as high risk before the event by any of these algorithms. With the exception of FRS (9%), all other algorithms would assign low risk to about half of the enrolled patients. Plasma CRP was <1.0 mg/L in 70% and >2 mg/L in 14% of the patients. The average cIMT was 0.8 +/- 0.2 mm and was >= 1.0 mm in only 24% of patients. Carotid plaques were found in 74% of patients. CAC > 100 was found in 66% of patients. Adding CAC > 100 plus the presence of a carotid plaque, a high-risk condition would be identified in 100% of the patients using any of the above-mentioned algorithms. Conclusion: More than half of patients manifesting STEMI would not be considered candidates for intensive preventive therapy by the current clinical algorithms. The addition of anatomical parameters such as CAC and the presence of carotid plaques can substantially reduce this underestimation of CVD risk. (C) 2010 Elsevier Ireland Ltd. All rights reserved.
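The reclassification rule suggested by the conclusion (flag high risk when the clinical algorithm says so, or when imaging shows CAC > 100 or a carotid plaque) reduces to a simple predicate. The function below is a hypothetical sketch of that logic, not a validated clinical tool:

```python
def high_risk(score_says_high, cac_score, has_carotid_plaque):
    """Combine a clinical risk algorithm's verdict with the anatomical
    parameters reported in the study (CAC > 100 or a carotid plaque)."""
    return bool(score_says_high or cac_score > 100 or has_carotid_plaque)

# A STEMI patient classified as low risk by the score, but with CAC = 250,
# would be reclassified as high risk once imaging is considered.
reclassified = high_risk(False, 250, False)
```

Applied to the cohort above, this rule is what recovers the high-risk label in 100% of the patients regardless of which clinical algorithm is used.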
Abstract:
We suggest a new notion of behaviour-preserving refinement based on partial-order semantics, called transition refinement. We introduced transition refinement for elementary (low-level) Petri nets earlier. For modelling and verifying complex distributed algorithms, high-level (algebraic) Petri nets are usually used. In this paper, we define transition refinement for algebraic Petri nets. This notion is more powerful than transition refinement for elementary Petri nets, because it corresponds to the simultaneous refinement of several transitions in an elementary Petri net. Transition refinement is particularly suitable for refinement steps that increase the degree of distribution of an algorithm, e.g. when synchronous communication is replaced by asynchronous message passing. We study how to prove that a replacement of a transition is a transition refinement.