902 results for Density-based Scanning Algorithm
Abstract:
Non-orthogonal multiple access (NOMA) is emerging as a promising multiple access technology for fifth-generation cellular networks to address the fast-growing mobile data traffic. It applies superposition coding at the transmitters, allowing simultaneous allocation of the same frequency resource to multiple intra-cell users. Successive interference cancellation is used at the receivers to cancel intra-cell interference. User pairing and power allocation (UPPA) is a key design aspect of NOMA. Existing UPPA algorithms are mainly based on exhaustive search, whose high computational complexity can severely affect NOMA performance. A fast proportional fairness (PF) scheduling based UPPA algorithm is proposed to address the problem. The novel idea is to form user pairs around the users with the highest PF metrics, with a pre-configured fixed power allocation. System-level simulation results show that the proposed algorithm is significantly faster than the existing exhaustive search algorithm (seven times faster in a scenario with 20 users), with negligible throughput loss.
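A minimal sketch of the pairing idea follows. The abstract does not spell out the pairing rule or the power split, so the greedy matching, the rate-gap partner criterion and the 0.8/0.2 power values below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def pf_pairing(rates_inst, rates_avg, power_weak=0.8, power_strong=0.2):
    """Pair users around the highest proportional-fairness (PF) metrics.

    Sketch of the idea in the abstract: compute the PF metric
    (instantaneous rate / long-term average rate), anchor each pair on
    an unpaired user with the highest metric, and match it with the
    unpaired user whose channel differs most (a large gap helps SIC),
    using a fixed power split. Pairing rule and power values are
    assumptions, not the paper's exact algorithm.
    """
    pf = rates_inst / rates_avg                   # PF metric per user
    order = list(np.argsort(pf)[::-1])            # best PF metric first
    pairs = []
    while len(order) >= 2:
        anchor = order.pop(0)                     # highest remaining PF metric
        partner = max(order, key=lambda u: abs(rates_inst[anchor] - rates_inst[u]))
        order.remove(partner)
        weak, strong = sorted((anchor, partner), key=lambda u: rates_inst[u])
        pairs.append({"weak": weak, "strong": strong,
                      "power": (power_weak, power_strong)})
    return pairs
```

Because each pair is formed greedily in one pass, the cost is roughly quadratic in the number of users rather than the combinatorial cost of exhaustive search over all pairings and power levels.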
Abstract:
Solubility measurements of quinizarin (1,4-dihydroxyanthraquinone), disperse red 9 (1-(methylamino)anthraquinone), and disperse blue 14 (1,4-bis(methylamino)anthraquinone) in supercritical carbon dioxide (SC CO2) were carried out in a flow-type apparatus, at temperatures from (333.2 to 393.2) K and at pressures from (12.0 to 40.0) MPa. The mole fraction solubility of the three dyes decreases in the order quinizarin (2.9 x 10^-6 to 2.9 x 10^-4), red 9 (1.4 x 10^-6 to 3.2 x 10^-4), and blue 14 (7.8 x 10^-8 to 2.2 x 10^-5). Four semiempirical density-based models were used to correlate the solubility of the dyes in the SC CO2. From the correlation results, the total heat of reaction, i.e. the heat of vaporization plus the heat of solvation of the solute, was calculated and compared with the results reported in the literature. The solubilities of the three dyes were also correlated using the Soave-Redlich-Kwong cubic equation of state (SRK CEoS) with classical mixing rules, and the physical properties required for the modeling were estimated and reported.
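The abstract does not name the four density-based models; the Chrastil correlation, shown below as a representative example, illustrates how such models link solubility to solvent density and how the fitted temperature coefficient yields the total heat mentioned above (sign conventions vary across the literature).

```latex
% Chrastil-type density-based correlation (a representative example,
% not necessarily one of the four models used in the paper):
%   S - solubility, rho - density of SC CO2, T - temperature,
%   k - association number, a, b - fitted constants, R - gas constant
\[
  \ln S = k \ln \rho + \frac{a}{T} + b ,
  \qquad
  \Delta H_{\mathrm{total}} = \Delta H_{\mathrm{vap}} + \Delta H_{\mathrm{solv}} = -aR .
\]
```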
Abstract:
Swarm Intelligence (SI) is the property of a system whereby the collective behaviors of (unsophisticated) agents interacting locally with their environment cause coherent functional global patterns to emerge. Particle swarm optimization (PSO) is a form of SI: a population-based search algorithm initialized with a population of random solutions, called particles. These particles fly through the search hyperspace and have two essential reasoning capabilities: memory of their own best position and knowledge of the swarm's best position. In a PSO scheme each particle flies through the search space with a velocity that is dynamically adjusted according to its historical behavior. The particles therefore tend to fly towards ever better search areas over the course of the search process. This work proposes a PSO-based algorithm for logic circuit synthesis. The results show the statistical characteristics of this algorithm with respect to the number of generations required to reach a solution. A comparison with two other evolutionary algorithms, namely genetic and memetic algorithms, is also presented.
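The two reasoning capabilities map directly onto the classic PSO velocity update. The sketch below is a generic continuous-space minimizer with standard parameter values; the paper's actual encoding of logic circuits is not reproduced here.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal generic PSO minimizer (continuous search space).

    Illustrates only the two capabilities named in the abstract:
    each particle's memory of its own best position (pbest) and
    knowledge of the swarm's best position (g).
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))    # positions
    v = np.zeros_like(x)                              # velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)]                   # swarm's best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # velocity adjusted by inertia, own memory, and swarm knowledge
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[np.argmin(pbest_val)]
    return g, pbest_val.min()

# usage: minimize the sphere function in 5 dimensions
best, best_val = pso(lambda p: float((p ** 2).sum()), dim=5)
```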
Abstract:
The search for patterns in data in order to form groups is known as data clustering, one of the most common tasks in data mining and pattern recognition. This dissertation addresses the concept of entropy and uses algorithms with entropic criteria to cluster biomedical data. The use of entropy for clustering is relatively recent, and stems from an attempt to exploit entropy's capacity to extract higher-order information from the distribution of the data, either using it as the criterion for forming groups (clusters) or to complement and improve existing algorithms in pursuit of better results. Some studies involving algorithms based on entropic criteria have shown positive results in the analysis of real data. In this work, several algorithms based on entropic criteria were explored, together with their applicability to biomedical data, in an attempt to assess how well suited these algorithms are to this type of data. The results of the tested algorithms are compared with those obtained by more "conventional" algorithms such as k-means, spectral clustering algorithms, and a density-based algorithm.
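As a concrete illustration of an entropic clustering criterion (the abstract does not commit to a single formula), the sketch below scores a candidate partition by the sum of histogram-estimated Shannon entropies within each cluster; lower totals indicate more concentrated, better-separated groups.

```python
import numpy as np

def within_cluster_entropy(X, labels, bins=10):
    """Sum of per-cluster Shannon entropies, estimated by histograms.

    One simple instance of an entropic clustering criterion (an
    illustrative assumption, not the dissertation's exact formula):
    lower total within-cluster entropy means more ordered clusters.
    """
    total = 0.0
    for c in np.unique(labels):
        cluster = X[labels == c]
        for j in range(X.shape[1]):               # entropy per feature
            counts, _ = np.histogram(cluster[:, j], bins=bins)
            p = counts[counts > 0] / counts.sum()
            total += -(p * np.log(p)).sum()       # Shannon entropy
    return total
```

Such a score can be plugged into any search over candidate partitions, or used to compare the outputs of k-means, spectral, and density-based runs on the same biomedical data set.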
Abstract:
The solubilities of two C-tetraalkylcalix[4]resorcinarenes, namely C-tetramethylcalix[4]resorcinarene and C-tetrapentylcalix[4]resorcinarene, in supercritical carbon dioxide (SC CO2) were measured in a flow-type apparatus at temperatures from (313.2 to 333.2) K and at pressures from (12.0 to 35.0) MPa. The C-tetraalkylcalix[4]resorcinarenes were synthesized applying our optimized procedure and fully characterized by means of gel permeation chromatography, infrared and nuclear magnetic resonance spectroscopy. The solubilities of the C-tetraalkylcalix[4]resorcinarenes in SC CO2 were determined by analysing the extracts by HPLC with ultraviolet (UV) detection, following a methodology adapted by our team. Four semiempirical density-based models, and the Soave-Redlich-Kwong cubic equation of state (SRK CEoS) with classical mixing rules, were applied to correlate the solubility of the calix[4]resorcinarenes in the SC CO2. The physical properties required for the modeling were estimated and reported.
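For reference, the SRK CEoS with classical (van der Waals one-fluid) mixing rules named in the abstract takes the standard form below; the binary interaction parameters k_ij are the quantities fitted to the solubility data, while the pure-component a_i, b_i follow from the estimated critical properties and acentric factor ω.

```latex
% Soave-Redlich-Kwong equation of state with classical mixing rules
\[
  P = \frac{RT}{v - b} - \frac{a\,\alpha(T)}{v\,(v + b)},
  \qquad
  \alpha(T) = \Bigl[1 + m\bigl(1 - \sqrt{T/T_c}\bigr)\Bigr]^{2},
  \qquad
  m = 0.480 + 1.574\,\omega - 0.176\,\omega^{2},
\]
\[
  a_{\mathrm{mix}} = \sum_i \sum_j x_i x_j \sqrt{a_i a_j}\,\bigl(1 - k_{ij}\bigr),
  \qquad
  b_{\mathrm{mix}} = \sum_i x_i b_i .
\]
```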
Abstract:
Dissertation presented to obtain the degree of Doctor of Philosophy in Electrical Engineering, speciality in Perceptional Systems, by the Universidade Nova de Lisboa, Faculty of Sciences and Technology.
Abstract:
Dissertation to obtain the degree of Doctor of Philosophy in Biomedical Engineering
Abstract:
In recent years there has been a growing interest in developing new solutions for more ecological and efficient construction, including natural, renewable and local materials, thus contributing to the search for more efficient, economical and environmentally friendly construction. Several authors have assessed the possibility of using various agricultural by-products or wastes, as part of the effort of the scientific community to find alternative and more ecological construction materials. Corn cob is an agricultural waste from a very important worldwide crop. Natural glues are made from natural, non-mineral materials that can be used as such or after some modification to achieve the required behaviour and performance. Two examples of such natural glues are the casein and wheat flour-based glues used in the present study. Boards with different compositions were manufactured, having as variables the type of glue, the size of the corn cob particles and the features of the pressing process. The test boards were characterized with physical and mechanical tests: thermal conductivity (λ) with an ISOMET 2104 and a 60 mm diameter contact probe, density (ρ) based on EN 1602:2013, surface hardness (SH) with a PCE Shore A durometer, surface resistance (SR) with a PROCEQ PT pendular sclerometer, bending behaviour (σ) based on EN 12089:2013, compression behaviour (σ10) based on EN 826:2013 and resilience (R) based on EN 1094-1:2008 with a Zwick Roell bending machine with 2 kN and 50 kN load cells (Fig. 1), dynamic modulus of elasticity (Ed) with a Zeus Resonance Meter (Fig. 5) based on NP EN 14146:2006, and water vapour permeability (δ) based on EN 12086:2013. The boards with the best results were C8_c8 (casein glue, grain size 2.38-4.76 mm, cold pressing for 8 hours), C8_c4 (casein glue, grain size 2.38-4.76 mm, cold pressing for 4 hours), F8_h0.5 (wheat flour glue, grain size 2.38-4.76 mm, hot pressing for 0.5 hours), FEV8_h0.5 (wheat flour, egg white and vinegar glue, grain size 2.38-4.76 mm, hot pressing for 0.5 hours) and FEVH68_c4 (wheat flour, egg white, vinegar and 6 g of sodium hydroxide glue, grain size 2.38-4.76 mm, cold pressing for 4 hours). Taking into account the various boards produced and the respective test results, the type of glue and the pressure and pressing time are very important factors that strongly influence the final product. The results confirmed the initial hypothesis that these boards have potential as a thermal and, possibly, acoustic insulation material, for use as a coating or intermediate layer on walls, floors or false ceilings. This type of board has a high mechanical resistance compared with traditional insulating materials. The integrity of these boards seems to be maintained even in more humid environments. However, due to their biological susceptibility and sensitivity to water, they would be more suitable for application in dry interior conditions.
Abstract:
Much of today's Internet use has a specific purpose: finding information. Unfortunately, the amount of data available on the Internet is growing exponentially, creating what can be considered a nearly infinite and ever-evolving network with no discernible structure. This rapid growth has raised the question of how to find the most relevant information. Many different techniques have been introduced to address information overload, including search engines, the Semantic Web, and recommender systems, among others. Recommender systems are computer-based techniques used to reduce information overload and to recommend products likely to interest a user, given some information about the user's profile. This technique is mainly used in e-Commerce to suggest items that fit a customer's purchasing tendencies. The use of recommender systems for e-Government is a research topic intended to improve the interaction among public administrations, citizens, and the private sector by reducing information overload on e-Government services. More specifically, e-Democracy aims to increase citizens' participation in democratic processes through the use of information and communication technologies. In this chapter, an architecture of a recommender system that uses fuzzy clustering methods for e-Elections is introduced. In addition, a comparison is presented with the smartvote system, a Web-based Voting Assistance Application (VAA) used to help voters find the party or candidate most in line with their preferences.
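The abstract names "fuzzy clustering methods" without further detail; fuzzy c-means is the standard representative, sketched below under that assumption. Its soft memberships are what make such methods a natural fit for matching a voter profile to several parties or candidates at once rather than forcing a single hard assignment.

```python
import numpy as np

def fuzzy_cmeans(X, n_clusters=3, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means (a standard fuzzy clustering method).

    Returns soft memberships U (n x c) and cluster centers (c x d);
    shown as a representative sketch, not the chapter's exact method.
    """
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(n_clusters), size=len(X))  # random soft memberships
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                            # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)         # membership update
    return U, centers
```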
Abstract:
The progressive development of Alzheimer's disease (AD)-related lesions such as neurofibrillary tangles, amyloid deposits and synaptic loss within the cerebral cortex is a main event of brain aging. Recent neuropathologic studies strongly suggest that the clinical diagnosis of dementia depends more on the severity and topography of pathologic changes than on the presence of a qualitative marker. However, several methodological problems such as selection biases, case-control design, density-based measures, and masking effects of concomitant pathologies should be taken into account when interpreting these data. In recent years, the use of stereologic counting has made it possible to define reliably the cognitive impact of AD lesions in the human brain. Unlike fibrillar amyloid deposits, which are poorly or not at all related to dementia severity, this method has documented that total neurofibrillary tangle and neuron numbers in the CA1 field are the best correlates of cognitive deterioration in brain aging. Loss of dendritic spines in neocortical but not hippocampal areas makes a modest but independent contribution to dementia. In contrast, the importance of early dendritic and axonal tau-related pathologic changes such as neuropil threads remains doubtful. Despite this progress, neuronal pathology and synaptic loss in cases with pure AD pathology cannot explain more than 50% of clinical severity. The present review discusses the complex structure/function relationships in brain aging and AD within the theoretical framework of the functional neuropathology of brain aging.
Abstract:
A pioneering team of students at the University of Girona decided to design and develop an autonomous underwater vehicle (AUV) called ICTINEU-AUV to face the Student Autonomous Underwater Challenge-Europe (SAUC-E). The prototype evolved from the initial computer-aided design (CAD) model to an operative AUV in the short period of seven months. Open-frame and modular design principles, together with compatibility with other robots previously developed at the lab, provided the main design philosophy. At the robot's core, two networked computers give access to a wide set of sensors and actuators. The Gentoo/Linux distribution was chosen as the onboard operating system. A software architecture based on a set of distributed objects with soft real-time capabilities was developed, and a hybrid control architecture including mission control, a behavioural layer and a robust map-based localization algorithm made ICTINEU-AUV the winning entry.
Abstract:
In this paper we describe a system for underwater navigation with AUVs in partially structured environments, such as dams, ports or marine platforms. An imaging sonar is used to obtain information about the location of the planar structures present in such environments. This information is incorporated into a feature-based SLAM algorithm in a two-step process: (1) the full 360-degree sonar scan is undistorted (to compensate for vehicle motion), thresholded and segmented to determine which measurements correspond to planar environment features and which should be ignored; and (2) SLAM proceeds once the data association is obtained: both the vehicle motion and the measurements whose correct association has been previously determined are incorporated into the SLAM algorithm. This two-step delayed SLAM process makes it possible to robustly determine the feature and vehicle locations in the presence of large amounts of spurious or unrelated measurements that might correspond to boats, rocks, etc. Preliminary experiments show the viability of the proposed approach.
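Step (1) can be pictured with a minimal, self-contained sketch: threshold a motion-compensated scan and fit one planar feature (a line, in the 2D sonar plane) by total least squares. The threshold value and the single-feature assumption are illustrative; the paper's segmentation and the step (2) filter update are more involved.

```python
import numpy as np

def extract_planar_feature(ranges, angles, intensities, thr=0.5):
    """Toy version of step (1): threshold a sonar scan, fit one line.

    Assumes the scan is already motion-compensated and contains at
    least two strong echoes; returns the line in normal form
    (normal . p = rho), the usual parameterization of a wall feature.
    """
    keep = intensities > thr                       # drop weak echoes
    x = ranges[keep] * np.cos(angles[keep])
    y = ranges[keep] * np.sin(angles[keep])
    pts = np.column_stack([x, y])
    centroid = pts.mean(axis=0)
    # total least squares: line direction = leading singular vector
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]
    normal = np.array([-direction[1], direction[0]])
    rho = normal @ centroid                        # signed distance to origin
    return normal, rho
```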
Abstract:
The prediction of binding modes (BMs) occurring between a small molecule and a target protein of biological interest has become of great importance for drug development. The overwhelming diversity of needs leaves room for docking approaches addressing specific problems. Nowadays, the universe of docking software ranges from fast and user-friendly programs to algorithmically flexible and accurate approaches. EADock2 is an example of the latter. Its multiobjective scoring function was designed around the CHARMM22 force field and the FACTS solvation model. However, the major drawback of such a software design lies in its computational cost. EADock dihedral space sampling (DSS) is built on the most efficient features of EADock2, namely its hybrid sampling engine and multiobjective scoring function. Its performance is equivalent to that of EADock2 for drug-like ligands, while the CPU time required has been reduced by several orders of magnitude. This huge improvement was achieved through a combination of several innovative features, including an automatic bias of the sampling toward putative binding sites and a very efficient tree-based DSS algorithm. When the top-scoring prediction is considered, 57% of the BMs in a test set of 251 complexes were reproduced within 2 Å RMSD of the crystal structure. Up to 70% were reproduced when the five top-scoring predictions were considered. The success rate is lower in cross-docking assays but remains comparable to that of the latest version of AutoDock, which accounts for protein flexibility. © 2011 Wiley Periodicals, Inc. J Comput Chem, 2011.
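For readers outside the docking field, the success criterion quoted above is the usual root-mean-square deviation between the predicted and crystallographic ligand coordinates:

```latex
\[
  \mathrm{RMSD} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} \lVert \mathbf{x}_i - \mathbf{y}_i \rVert^{2}}
\]
% x_i: predicted position of ligand atom i, y_i: its crystal-structure
% position, N: number of ligand atoms compared; a pose within 2 angstrom
% RMSD is counted as a correct reproduction of the binding mode.
```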
Abstract:
Background: Single nucleotide polymorphisms, among other types of sequence variants, constitute key elements in genetic epidemiology and pharmacogenomics. While sequence data about genetic variation are found in databases such as dbSNP, clues about the functional and phenotypic consequences of the variations are generally found in the biomedical literature. The identification of the relevant documents and the extraction of the information from them are hampered by the large size of literature databases and the lack of a widely accepted standard notation for biomedical entities. Thus, automatic systems for the identification of citations of allelic variants of genes in biomedical texts are required. Results: Our group has previously reported the development of OSIRIS, a system aimed at the retrieval of literature about allelic variants of genes (http://ibi.imim.es/osirisform.html). Here we describe the development of a new version of OSIRIS (OSIRISv1.2, http://ibi.imim.es/OSIRISv1.2.html) which incorporates a new entity recognition module and is built on top of a local mirror of the MEDLINE collection and HgenetInfoDB, a database that collects data on human gene sequence variations. The new entity recognition module is based on a pattern-based search algorithm for the identification of variation terms in the texts and their mapping to dbSNP identifiers. The performance of OSIRISv1.2 was evaluated on a manually annotated corpus, resulting in 99% precision, 82% recall, and an F-score of 0.89. As an example, the application of the system for collecting literature citations for the allelic variants of genes related to the diseases intracranial aneurysm and breast cancer is presented. Conclusion: OSIRISv1.2 can be used to link literature references to dbSNP database entries with high accuracy, and is therefore suitable for collecting current knowledge on gene sequence variations and supporting the functional annotation of variation databases. The application of OSIRISv1.2 in combination with controlled vocabularies like MeSH provides a way to identify associations of biomedical interest, such as those that relate SNPs to diseases.
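(The reported F-score is consistent with the harmonic mean of the two figures: 2 x 0.99 x 0.82 / (0.99 + 0.82) ≈ 0.89.) In the spirit of the pattern-based module, the sketch below shows two deliberately simplified regular expressions for variation mentions; the actual OSIRIS patterns, and the mapping of matches to dbSNP identifiers, are far more extensive.

```python
import re

# Illustrative only: two simplified patterns for variation mentions,
# not the real OSIRIS pattern set.
PATTERNS = [
    re.compile(r"\b[cg]\.\d+\s?[ACGT]\s?>\s?[ACGT]\b"),        # c.215C>G style
    re.compile(r"\b(?:Ala|Arg|Asn|Asp|Cys|Gln|Glu|Gly|His|Ile|"
               r"Leu|Lys|Met|Phe|Pro|Ser|Thr|Trp|Tyr|Val)\d+"
               r"(?:Ala|Arg|Asn|Asp|Cys|Gln|Glu|Gly|His|Ile|"
               r"Leu|Lys|Met|Phe|Pro|Ser|Thr|Trp|Tyr|Val)\b"),  # Arg72Pro style
]

def find_variation_mentions(text):
    """Return all variation-like substrings found in a sentence."""
    return [m.group(0) for p in PATTERNS for m in p.finditer(text)]

print(find_variation_mentions("The TP53 Arg72Pro polymorphism and c.215C>G ..."))
```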
Abstract:
The class of Schoenberg transformations, embedding Euclidean distances into higher-dimensional Euclidean spaces, is presented and derived from theorems on positive definite and conditionally negative definite matrices. Original results on the arc lengths, angles and curvature of the transformations are proposed, and visualized on artificial data sets by classical multidimensional scaling. A distance-based discriminant algorithm and a robust multidimensional centroid estimate illustrate the theory, which is closely connected to the Gaussian kernels of machine learning.
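As a pointer for the reader (stated from the general theory of such transformations, not taken verbatim from this abstract): a Schoenberg transformation is a componentwise map φ of squared Euclidean distances that keeps them squared Euclidean, and admits an integral representation of the form

```latex
\[
  \varphi(D) = \int_0^{\infty} \frac{1 - e^{-\lambda D}}{\lambda}\, g(\lambda)\, \mathrm{d}\lambda,
  \qquad g(\lambda) \ge 0, \quad \varphi(0) = 0 ,
\]
% with phi(D) = D^a for 0 < a <= 1 and phi(D) = (1 - e^{-aD})/a as
% familiar special cases; the latter underlies the Gaussian-kernel
% connection mentioned in the abstract.
```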