886 results for "Optimal test set"
Abstract:
A study of image coding under the HEVC (High Efficiency Video Coding) standard is presented. The project focuses on the hybrid codec, specifically on the inverse cosine transform, which is applied in both the encoder and the decoder. The need to encode video arises from the representation of image sequences as digital signals. The main problem with video is the number of bits produced during encoding: as image quality increases, the amount of information to be encoded grows exponentially. The use of transforms in digital image processing has grown over the years, and the inverse cosine transform has become the most widely used method in image and video coding, since it achieves high compression ratios at very low cost. In transform coding, an image is divided into blocks and each block is mapped to a set of coefficients; this coding exploits the statistical dependencies within images to reduce the amount of data. The project surveys the evolution of video coding standards over the years and analyses the hybrid codec and the HEVC standard in depth. The final objective of this degree project is the implementation of the core of an application-specific processor that executes the inverse cosine transform in an HEVC-compliant video decoder. This objective is reached through a series of stages in which requirements are added incrementally, allowing the hardware designer to gain experience and a deeper understanding of the final architecture.
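The kind of inverse transform the abstract describes can be sketched in software before committing it to hardware. Below is a minimal sketch of a 1-D 4-point integer inverse transform stage using the standard HEVC 4x4 core matrix; the shift and rounding are a simplification of the full standard's scaling pipeline, not the exact decoder behavior.

```python
# Minimal sketch: 1-D 4-point inverse core transform in the style of
# HEVC's integer approximation of the inverse DCT (first-stage shift = 7).
# The basis values are the standard HEVC 4x4 core matrix; the rounding
# here is simplified relative to the full specification.

M4 = [
    [64,  64,  64,  64],
    [83,  36, -36, -83],
    [64, -64, -64,  64],
    [36, -83,  83, -36],
]

def inverse_transform_1d(coeffs, shift=7):
    """Apply the transposed core matrix to 4 coefficients, then
    round and right-shift as an integer pipeline would."""
    add = 1 << (shift - 1)
    out = []
    for n in range(4):
        acc = sum(coeffs[k] * M4[k][n] for k in range(4))
        out.append((acc + add) >> shift)
    return out

# A DC-only coefficient vector reconstructs a flat residual row.
print(inverse_transform_1d([64, 0, 0, 0]))  # → [32, 32, 32, 32]
```

In the real decoder this 1-D stage is applied to columns and then rows of the coefficient block, with a second, larger shift after the row pass.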
Abstract:
The inverter in a photovoltaic system performs two essential functions. The first is to track the maximum power point of the system's I-V curve under variable environmental conditions. The second is to convert the DC power delivered by the PV panels into AC power. Nowadays, to qualify inverters, manufacturers and certification bodies mainly use the European and/or CEC efficiency standards. The question arises whether these are still representative of CPV system behaviour. We propose to use a set of CPV-specific weighted averages and a representative dynamic response to better determine the static and dynamic MPPT efficiencies. Four string-sized commercial inverters used in real CPV plants have been tested.
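The weighted-average efficiency idea can be sketched as follows. The load points and weights below are the commonly cited European efficiency weighting (an assumption of this sketch, not taken from the abstract); the CPV-specific set the authors propose would substitute weights derived from CPV irradiance statistics.

```python
# Sketch: weighted-average inverter efficiency. EURO_WEIGHTS holds the
# commonly cited European efficiency load points and weights; these are
# an assumption of this sketch. A CPV-specific set would replace them.

EURO_WEIGHTS = {0.05: 0.03, 0.10: 0.06, 0.20: 0.13,
                0.30: 0.10, 0.50: 0.48, 1.00: 0.20}

def weighted_efficiency(eff_at_load, weights=EURO_WEIGHTS):
    """eff_at_load maps fractional load -> measured efficiency."""
    return sum(w * eff_at_load[load] for load, w in weights.items())

# Hypothetical measured efficiencies at each load point.
measured = {0.05: 0.86, 0.10: 0.92, 0.20: 0.95,
            0.30: 0.96, 0.50: 0.965, 1.00: 0.95}
print(round(weighted_efficiency(measured), 4))  # → 0.9537
```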
Abstract:
This dissertation presents a macroeconomic analysis of Brazil, in particular the relationship between the monthly export and import volume indices and the monthly figures for GDP, the SELIC rate and the exchange rate, based on data collected from January 2004 to December 2014 and on a literature review of the concepts involved. A case study was carried out using data from government websites over the chosen period, applying the linear regression method based on Pearson correlation theory and reporting the results obtained for the variables under study. In this way, it was possible to study and analyse how the dependent (response) variables, export volume and import volume, are related to the independent (explanatory) variables: GDP, the SELIC rate and the exchange rate. The results of this study show a moderate negative correlation of both the SELIC rate and the exchange rate with export and import volumes, whereas GDP shows a strong positive correlation with both export and import volumes.
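The analysis described above, Pearson correlation plus a simple linear regression, can be sketched as below. The numbers are toy data, not the Brazilian macroeconomic series from the study.

```python
# Sketch: Pearson correlation coefficient and ordinary least-squares
# slope/intercept for a simple (one-predictor) linear regression.
# Toy data only; the study's series are not reproduced here.

import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def ols(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

gdp = [1.0, 2.0, 3.0, 4.0, 5.0]
exports = [2.1, 3.9, 6.2, 7.8, 10.1]   # strongly, positively related
r = pearson(gdp, exports)
slope, intercept = ols(gdp, exports)
print(round(r, 3), round(slope, 2))  # → 0.999 1.99
```

A moderate negative correlation, as found for the SELIC and exchange rates, would show up as r around -0.3 to -0.7 with a negative slope.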
Abstract:
The Internet is part of everyday life and is becoming ever more accessible through different kinds of devices. Accordingly, many studies have assessed the effects of its excessive use on personal, academic and professional life. This dissertation sought to identify whether loss of concentration and social isolation are among the individual effects that excessive personal use of instant-messaging applications can produce in the workplace. The variables selected to assess excessive instant-messenger use include digital distraction, reduced impulse control, social comfort and loneliness. Using a quantitative research approach, scales were administered to a sample of 283 people. The data were analysed with multivariate statistical techniques, namely Exploratory Factor Analysis and, to gauge the relationships between variables, Multiple Linear Regression. The results of this study confirm that excessive use of instant messengers is positively related to loss of concentration, with digital distraction exerting a stronger influence than reduced impulse control. According to the results, it cannot be affirmed that loneliness and social comfort are related to increased social isolation, given the absence of a relationship between those constructs.
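The multiple-linear-regression step the abstract mentions can be sketched via ordinary least squares on the normal equations. The data below are synthetic; the study's constructs (digital distraction, reduced impulse control, etc.) are stood in for by generic predictors x1 and x2.

```python
# Sketch: multiple linear regression via the normal equations
# (X'X) beta = X'y, solved with small-scale Gaussian elimination.
# Synthetic data only.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols_multi(X, y):
    """OLS coefficients for y ~ X (first column of X should be 1s)."""
    p = len(X[0])
    XtX = [[sum(row[a] * row[b] for row in X) for b in range(p)]
           for a in range(p)]
    Xty = [sum(X[i][a] * y[i] for i in range(p and len(X))) for a in range(p)]
    return solve(XtX, Xty)

# y = 1 + 2*x1 + 3*x2 exactly, so OLS should recover those coefficients.
X = [[1, 0, 0], [1, 1, 0], [1, 0, 1], [1, 1, 1], [1, 2, 1]]
y = [1 + 2 * x1 + 3 * x2 for _, x1, x2 in X]
print([round(c, 6) for c in ols_multi(X, y)])  # → [1.0, 2.0, 3.0]
```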
Abstract:
In the maximum parsimony (MP) and minimum evolution (ME) methods of phylogenetic inference, evolutionary trees are constructed by searching for the topology that shows the minimum number of mutational changes required (M) and the smallest sum of branch lengths (S), respectively, whereas in the maximum likelihood (ML) method the topology showing the highest maximum likelihood (A) of observing a given data set is chosen. However, the theoretical basis of the optimization principle remains unclear. We therefore examined the relationships of M, S, and A for the MP, ME, and ML trees with those for the true tree by using computer simulation. The results show that M and S are generally greater for the true tree than for the MP and ME trees when the number of nucleotides examined (n) is relatively small, whereas A is generally lower for the true tree than for the ML tree. This finding indicates that the optimization principle tends to give incorrect topologies when n is small. To deal with this disturbing property of the optimization principle, we suggest that more attention should be given to testing the statistical reliability of an estimated tree rather than to finding the optimal tree with excessive efforts. When a reliability test is conducted, simplified MP, ME, and ML algorithms such as the neighbor-joining method generally give conclusions about phylogenetic inference very similar to those obtained by the more extensive tree search algorithms.
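The quantity M above, the minimum number of mutational changes on a fixed topology, can be computed with Fitch's algorithm. A minimal single-site sketch, with the tree encoded as nested tuples (an illustrative encoding, not the paper's):

```python
# Sketch: Fitch's algorithm for the minimum number of mutational
# changes (M) required by one topology at one nucleotide site.
# A tree is either a leaf (a one-character state string) or a
# 2-tuple of subtrees.

def fitch(tree):
    """Return (candidate state set, number of changes) for a subtree."""
    if isinstance(tree, str):            # leaf: observed nucleotide
        return {tree}, 0
    (ls, lc), (rs, rc) = fitch(tree[0]), fitch(tree[1])
    inter = ls & rs
    if inter:                            # children agree: no extra change
        return inter, lc + rc
    return ls | rs, lc + rc + 1          # disagreement: one more change

# ((A,C),(C,C)): a single change suffices on this topology.
states, m = fitch((('A', 'C'), ('C', 'C')))
print(m)  # → 1
```

Summing this count over all sites gives the tree length minimized by the MP criterion; the abstract's point is that the topology minimizing this sum need not be the true tree when n is small.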
Abstract:
The current phylogenetic hypothesis for the evolution and biogeography of fiddler crabs relies on the assumption that complex behavioral traits are also evolutionarily derived. Indo-west Pacific fiddler crabs have simpler reproductive social behavior and are more marine, and were therefore thought to be ancestral to the more behaviorally complex and more terrestrial American species. It was also hypothesized that the evolution of more complex social and reproductive behavior was associated with the colonization of the higher intertidal zones. Our phylogenetic analysis, based upon a set of independent molecular characters, however, demonstrates how widely entrenched ideas about evolution and biogeography led to a reasonable, but apparently incorrect, conclusion about the evolutionary trends within this pantropical group of crustaceans. Species bearing the set of "derived traits" are phylogenetically ancestral, suggesting an alternative evolutionary scenario: reproductive behavioral complexity in fiddler crabs may have arisen multiple times during their evolution, possibly by co-opting a series of other adaptations for high intertidal living and antipredator escape. A calibration of rates of molecular evolution from populations on either side of the Isthmus of Panama suggests a sequence divergence rate for 16S rRNA of 0.9% per million years. The divergence between the ancestral clade and the derived forms is estimated at approximately 22 million years ago, whereas the divergence between the American and Indo-west Pacific clades is estimated at approximately 17 million years ago.
Abstract:
In recent decades, increasing interest in the research field of wide-bandgap semiconductors has been observed, mostly due to silicon-based devices progressively approaching their theoretical limits. 4H-SiC is one example, and is a mature compound for applications. The main advantages offered by 4H-SiC in comparison with silicon are a higher breakdown field, a higher thermal conductivity, a higher operating temperature, very high hardness and melting point, and biocompatibility, as well as low switching losses in high-frequency applications and lower on-resistances in unipolar devices. 4H-SiC power devices therefore offer great performance improvements; moreover, they can work in hostile environments where silicon power devices cannot function. Ion implantation is a key process in the fabrication of almost all kinds of SiC devices, owing to the advantage of spatially selective doping. This work is dedicated to the electrical investigation of several differently processed ion-implanted 4H-SiC samples, mainly through Hall effect and space-charge spectroscopy experiments. Automatic control (LabVIEW) of several experiments was also developed. The effectiveness of high-temperature post-implant thermal treatments (up to 2000 °C) was studied and compared considering: (i) different methods, (ii) different temperatures and (iii) different durations of the annealing process. Preliminary p+/n and Schottky junctions were also investigated as simple test devices.

1) Heavy doping by ion implantation of single off-axis 4H-SiC layers. Electrical investigation is one of the most important characterizations of ion-implanted samples, which must undergo a mandatory post-implant thermal treatment in order to both (i) recover the lattice after ion bombardment and (ii) move the implanted impurities into lattice sites so that they can effectively act as dopants. Electrical investigation gives fundamental information on the efficiency of electrical impurity activation. To understand the results of this research it should be noted that: (a) to realize good ohmic contacts it is necessary to obtain spatially defined, highly doped regions whose resistivity is as low as possible; (b) it has been shown that the electrical activation efficiency and the electrical conductivity increase with increasing annealing temperature; (c) to maximize the layer conductivity, temperatures around 1700 °C are generally used, with implantation densities up to 10^21 cm^-3. In this work an original approach, different from (c), is explored by using a very high annealing temperature, around 2000 °C, on samples with Al+ implant concentrations of the order of 10^20 cm^-3. Several Al+-implanted 4H-SiC samples of p-type conductivity were investigated, with nominal densities in the range of about 1-5×10^20 cm^-3, subjected to two different high-temperature thermal treatments. One annealing method uses a radio-frequency heated furnace up to 1950 °C (Conventional Annealing, CA); the other exploits a microwave field providing a fast heating rate up to 2000 °C (Microwave Annealing, MWA). In this context, mainly ion-implanted p-type samples were investigated, both off-axis and on-axis <0001> semi-insulating 4H-SiC. For the p-type off-axis samples, a high electrical activation of the implanted Al (50-70%) and a compensation ratio below 10% were estimated. The main sample-processing parameters, such as the implant temperature, the CA annealing duration and the heating/cooling rates, were varied and their best values assessed. The MWA method leads to a higher hole density and lower mobility than CA in equivalent ion-implanted layers, resulting in lower resistivity, probably related to its 50 °C higher annealing temperature. The optimal duration of the CA treatment was estimated at about 12-13 minutes. A room-temperature resistivity among the lowest reported in the literature for this kind of sample was obtained.

2) Low-resistivity data: variable-range hopping. Notwithstanding the heavy p-type doping levels, the carrier density remained below the critical value required for a semiconductor-to-metal transition. However, the high carrier densities obtained were enough to trigger low-temperature impurity-band (IB) conduction. In the most heavily doped samples, this conduction mechanism persists up to room temperature without significantly degrading the mobility values. This feature can have interesting technological implications, because it guarantees a nearly temperature-independent carrier density, unaffected by freeze-out effects. The usual transport mechanism in IB conduction is nearest-neighbor hopping, a regime consistent with the resistivity-temperature behavior of the least doped samples. In the more heavily doped samples, however, a trend in the resistivity data compatible with variable-range hopping (VRH) conduction was identified, highlighted here for the first time in p-type 4H-SiC. Moreover, in the most heavily doped samples, and in particular those annealed by MWA, the temperature dependence of the resistivity is consistent with a reduced dimensionality (2D) of the VRH conduction. In these samples, TEM investigation revealed faulted dislocation loops in the basal plane whose average spacing along the c-axis is comparable to the optimal hop length in VRH transport. This result suggests attributing this peculiar behavior to a spatial confinement of the carrier hops within a plane.

3) Test device: the p+-n junction. In the last part of the work, the electrical properties of 4H-SiC diodes were also studied. In this case, a heavy Al+ ion implantation was performed on n-type epilayers, according to the technological process applied for final devices. These preliminary devices showed good rectification properties in their current-voltage characteristics. Admittance spectroscopy and deep-level transient spectroscopy measurements showed the presence of electrically active defects other than the dopant ones, induced in the active region of the diodes by ion implantation. A critical comparison of these defects with the literature was performed. Prior to this investigation, the experimental setup for the admittance spectroscopy and current-voltage measurements, together with the automatic control of these measurements, was assessed.
Abstract:
The Distribution System Expansion Planning (PESD) problem seeks guidelines for expanding the network to meet growing consumer demand. In this context, electricity distribution companies must propose actions in the distribution system so that the energy supply meets the standards required by regulators. Traditionally, only the minimization of the overall investment cost of expansion plans is considered, neglecting the reliability and robustness of the system. As a consequence, the resulting expansion plans lead the distribution system to configurations that are vulnerable to heavy load shedding when contingencies occur in the network. This work develops a methodology that brings reliability and risk into the traditional PESD problem, in order to choose expansion plans that maximize network robustness and thus mitigate the damage caused by contingencies. A multiobjective PESD model was formulated that minimizes two objectives: the overall cost (comprising investment, maintenance, operation and energy production costs) and the implementation risk of expansion plans. For both objectives, mixed-integer linear models are formulated and solved with the CPLEX solver through the GAMS software. To manage the search for optimal solutions, two evolutionary algorithms were implemented in C++: the Non-dominated Sorting Genetic Algorithm-2 (NSGA2) and the Strength Pareto Evolutionary Algorithm-2 (SPEA2). These algorithms proved effective in this search, as confirmed by expansion-planning simulations on two test systems adapted from the literature. The set of solutions found in the simulations contains expansion plans with different levels of overall cost and implementation risk, highlighting the diversity of the proposed solutions. Some of these topologies are illustrated to make their differences evident.
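The Pareto-dominance test at the core of NSGA2 and SPEA2 can be sketched for the two minimized objectives above. Plans are hypothetical (cost, risk) pairs; both objectives are minimized.

```python
# Sketch: Pareto dominance and the first non-dominated front, the
# building block of NSGA2/SPEA2 for two minimized objectives
# (overall cost, implementation risk). Plan values are hypothetical.

def dominates(a, b):
    """True if plan a is no worse in every objective and better in one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(plans):
    """Non-dominated plans: the first front of non-dominated sorting."""
    return [p for p in plans if not any(dominates(q, p) for q in plans)]

plans = [(100, 0.9), (120, 0.5), (150, 0.2), (130, 0.6), (160, 0.8)]
print(pareto_front(plans))  # → [(100, 0.9), (120, 0.5), (150, 0.2)]
```

The solution sets reported in the abstract, plans with different cost/risk trade-offs, are exactly such non-dominated fronts.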
Abstract:
The current study tested two competing models of Attention-Deficit/Hyperactivity Disorder (AD/HD), the inhibition and state regulation theories, by conducting fine-grained analyses of the Stop-Signal Task and another putative measure of behavioral inhibition, the Gordon Continuous Performance Test (G-CPT), in a large sample of children and adolescents. The inhibition theory posits that performance on these tasks reflects increased difficulties for AD/HD participants to inhibit prepotent responses. The model predicts that putative stop-signal reaction time (SSRT) group differences on the Stop-Signal Task will be primarily related to AD/HD participants requiring more warning than control participants to inhibit to the stop-signal and emphasizes the relative importance of commission errors, particularly "impulsive" type commissions, over other error types on the G-CPT. The state regulation theory, on the other hand, proposes response variability due to difficulties maintaining an optimal state of arousal as the primary deficit in AD/HD. This model predicts that SSRT differences will be more attributable to slower and/or more variable reaction time (RT) in the AD/HD group, as opposed to reflecting inhibitory deficits. State regulation assumptions also emphasize the relative importance of omission errors and "slow processing" type commissions over other error types on the G-CPT. Overall, results of Stop-Signal Task analyses were more supportive of state regulation predictions and showed that greater response variability (i.e., SDRT) in the AD/HD group was not reducible to slow mean reaction time (MRT) and that response variability made a larger contribution to increased SSRT in the AD/HD group than inhibitory processes. 
Examined further, ex-Gaussian analyses of Stop-Signal Task go-trial RT distributions revealed that increased variability in the AD/HD group was not due solely to a few excessively long RTs in the tail of the AD/HD distribution (i.e., tau), but rather indicated the importance of response variability throughout AD/HD group performance on the Stop-Signal Task, as well as the notable sensitivity of ex-Gaussian analyses to variability in data screening procedures. Results of G-CPT analyses indicated some support for the inhibition model, although error type analyses failed to further differentiate the theories. Finally, inclusion of primary variables of interest in exploratory factor analysis with other neurocognitive predictors of AD/HD indicated response variability as a separable construct and further supported its role in Stop-Signal Task performance. Response variability did not, however, make a unique contribution to the prediction of AD/HD symptoms beyond measures of motor processing speed in multiple deficit regression analyses. Results have implications for the interpretation of the processes reflected in widely-used variables in the AD/HD literature, as well as for the theoretical understanding of AD/HD.
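The ex-Gaussian decomposition of go-trial RT distributions mentioned above (a Gaussian component mu, sigma plus an exponential tail tau) can be sketched with a simple method-of-moments estimator. This is an illustrative alternative to the maximum-likelihood fitting such studies typically use, and the reaction times below are synthetic.

```python
# Sketch: method-of-moments estimation of ex-Gaussian parameters
# (mu, sigma, tau) from a reaction-time sample. For an ex-Gaussian,
# mean = mu + tau, variance = sigma^2 + tau^2, and the third central
# moment = 2*tau^3. Synthetic data; ML fitting would be used in practice.

import math, random

def exgauss_moments(rts):
    n = len(rts)
    m = sum(rts) / n
    var = sum((x - m) ** 2 for x in rts) / n
    m3 = sum((x - m) ** 3 for x in rts) / n
    tau = math.copysign(abs(m3 / 2) ** (1 / 3), m3)  # from m3 = 2*tau^3
    sigma2 = max(var - tau ** 2, 0.0)
    return m - tau, math.sqrt(sigma2), tau           # mu, sigma, tau

random.seed(0)
# Simulated go-trial RTs: Gaussian(300, 30) plus an exponential tail
# with mean 100 ms (hypothetical parameter values).
rts = [random.gauss(300, 30) + random.expovariate(1 / 100)
       for _ in range(20000)]
mu, sigma, tau = exgauss_moments(rts)
print(round(mu), round(sigma), round(tau))
```

A large tau relative to sigma corresponds to the long-RT tail; the abstract's point is that the AD/HD group's variability was not confined to that tail.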
Abstract:
This paper introduces a novel MILP approach for the design of distillation column sequences for zeotropic mixtures that explicitly includes configurations ranging from conventional to fully thermally coupled sequences, as well as divided-wall columns with a single wall. The model is based on two superstructure levels. In the upper level, a superstructure that includes all the basic sequences of separation tasks is postulated. The lower level is an extended tree that explicitly includes the different thermal states and compositions of the feed to a given separation task. In that way, it is possible to optimize a priori all the possible separation tasks involved in the superstructure. A set of logical relationships relates the feasible sequences to the optimized tasks in the extended tree, resulting in an MILP that selects the optimal sequence. The performance of the model in terms of robustness and computational time is illustrated with several examples.
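As a toy stand-in for the sequence-selection step (not the paper's MILP), the sketch below enumerates every sharp-split separation sequence of a small zeotropic mixture and picks the cheapest under a hypothetical per-task cost. Brute force only works for tiny component counts; the point of the MILP formulation is to make this selection tractable over a much richer superstructure.

```python
# Toy stand-in for selecting a separation sequence: enumerate all
# binary sharp-split trees of an ordered mixture and minimize a
# hypothetical task cost. The real approach solves an MILP over a
# superstructure of pre-optimized tasks.

def sequences(mix):
    """All binary split trees of an ordered mixture, e.g. 'ABCD'."""
    if len(mix) == 1:
        return [mix]
    out = []
    for cut in range(1, len(mix)):
        for left in sequences(mix[:cut]):
            for right in sequences(mix[cut:]):
                out.append((mix, left, right))  # (task, lights, heavies)
    return out

def cost(tree, task_cost):
    if isinstance(tree, str):            # pure product: no task needed
        return 0.0
    task, left, right = tree
    return task_cost(task) + cost(left, task_cost) + cost(right, task_cost)

def task_cost(task):
    """Hypothetical cost model: wider cuts are cheaper per task."""
    return 10.0 / len(task)

seqs = sequences('ABCD')
best = min(seqs, key=lambda t: cost(t, task_cost))
print(len(seqs))  # → 5 sharp-split sequences for a 4-component mixture
```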
Abstract:
From a set of gonioapparent automotive samples from different manufacturers, we selected 28 low-chroma color pairs with relatively small color differences, predominantly in lightness. These color pairs were visually assessed against a gray scale at six different viewing angles by a panel of 10 observers. Using the Standardized Residual Sum of Squares (STRESS) index, the results of our visual experiment were tested against the predictions of 12 modern color-difference formulas. Based on a weighted STRESS index accounting for the uncertainty in visual assessments, the best predictions for the whole experiment were achieved by the AUDI2000, CAM02-SCD, CAM02-UCS and OSA-GP-Euclidean color-difference formulas, which were not statistically significantly different from one another. A two-step optimization of the original AUDI2000 color-difference formula produced a modified AUDI2000 formula that performed both significantly better than the original formula and below the experimental inter-observer variability. Nevertheless, the proposal of a new revised AUDI2000 color-difference formula requires additional experimental data.
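The STRESS index used above compares computed color differences (dE) with visual differences (dV); lower STRESS means better agreement, and the index is invariant to a uniform scaling of either data set. A minimal sketch of the standard (unweighted) definition:

```python
# Sketch: the STRESS index between computed color differences dE and
# visual differences dV, as commonly defined in the color-difference
# literature: STRESS = 100*sqrt(sum(dE - F*dV)^2 / sum(F*dV)^2),
# with the scale factor F = sum(dE^2) / sum(dE*dV).

import math

def stress(dE, dV):
    F = sum(e * e for e in dE) / sum(e * v for e, v in zip(dE, dV))
    num = sum((e - F * v) ** 2 for e, v in zip(dE, dV))
    den = sum((F * v) ** 2 for v in dV)
    return 100.0 * math.sqrt(num / den)

dV = [1.0, 2.0, 3.0, 4.0]
# dE that is a pure rescaling of dV gives perfect agreement.
print(round(stress([2.0, 4.0, 6.0, 8.0], dV), 6))  # → 0.0
```

The weighted variant mentioned in the abstract would additionally weight each pair by the uncertainty of its visual assessment.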
Abstract:
This paper proposes an adaptive algorithm for clustering the cumulative probability distribution functions (c.p.d.f.) of a continuous random variable, observed in different populations, into the minimum number of homogeneous clusters, making no parametric assumptions about the c.p.d.f.'s. The proposed distance function for clustering c.p.d.f.'s is based on the Kolmogorov-Smirnov two-sample statistic, which is able to detect differences in the position, dispersion or shape of the c.p.d.f.'s. In our context, this statistic allows us to cluster the recorded data with a homogeneity criterion based on the whole distribution of each data set, and to decide whether or not it is necessary to add more clusters. In this sense, the proposed algorithm is adaptive: it automatically increases the number of clusters only as necessary, so there is no need to fix the number of clusters in advance. The outputs of the algorithm are, for each cluster, the common c.p.d.f. of all observed data in the cluster (the centroid) and the Kolmogorov-Smirnov statistic between the centroid and the most distant c.p.d.f. The proposed algorithm has been applied to a large data set of solar global irradiation spectra distributions. The results reduce the information of more than 270,000 c.p.d.f.'s to only 6 different clusters, corresponding to 6 different c.p.d.f.'s.
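The two-sample Kolmogorov-Smirnov statistic used as the clustering distance is the maximum gap between two empirical c.p.d.f.'s, which is what makes it sensitive to position, dispersion and shape. A minimal sketch:

```python
# Sketch: the two-sample Kolmogorov-Smirnov statistic, i.e. the
# maximum absolute difference between two empirical c.p.d.f.'s,
# evaluated at every observed data point.

import bisect

def ks_two_sample(a, b):
    a, b = sorted(a), sorted(b)
    points = sorted(set(a) | set(b))
    def ecdf(sample, x):                 # fraction of sample <= x
        return bisect.bisect_right(sample, x) / len(sample)
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in points)

print(ks_two_sample([1, 2, 3, 4], [1, 2, 3, 4]))   # → 0.0
print(ks_two_sample([1, 2, 3, 4], [5, 6, 7, 8]))   # → 1.0
```

In the clustering algorithm, a sample whose KS distance to every centroid exceeds the homogeneity threshold triggers the creation of a new cluster.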
Abstract:
The use of 3D data in mobile robotics applications provides valuable information about the robot's environment. However, the huge amount of 3D information is usually difficult to manage, because the robot's storage and computing capabilities are insufficient. A data compression method is therefore necessary to store and process this information while preserving as much of it as possible. A few methods have been proposed to compress 3D information; nevertheless, there is no consistent public benchmark for comparing the results (compression level, reconstructed distance error, etc.) obtained with different methods. In this paper, we propose a dataset composed of a set of 3D point clouds with different structure and texture variability for evaluating the results obtained from 3D data compression methods. We also provide useful tools for comparing compression methods, using the results obtained by existing relevant compression methods as a baseline.
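One metric such a benchmark can report, a symmetric nearest-neighbour reconstruction error between an original cloud and its decompressed version, can be sketched as below. This is an illustrative definition, not necessarily the exact error the paper's tools compute; the brute-force O(n·m) search would be replaced by a k-d tree in practice.

```python
# Sketch: symmetric nearest-neighbour reconstruction error between an
# original point cloud and its reconstruction after decompression.
# Brute-force search; real tools would use a spatial index (k-d tree).

import math

def nn_dist(p, cloud):
    return min(math.dist(p, q) for q in cloud)

def reconstruction_error(original, reconstructed):
    fwd = sum(nn_dist(p, reconstructed) for p in original) / len(original)
    bwd = sum(nn_dist(q, original) for q in reconstructed) / len(reconstructed)
    return max(fwd, bwd)

orig = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
recon = [(0, 0, 0.1), (1, 0, 0.1), (0, 1, 0.1)]   # cloud shifted by 0.1
print(round(reconstruction_error(orig, recon), 3))  # → 0.1
```

Plotting this error against the achieved compression ratio for each method gives the rate-distortion comparison the benchmark is meant to enable.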
Abstract:
In this CEPS Commentary, the former Irish Prime Minister calls the precedents being set in the Cypriot banking case “troubling” and reflective of a lack of clarity and consistency of thought by both the eurozone Finance Ministers and the European Commission. He welcomes the rejection of the deal by the Cypriot Parliament as it now gives eurozone policy-makers a chance to think again about the underlying philosophy of their approach to the financial crisis.
Abstract:
From the Introduction. With the results of its asset quality review (AQR), to be published on 26 October 2014, the European Central Bank intends to provide clarity on the shape of the 120 banks it will supervise in the eurozone, and it may request a series of follow-up actions before assuming its new set of tasks under the Single Supervisory Mechanism (SSM) Regulation in November. On the same day, the European Banking Authority (EBA) will also publish the results of its stress test, covering 123 banks across 22 European Economic Area (EEA) countries. For the ECB, it is a matter of setting the standard for its future task, whereas the EBA seeks to restore the confidence it lost in the 2011 stress test and the 2012 capital exercise. Both institutions will need to indicate how they will cooperate on these tasks in the future and, through enhanced disclosure, strengthen confidence in the European banking system.