991 results for Algoritmo DSM
Abstract:
Morse code, invented in 1838 for use in telegraphy, is perhaps one of the first examples of the practical use of data compression [1]: the most common letters of the alphabet are assigned shorter codes than the rest. From 1940 onwards, following the development of information theory and the creation of the first computers, compressing information has been a constant and fundamental challenge for researchers of every kind. The greater our understanding of the meaning of information, the greater our success at compressing it. In the case of multimedia information, its nature permits lossy compression, reaching compression ratios impossible for lossless algorithms. These "recent" lossy algorithms have mostly been based on transforming the information to the frequency domain and discarding part of the information in that domain. Transforming to the frequency domain has advantages, but it also incurs unavoidable computational costs. This thesis presents a new multimedia compression algorithm called "LHE" (Logarithmical Hopping Encoding) that requires no transformation to the frequency domain and instead works in the spatial domain. This makes LHE a linear algorithm of low computational complexity. The algorithm's results are promising, outperforming the JPEG standard in both quality and speed. As its basis, the algorithm uses the physiological response of the human eye to light stimuli: the eye, like the other senses, responds to the logarithm of the signal, in accordance with Weber's law. The algorithm consists of several stages. One of them is the measurement of "Perceptual Relevance", a new metric that allows us to measure how relevant a piece of information is in the subject's mind and, based on it, to degrade its content to a greater or lesser extent through what I have called "elastic downsampling". The elastic downsampling stage is a new technique without precedent in digital image processing: it takes more or fewer samples in different areas of an image depending on their perceptual relevance. This thesis takes the first steps towards what may become a new standard multimedia compression format (image, video and audio), patent-free and of high performance in both speed and quality.
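The hop mechanism can be illustrated in a few lines. The sketch below is not the thesis's LHE implementation: the predictor (the previous pixel), the hop amplitudes and the geometric ratio are all assumptions, chosen only to show the idea of quantizing prediction errors on a logarithmic, Weber-like scale.

```python
def hop_values(h1=4.0, ratio=2.0, n=4):
    """Positive hop amplitudes growing geometrically (log-spaced)."""
    return [h1 * ratio**k for k in range(n)]   # 4, 8, 16, 32

def encode_row(pixels, h1=4.0):
    """Quantize each pixel's prediction error to the nearest hop."""
    hops = hop_values(h1)
    candidates = [0.0] + hops + [-h for h in hops]
    pred = float(pixels[0])            # predict each pixel from its left neighbour
    symbols, recon = [], [pred]
    for p in pixels[1:]:
        err = float(p) - pred
        s = min(candidates, key=lambda c: abs(err - c))   # nearest hop
        symbols.append(s)
        pred = max(0.0, min(255.0, pred + s))             # decoder-side value
        recon.append(pred)
    return symbols, recon

row = [100, 104, 120, 119, 60, 62]
symbols, recon = encode_row(row)
print(symbols)   # small hops in smooth areas, large hops at edges
print(recon)
```

Smooth regions are coded with small hops and sharp edges with large ones, which is what lets the encoder stay in the spatial domain with linear cost.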
Abstract:
This project concerns the implementation, in the Java language, of the MIDI humanization algorithm developed by Jorge Grundman in his thesis La Humanización de la Interpretación Virtual: Tres ejemplos significativos de la obra de Chopin. This algorithm, called Zig-Zag, aims to make a score performed by a computer exhibit characteristics similar to a pianist's sight-reading of it. To this end, it randomizes the tempo according to a set of parameters, modifies the dynamics in line with the tempo changes, and applies a second randomization to each figure of the score. The algorithm has a broad field of application as a complement to the various sequencers and score editors available today, providing them with new features. The algorithm was implemented using the Java SDK (Standard Development Kit) 7 and the tools this platform provides for handling and modifying MIDI messages.
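As a rough illustration of the kind of processing described (not Grundman's published Zig-Zag algorithm, which is specified in the thesis and implemented in Java): tempo is randomized as a slow drift, dynamics are coupled to that drift, and a second per-note randomization is applied. All parameter names and the coupling rule below are invented for the sketch.

```python
import random

def humanize(notes, tempo_jitter=0.03, note_jitter=0.01, dyn_coupling=20):
    """notes: list of (onset_in_beats, velocity). Returns jittered copies."""
    out = []
    tempo_factor = 1.0
    for onset, vel in notes:
        # slow random walk of the underlying tempo
        tempo_factor *= 1.0 + random.uniform(-tempo_jitter, tempo_jitter)
        # second, per-note randomization on top of the tempo drift
        onset_h = onset * tempo_factor * (1.0 + random.uniform(-note_jitter, note_jitter))
        # couple dynamics to tempo: slightly louder when pushing ahead
        vel_h = int(min(127, max(1, vel + dyn_coupling * (1.0 - tempo_factor))))
        out.append((onset_h, vel_h))
    return out

print(humanize([(0.0, 80), (1.0, 82), (2.0, 84)]))
```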
Abstract:
A new and complete method is proposed for the analysis of acetone in exhaled air, involving collection with preconcentration in water, chemical derivatization, and electrochemical determination assisted by a new signal-processing algorithm. In the recent literature, exhaled acetone has been evaluated as a biomarker for the non-invasive monitoring of clinical conditions such as diabetes and heart failure, hence the relevance of the proposal. Among the amines that react with acetone to form electroactive imines, studied by polarography in the middle of the last century, glycine showed the best set of characteristics for defining a determination method by square-wave voltammetry without the need for oxygen removal (25 Hz, 20 mV amplitude, 5 mV increment, mercury drop electrode). The reaction medium, composed of glycine (2 mol·L⁻¹) in NaOH (1 mol·L⁻¹), also served as the electrolyte, and the imine reduction peak at -1.57 V vs. Ag|AgCl constituted the analytical signal. For signal processing, an innovative algorithm was developed and evaluated, based on baseline interpolation by Bézier curve fitting and on fitting a Gaussian to the peak. This combination allowed the recognition and quantification of relatively low, broad peaks over a strongly curved, noisy baseline, a situation in which conventional methods fail and spline-type curves proved less appropriate. The algorithm (available at http://github.com/batistagl/chemapps) was implemented using open-source matrix algebra software integrated directly with the potentiostat's control software. To demonstrate the generality of extending the instrument's native capabilities through integration with external programming in Octave (open source), three-dimensional chronocoulometry was implemented, with visualization of already-processed results in 3D perspective mesh projections from any angle. The electrochemical determination of acetone in the aqueous phase, assisted by the Bézier-based algorithm, is fast and automatic, has a detection limit of 3.5·10⁻⁶ mol·L⁻¹ (0.2 mg·L⁻¹), and has a linear range that meets the requirements of exhaled-air analysis. Acetaldehyde, commonly present in exhaled air, especially after consumption of alcoholic beverages, gives rise to a voltammetric peak at -1.40 V, circumventing an interference that impairs several other methods published in the literature and opening up the possibility of simultaneous determination. Results obtained with real samples agree with those obtained by a spectrophotometric method in routine use since its refinement in the author's master's dissertation. Relative to that dissertation, the geometry of the collection device was also optimized, so as to concentrate the acetone in a smaller volume of chilled water and provide greater comfort to the patient. The complete method presented, encompassing the improved sampling device and the new, effective algorithm for automatic processing of voltammetric signals, is ready to be applied. Evolution towards a portable analyzer depends on improvements in the detection limit and on the ease of obtaining solid (printed) electrodes with a mercury film, since bismuth and boron-doped diamond electrodes, among others, gave no response.
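The core signal-processing idea, Bézier baseline interpolation followed by a Gaussian fit to the background-corrected peak, can be sketched as follows. The published implementation at http://github.com/batistagl/chemapps is in Octave; this Python sketch, including the synthetic voltammogram and the crude choice of Bézier control points, is an assumption-laden illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def bezier(p0, p1, p2, p3, t):
    """Cubic Bezier curve evaluated at t in [0, 1]."""
    u = 1.0 - t
    return u**3*p0 + 3*u**2*t*p1 + 3*u*t**2*p2 + t**3*p3

def gaussian(x, a, mu, sigma):
    return a * np.exp(-(x - mu)**2 / (2 * sigma**2))

# synthetic voltammogram: curved background + low, broad peak + noise
np.random.seed(3)
x = np.linspace(-1.9, -1.2, 300)
background = 0.2 + 1.5 * (x + 1.9)**2
y = background + gaussian(x, 0.25, -1.57, 0.04) + np.random.normal(0, 0.01, x.size)

# baseline from a Bezier anchored outside the peak window (crude choice)
t = np.linspace(0.0, 1.0, x.size)
baseline = bezier(y[0], y[40], y[-40], y[-1], t)

popt, _ = curve_fit(gaussian, x, y - baseline, p0=[0.2, -1.55, 0.05])
print("peak at %.2f V, height %.3f" % (popt[1], popt[0]))
```

The point of the Bézier (rather than a spline) is that a single smooth curve anchored away from the peak can track strong baseline curvature without chasing the peak itself.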
Abstract:
This thesis proposes a model for energy regeneration in metro systems, based on controlling the stops and starts of the train along its journey and harnessing the energy from regenerative braking in the traction system. The objective is to optimize energy consumption and promote greater efficiency, from the perspective of sustainable management. Applying a Genetic Algorithm (GA) to obtain the best train traffic configuration, the research develops and tests the Traction Control Algorithm for Metro Energy Regeneration (ACTREM), written in the C++ programming language. To analyze the performance of the ACTREM control algorithm in increasing energy efficiency, fifteen simulations of its application to Line 4 (Yellow) of the São Paulo metro were carried out. These simulations demonstrated ACTREM's ability to automatically generate timetables optimized for energy savings in metro systems, taking into account the system's operational constraints, such as the maximum capacity of each train, total waiting time, total travel time, and the headway between trains. The results show that the proposed algorithm can save 9.5% of the energy without significantly affecting the system's passenger-carrying capacity. They also suggest directions for further study.
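A toy version of the optimization loop may help fix ideas. It is not ACTREM: the real fitness function, constraints and train dynamics are not given in the abstract, so the objective below (rewarding alignment of departures on a common cycle, as a stand-in for matching one train's braking to another's acceleration, while penalizing headway violations) is purely illustrative.

```python
import random

N_TRAINS, HEADWAY = 8, 120        # fleet size and minimum headway in s (assumed)

def fitness(offsets):
    """Toy objective: reward departure offsets aligned on a 60 s cycle
    (stand-in for regeneration overlap), penalize headway violations."""
    pen = sum(max(0, HEADWAY - abs(a - b))
              for i, a in enumerate(offsets) for b in offsets[i + 1:])
    sync = -sum(min(o % 60, 60 - o % 60) for o in offsets)
    return sync - 10 * pen

def evolve(pop_size=40, gens=200):
    pop = [[random.uniform(0, 600) for _ in range(N_TRAINS)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(N_TRAINS)
            child = a[:cut] + b[cut:]            # one-point crossover
            if random.random() < 0.2:            # Gaussian mutation
                child[random.randrange(N_TRAINS)] += random.gauss(0, 30)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print([round(o, 1) for o in evolve()])
```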
Abstract:
Final dissertation for the Integrated Master's degree in Medicine, Faculdade de Medicina, Universidade de Lisboa, 2014
Abstract:
Aims This paper presents the recommendations, developed from a 3-year consultation process, for a program of research to underpin the development of diagnostic concepts and criteria in the Substance Use Disorders section of the Diagnostic and Statistical Manual of Mental Disorders (DSM) and potentially the relevant section of the next revision of the International Classification of Diseases (ICD). Methods A preliminary list of research topics was developed at the DSM-V Launch Conference in 2004. This led to the presentation of articles on these topics at a specific Substance Use Disorders Conference in February 2005, at the end of which a preliminary list of research questions was developed. This was further refined through an iterative process involving conference participants over the following year. Results Research questions have been placed into four categories: (1) questions that could be addressed immediately through secondary analyses of existing data sets; (2) items likely to require position papers to propose criteria or more focused questions with a view to subsequent analyses of existing data sets; (3) issues that could be proposed for literature reviews, but with a lower probability that these might progress to a data analytic phase; and (4) suggestions or comments that might not require immediate action, but that could be considered by the DSM-V and ICD-11 revision committees as part of their deliberations. Conclusions A broadly based research agenda for the development of diagnostic concepts and criteria for substance use disorders is presented.
Abstract:
Aims This paper describes the background to the establishment of the Substance Use Disorders Workgroup, which was charged with developing the research agenda for the next edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM). It summarizes 18 articles that were commissioned to inform that process. Methods A preliminary list of research topics, developed at the DSM-V Launch Conference in 2004, led to the identification of subjects that received formal presentations and detailed discussion at the Substance Use Disorders Conference in February 2005. Results The 18 articles presented in this supplement examine: (1) categorical versus dimensional diagnoses; (2) the neurobiological basis of substance use disorders; (3) social and cultural perspectives; (4) the crosswalk between DSM-IV and the International Classification of Diseases Tenth Revision (ICD-10); (5) comorbidity of substance use disorders and mental health disorders; (6) subtypes of disorders; (7) issues in adolescence; (8) substance-specific criteria; (9) the place of non-substance addictive disorders; and (10) the available research resources. Conclusions In the final paper, a broadly based research agenda for the development of diagnostic concepts and criteria for substance use disorders is presented.
Abstract:
Background. We examined whether there are genetic influences on nicotine withdrawal, and whether there are genetic factors specific to nicotine withdrawal, after controlling for factors responsible for risk of progression beyond experimentation with cigarettes and for quantity smoked (average number of cigarettes per day at peak lifetime use). Method. Epidemiologic and genetic analyses were conducted using telephone diagnostic interview data from young adult Australian twins reporting any cigarette use (3026 women, 2553 men; mean age 30 years). Results. Genetic analysis of the eight symptoms of DSM-IV nicotine withdrawal suggests heritability is intermediate for most symptoms (26-43%), and similar in men and women. The exceptions were depressed mood upon withdrawal, which had stronger additive genetic influences in men (53%) compared to women (29%), and decreased heart rate, which had low heritability (9%). Although prevalence rates were substantially lower for DSM-IV nicotine withdrawal syndrome (15.9%), which requires impairment, than for the DSM-IV nicotine dependence withdrawal criterion (43.6%), heritability was similar for both measures: as high as 47%. Genetic modeling of smoking more than 1 or 2 cigarettes lifetime ('progression'), quantity smoked and nicotine withdrawal found significant genetic overlap across all three components of nicotine use/dependence (genetic correlations = 0.53-0.76). Controlling for factors associated with risk of cigarette smoking beyond experimentation and quantity smoked, evidence for genetic influences specific to nicotine withdrawal (up to 23% of total variance) remained. Conclusions. Our results suggest that at least some individuals become 'hooked' or progress in the smoking habit, in part, because of a vulnerability to nicotine withdrawal.
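For readers unfamiliar with how such heritability percentages arise from twin data, the classical shortcut is Falconer's formula; the sketch below uses invented correlations and is far simpler than the formal biometrical genetic modelling the study actually performed.

```python
# Falconer's formula: partition phenotypic variance from monozygotic (MZ)
# and dizygotic (DZ) twin correlations. The correlations here are invented.
def falconer(r_mz, r_dz):
    a2 = 2 * (r_mz - r_dz)   # additive genetic variance (heritability)
    c2 = r_mz - a2           # shared-environment variance
    e2 = 1 - r_mz            # unique environment + measurement error
    return a2, c2, e2

print(falconer(0.40, 0.25))  # -> (0.30, 0.10, 0.60), i.e. within the 26-43% range
```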
Abstract:
Dynamic spectrum management (DSM) comprises a new set of techniques for multiuser power allocation and/or detection in digital subscriber line (DSL) networks. At the Alcatel Research and Innovation Labs, we have recently developed a DSM test bed, which allows the performance of DSM algorithms to be evaluated in practice. With this test bed, we have evaluated the performance of a DSM level-1 algorithm known as iterative water-filling in an ADSL scenario. This paper describes, on the one hand, the performance gains achieved with iterative water-filling and, on the other hand, the nonstationary noise robustness of DSM-enabled ADSL modems. It will be shown that DSM trades off nonstationary noise robustness for performance improvements. A new bit-swap procedure is then introduced to increase noise robustness when applying DSM.
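Iterative water-filling itself is a standard distributed algorithm: each modem repeatedly water-fills its transmit power against the background noise plus the crosstalk produced by the other modems. The sketch below is a generic textbook form with invented channel data, not the Alcatel test-bed code.

```python
import numpy as np

K, N = 8, 2                              # tones, users (toy sizes)
rng = np.random.default_rng(0)
H = rng.uniform(0.05, 1.0, (N, N, K))    # |channel gain|^2; H[n, m, k]: user m -> n
for n in range(N):
    H[n, n] += 1.0                       # direct channels dominate crosstalk
noise, P_budget = 1e-3, 1.0

def waterfill(g, budget, iters=60):
    """Water-fill a power budget over tones with inverse-CNR values g[k],
    bisecting on the water level mu."""
    lo, hi = 0.0, g.max() + budget
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - g, 0).sum() > budget:
            hi = mu
        else:
            lo = mu
    return np.maximum(lo - g, 0)

P = np.zeros((N, K))
for _ in range(20):                      # Gauss-Seidel sweeps over users
    for n in range(N):
        interf = noise + sum(H[n, m] * P[m] for m in range(N) if m != n)
        P[n] = waterfill(interf / H[n, n], P_budget)

for n in range(N):
    interf = noise + sum(H[n, m] * P[m] for m in range(N) if m != n)
    rate = np.log2(1 + H[n, n] * P[n] / interf).sum()
    print("user %d rate: %.2f bit/symbol" % (n, rate))
```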
Abstract:
This thesis investigates the modelling of drying processes for the promotion of market-led Demand Side Management (DSM) as applied to the UK Public Electricity Suppliers. A review of DSM in the electricity supply industry is provided, together with a discussion of the relevant drivers supporting market-led DSM and energy services (ES). The potential opportunities for ES in a fully deregulated energy market are outlined. It is suggested that targeted industrial sector energy efficiency schemes offer significant opportunity for long term customer and supplier benefit. On a process level, industrial drying is highlighted as offering significant scope for the application of energy services. Drying is an energy-intensive process used widely throughout industry. The results of an energy survey suggest that 17.7 per cent of total UK industrial energy use derives from drying processes. Comparison with published work indicates that energy use for drying shows an increasing trend against a background of reducing overall industrial energy use. Airless drying is highlighted as offering potential energy saving and production benefits to industry. To this end, a comprehensive review of the novel airless drying technology and its background theory is made. Advantages and disadvantages of airless operation are defined and the limited market penetration of airless drying is identified, as are the key opportunities for energy saving. Limited literature has been found which details the modelling of energy use for airless drying. A review of drying theory and previous modelling work is made in an attempt to model energy consumption for drying processes. The history of drying models is presented as well as a discussion of the different approaches taken and their relative merits. The viability of deriving energy use from empirical drying data is examined. Adaptive neuro fuzzy inference systems (ANFIS) are successfully applied to the modelling of drying rates for 3 drying technologies, namely convective air, heat pump and airless drying. The ANFIS systems are then integrated into a novel energy services model for the prediction of relative drying times, energy cost and atmospheric carbon dioxide emission levels. The author believes that this work constitutes the first to use fuzzy systems for the modelling of drying performance as an energy services approach to DSM. To gain an insight into the 'real world' use of energy for drying, this thesis presents a unique first-order energy audit of every ceramic sanitaryware manufacturing site in the UK. Previously unknown patterns of energy use are highlighted. Supplementary comments on the timing and use of drying systems are also made. The limitations of such large scope energy surveys are discussed.
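To make the modelling approach concrete, the sketch below evaluates a tiny first-order Sugeno fuzzy system, the kind of rule base ANFIS tunes from data. The inputs, membership functions and rule consequents are invented for illustration and are not the thesis's trained drying models.

```python
import numpy as np

def gaussmf(x, c, s):
    """Gaussian membership function."""
    return np.exp(-((x - c) ** 2) / (2 * s ** 2))

def drying_rate(temp_c, moisture):
    """Two-rule first-order Sugeno system: 'hot and wet -> fast',
    'cool and dry -> slow'. All constants are invented."""
    w1 = gaussmf(temp_c, 80, 15) * gaussmf(moisture, 0.6, 0.2)  # rule firing strengths
    w2 = gaussmf(temp_c, 40, 15) * gaussmf(moisture, 0.2, 0.2)
    f1 = 0.020 * temp_c + 0.50 * moisture    # linear rule consequents
    f2 = 0.005 * temp_c + 0.10 * moisture
    return (w1 * f1 + w2 * f2) / (w1 + w2)   # weighted-average defuzzification

print(round(drying_rate(75, 0.55), 3))       # drying rate, in assumed units
```

ANFIS training adjusts the membership-function centres/widths and the consequent coefficients against measured drying curves; only the fixed-parameter inference step is shown here.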
Abstract:
Background: Despite initial concerns about the sensitivity of the proposed diagnostic criteria for DSM-5 Autism Spectrum Disorder (ASD; e.g. Gibbs et al., 2012; McPartland et al., 2012), evidence is growing that the DSM-5 criteria provide an inclusive description with both good sensitivity and specificity (e.g. Frazier et al., 2012; Kent, Carrington et al., 2013). The capacity of the criteria to provide high levels of sensitivity and specificity comparable with DSM-IV-TR, however, relies on careful measurement to ensure that appropriate items from diagnostic instruments map onto the new DSM-5 descriptions. Objectives: To use an existing DSM-5 diagnostic algorithm (Kent, Carrington et al., 2013) to identify a set of 'essential' behaviors sufficient to make a reliable and accurate diagnosis of DSM-5 Autism Spectrum Disorder (ASD) across age and ability level. Methods: Specific behaviors were identified and tested from the recently published DSM-5 algorithm for the Diagnostic Interview for Social and Communication Disorders (DISCO). Analyses were run on existing DISCO datasets, with a total participant sample size of 335. Three studies provided step-by-step development towards the identification of a minimum set of items. Study 1 identified the most highly discriminating items (p<.0001). Study 2 used a lower selection threshold than Study 1 (p<.05) to facilitate better representation of the full DSM-5 ASD profile. Study 3 included additional items previously reported as significantly more frequent in individuals with higher ability. The discriminant validity of all three item sets was tested using Receiver Operating Characteristic curves. Finally, sensitivity across age and ability was investigated in a subset of individuals with ASD (n=190). Results: Study 1 identified an item set (14 items) with good discriminant validity, but one that predominantly measured social-communication behaviors (11/14). The Study 2 item set (48 items) better represented the full DSM-5 ASD profile and had good discriminant validity, but it lacked sensitivity for individuals with higher ability. The final, adjusted Study 3 item set (54 items) improved sensitivity for individuals with higher ability, and its performance was comparable to the published DISCO DSM-5 algorithm. Conclusions: This work represents a first attempt to derive a reduced set of behaviors for DSM-5 directly from an existing standardized ASD developmental history interview. Further work involving existing ASD diagnostic tools with community-based and well-characterized research samples will be required to replicate these findings and exploit their potential to contribute to a more efficient and focused ASD diagnostic process.
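Testing the discriminant validity of a candidate item set with an ROC curve reduces, in its simplest form, to an AUC estimate over case and non-case item counts. The sketch below uses simulated scores, not the DISCO datasets.

```python
import numpy as np

def auc(pos, neg):
    """Mann-Whitney estimate of the area under the ROC curve."""
    pos, neg = np.asarray(pos), np.asarray(neg)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

rng = np.random.default_rng(7)
n_items = 14                     # e.g. the size of the Study 1 item set
asd = rng.binomial(1, 0.7, (120, n_items)).sum(axis=1)       # endorsed items
non_asd = rng.binomial(1, 0.3, (120, n_items)).sum(axis=1)
print("AUC for the candidate item set: %.3f" % auc(asd, non_asd))
```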
Abstract:
Background Introduction of the proposed criteria for DSM-5 Autism Spectrum Disorder (ASD) has raised concerns that some individuals currently meeting diagnostic criteria for Pervasive Developmental Disorder (PDD; DSM-IV-TR/ICD-10) will not qualify for a diagnosis under the proposed changes. To date, reports of the sensitivity and specificity of the new criteria have been inconsistent across studies. No study has yet considered how changes at the 'subdomain' level might affect overall sensitivity and specificity, and few have included individuals of different ages and ability levels. Methods A set of DSM-5 ASD algorithms was developed using items from the Diagnostic Interview for Social and Communication Disorders (DISCO). The number of items required for each DSM-5 subdomain was defined either according to the criteria specified by DSM-5 (Initial Algorithm), by a statistical approach (Youden J Algorithm), or so as to minimise the number of false positives while maximising sensitivity (Modified Algorithm). The algorithms were designed, tested and compared in two independent samples (Sample 1, N = 82; Sample 2, N = 115), while sensitivity was assessed across age and ability levels in an additional dataset of individuals with an ICD-10 PDD diagnosis (Sample 3, N = 190). Results Sensitivity was highest in the Initial Algorithm, which had the poorest specificity. Although the Youden J Algorithm had excellent specificity, its sensitivity was significantly lower than that of the Modified Algorithm, which had both good sensitivity and specificity. Relaxing the domain A rules improved the sensitivity of the Youden J Algorithm, but it remained less sensitive than the Modified Algorithm. Moreover, this was the only algorithm with variable sensitivity across age. All versions of the algorithm performed well across ability levels. Conclusions This study demonstrates that good levels of both sensitivity and specificity can be achieved for a diagnostic algorithm adhering to the DSM-5 criteria that is suitable across age and ability level.
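The Youden J rule referred to above picks the cut-off that maximizes J = sensitivity + specificity - 1. A minimal sketch, with invented scores in place of the DISCO item counts:

```python
import numpy as np

def youden_j(scores, labels, threshold):
    """J = sensitivity + specificity - 1 at a given cut-off."""
    pred = scores >= threshold
    tp = np.sum(pred & (labels == 1)); fn = np.sum(~pred & (labels == 1))
    tn = np.sum(~pred & (labels == 0)); fp = np.sum(pred & (labels == 0))
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens + spec - 1, sens, spec

rng = np.random.default_rng(1)
scores = np.concatenate([rng.normal(30, 6, 100),    # cases (toy item counts)
                         rng.normal(18, 6, 100)])   # non-cases
labels = np.array([1] * 100 + [0] * 100)
best = max(range(10, 45), key=lambda t: youden_j(scores, labels, t)[0])
print("best cut-off:", best, "-> (J, sens, spec):", youden_j(scores, labels, best))
```

Maximizing J favours balanced errors, which is consistent with the abstract's finding that the Youden J Algorithm gains specificity at some cost in sensitivity.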