924 results for Programmable Logic Array
Abstract:
Machine ethics is an interdisciplinary field of inquiry that emerges from the need to imbue autonomous agents with the capacity for moral decision-making. While some approaches provide implementations in Logic Programming (LP) systems, they have not exploited LP-based reasoning features that appear essential for moral reasoning. This PhD thesis investigates further the suitability of LP for machine ethics, notably through a combination of LP-based reasoning features and techniques available in LP systems. Moral facets studied in moral philosophy and psychology that are amenable to computational modeling are identified and mapped to appropriate LP concepts for representing and reasoning about them. The main contributions of the thesis are twofold. First, novel approaches are proposed for employing tabling in contextual abduction and updating, individually and combined, plus an LP approach to counterfactual reasoning; the latter is implemented on top of the combined abduction and updating technique with tabling. All of these are important for modeling various aspects of the aforementioned moral facets. Second, a variety of LP-based reasoning features are applied to model the identified moral facets, through moral examples taken off the shelf from the morality literature. These applications include: (1) modeling moral permissibility according to the Doctrines of Double Effect (DDE) and Triple Effect (DTE), demonstrating deontological and utilitarian judgments via integrity constraints (in abduction) and preferences over abductive scenarios; (2) modeling moral reasoning under uncertainty of actions, via abduction and probabilistic LP; (3) modeling moral updating (which allows an agent to adopt other, possibly overriding, moral rules on top of those it currently follows) via the integration of tabling in contextual abduction and updating; and (4) modeling moral permissibility and its justification via counterfactuals, where counterfactuals are used to formulate DDE.
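A minimal sketch, in Python rather than the thesis's LP formulation, of how a DDE-style permissibility check and a preference over abductive scenarios can interact: candidate scenarios are first filtered by a deontological constraint (harm may only be a side effect, never an intended means), and the survivors are then ranked by a utilitarian measure. The scenario names and attributes below are hypothetical illustrations.

```python
# Hedged sketch: DDE-style filtering of candidate scenarios, followed by a
# utilitarian preference over the scenarios that survive the constraint.
# All scenario data below is hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    harm_as_means: bool   # is the harm an intended means to the goal?
    net_utility: int      # lives saved minus lives lost (toy measure)

def dde_permissible(s: Scenario) -> bool:
    """Deontological constraint: harm may only be a side effect."""
    return not s.harm_as_means

def preferred(scenarios):
    """Among permissible scenarios, prefer the highest net utility."""
    admissible = [s for s in scenarios if dde_permissible(s)]
    return max(admissible, key=lambda s: s.net_utility, default=None)

if __name__ == "__main__":
    trolley = [
        Scenario("divert_to_side_track", harm_as_means=False, net_utility=4),
        Scenario("push_man_off_bridge", harm_as_means=True, net_utility=4),
        Scenario("do_nothing", harm_as_means=False, net_utility=0),
    ]
    print(preferred(trolley).name)  # -> divert_to_side_track
```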
Abstract:
The application of Experimental Design techniques has proven essential in various research fields, due to its statistical capability of capturing the effect of interactions among independent variables, known as factors, on a system's response. The advantages of this methodology can be summarized as more resource- and time-efficient experimentation that still provides more accurate results. This research focuses on the quantification of the extraction of four antioxidants, at two different concentrations, prepared according to an experimental procedure and measured with a Photodiode Array Detector. Experimental planning followed a Central Composite Design, a type of DoE that allows the quadratic component to be included in Response Surfaces, i.e. the study of pure curvature in the fitted model. The work analyzes the responses, namely the peak areas obtained from the chromatograms produced by the detector system, to determine whether the factors considered, selected from an extensive literature review, produced the expected effect on the response. Completion of this work will allow conclusions to be drawn about which factors should be considered in optimization studies of antioxidant extraction from an Oca (Oxalis tuberosa) matrix.
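A hedged sketch of the planning step, not the thesis's actual design: the code below builds the points of a two-factor Central Composite Design in coded units and fits a quadratic response-surface model by least squares. The factor count, coded levels and simulated peak areas are all hypothetical.

```python
# Hedged sketch: a two-factor Central Composite Design in coded units and a
# quadratic response-surface fit. Factors and responses are hypothetical.
import numpy as np

alpha = np.sqrt(2)  # rotatable axial distance for k = 2 factors
factorial = [(-1, -1), (1, -1), (-1, 1), (1, 1)]
axial = [(-alpha, 0), (alpha, 0), (0, -alpha), (0, alpha)]
center = [(0, 0)] * 3
design = np.array(factorial + axial + center)   # coded (x1, x2) settings

# Simulated peak areas, standing in for the chromatographic responses.
rng = np.random.default_rng(0)
x1, x2 = design[:, 0], design[:, 1]
y = 50 + 4 * x1 - 3 * x2 - 2 * x1**2 - 1.5 * x2**2 + x1 * x2 \
    + rng.normal(0, 0.5, len(design))

# Quadratic model: y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coef, 2))  # estimated response-surface coefficients
```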
Abstract:
The amorphous silicon photo-sensor studied in this thesis is a double p-i-n structure (p(a-SiC:H)-i’(a-SiC:H)-n(a-SiC:H)-p(a-SiC:H)-i(a-Si:H)-n(a-Si:H)) sandwiched between two transparent contacts deposited on transparent glass, thus allowing illumination on both sides, and responding to wavelengths from the ultraviolet through the visible to the near-infrared range. The front illumination surface, on the glass side, is used for light signal inputs. Both surfaces are used for optical bias, which changes the dynamic characteristics of the photo-sensor, resulting in different outputs for the same input. Experimental studies were carried out with the photo-sensor to evaluate its applicability in multiplexing and demultiplexing several data communication channels. The digital light signals were defined to implement simple logic operations such as NOT, AND and OR, and complex ones such as XOR, MAJ, the full adder and a memory effect. A programmable pattern emission system was built, along with systems for the validation and recovery of the obtained signals. This photo-sensor has applications in optical communications with several wavelengths, as a wavelength detector, and for executing logic operations directly on digital light input signals.
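As a purely illustrative software sketch, not the optical implementation itself, the full adder mentioned above can be expressed with exactly the XOR and MAJ operations that the photo-sensor realizes on digital light signals; the bit encoding used here is assumed.

```python
# Hedged sketch: a 1-bit full adder built from XOR and MAJ, the same logic
# operations the photo-sensor performs on digital light signals.
def xor3(a: int, b: int, c: int) -> int:
    return a ^ b ^ c

def maj(a: int, b: int, c: int) -> int:
    return (a & b) | (a & c) | (b & c)

def full_adder(a: int, b: int, carry_in: int):
    return xor3(a, b, carry_in), maj(a, b, carry_in)  # (sum, carry_out)

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            for cin in (0, 1):
                s, cout = full_adder(a, b, cin)
                assert s + 2 * cout == a + b + cin
    print("full adder truth table verified")
```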
Abstract:
About 90% of breast cancers do not lead to death if detected at an early stage and treated properly. Indeed, no specific cause for the illness is yet known; rather than a single origin, it may be a set of associated factors that determines the onset of the disease. Undeniably, some factors do appear to be associated with an increased risk of the malady. In the present study, different breast cancer risk assessment models were considered. Our intention is to develop a hybrid decision support system under a formal framework based on Logic Programming for knowledge representation and reasoning, complemented with a computational approach centered on Artificial Neural Networks, to evaluate the risk of developing breast cancer and the respective Degree-of-Confidence placed on such a prediction.
Abstract:
Today's advances in high-performance computing are driven by the parallel processing capabilities of available hardware architectures. These architectures enable the acceleration of algorithms when the algorithms are properly parallelized and exploit the specific processing power of the underlying architecture. Most current processors are targeted at general-purpose use and integrate several processor cores on a single chip, resulting in what is known as a Symmetric Multiprocessing (SMP) unit. Nowadays even desktop computers make use of multicore processors, and the industry trend is to increase the number of integrated processor cores as technology matures. On the other hand, Graphics Processor Units (GPU), originally designed to handle only video processing, have emerged as interesting alternatives for algorithm acceleration. Currently available GPUs are able to run from 200 to 400 threads in parallel. Scientific computing can be implemented on this hardware thanks to the programmability of new GPUs, which have come to be known as General Processing Graphics Processor Units (GPGPU). However, GPGPUs offer little memory compared with that available to general-purpose processors; thus, the implementation of algorithms needs to be addressed carefully. Finally, Field Programmable Gate Arrays (FPGA) are programmable devices which can implement hardware logic with low latency, high parallelism and deep pipelines. These devices can be used to implement specific algorithms that need to run at very high speeds. However, programming them is harder than software approaches and debugging is typically time-consuming. In this context, where several alternatives for speeding up algorithms are available, our work aims at determining the main features of these architectures and developing the know-how required to accelerate algorithm execution on them. We look at identifying which algorithms fit best on a given architecture, and at combining the architectures so that they complement each other beneficially. In particular, we take into account the level of data dependence, the need for synchronization during parallel processing, the size of the data to be processed, and the complexity of parallel programming on each type of hardware.
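A minimal sketch, using Python's standard process pool rather than any of the platforms above, of the data-dependence criterion the work examines: an element-wise map has independent iterations and parallelizes trivially, whereas a prefix sum carries a dependence from one step to the next and must be restructured (e.g. as a scan) before it can exploit SMP, GPGPU or FPGA parallelism.

```python
# Hedged sketch: an element-wise map has no data dependence and parallelizes
# trivially, while a prefix sum carries a dependence from step to step.
from concurrent.futures import ProcessPoolExecutor

def square(x: int) -> int:
    return x * x

def prefix_sum(values):
    """Each output depends on the previous one, so this loop is sequential."""
    out, acc = [], 0
    for v in values:
        acc += v
        out.append(acc)
    return out

if __name__ == "__main__":
    data = list(range(1_000))

    # Independent work items: safe to distribute across processes.
    with ProcessPoolExecutor() as pool:
        squares = list(pool.map(square, data, chunksize=100))

    # Dependent work: stays sequential unless the algorithm is restructured.
    sums = prefix_sum(data)

    print(squares[-1], sums[-1])
```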
Abstract:
Magdeburg, Univ., Faculty of Electrical Engineering and Information Technology, Dissertation, 2011
Abstract:
Multi-core processing is a design philosophy that has become mainstream in scientific and engineering applications. The increasing performance and gate capacity of recent FPGA devices have allowed complex logic systems to be implemented on a single programmable device. Using VHDL, we present an implementation of a multi-core processor built around the PLASMA IP core, which is based on most of the MIPS I ISA, give an overview of the processor architecture, and share the execution results.
Abstract:
The final objective of this project was the development of a control system based on fuzzy logic that allows the drying process to be regulated continuously and with less dependence on the experience of expert personnel, while also avoiding crust formation. A series of partial objectives was also set out whose achievement would, in addition to reaching the final objective described, provide additional scientific knowledge. The results are therefore summarized below in relation to the proposed partial objectives. As a preliminary step, before addressing these objectives, an experimental drying rig was designed and built in which the temperature, relative humidity and air velocity were precisely controlled.
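A minimal sketch, assuming hypothetical membership functions and a two-rule base rather than the project's actual controller, of how a Mamdani-style fuzzy controller could map the measured relative humidity of the drying air to a temperature setpoint:

```python
# Hedged sketch of a tiny Mamdani-style fuzzy controller: humidity in, drying
# temperature setpoint out. Membership functions and rules are hypothetical.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def temperature_setpoint(humidity_pct: float) -> float:
    # Fuzzify the input (relative humidity of the drying air, in %).
    low_h = tri(humidity_pct, 0, 20, 50)
    high_h = tri(humidity_pct, 30, 70, 100)

    # Rule consequents over a candidate temperature universe (degC).
    temps = np.linspace(30, 80, 501)
    gentle = tri(temps, 30, 40, 55)   # "if exhaust air is dry, dry gently"
    strong = tri(temps, 50, 65, 80)   # "if exhaust air is humid, dry harder"

    # Mamdani inference (clip each consequent) and aggregation (max).
    aggregated = np.maximum(np.minimum(low_h, gentle),
                            np.minimum(high_h, strong))

    # Centroid defuzzification.
    return float(np.sum(temps * aggregated) / np.sum(aggregated))

if __name__ == "__main__":
    for h in (15, 45, 85):
        print(h, "%RH ->", round(temperature_setpoint(h), 1), "degC")
```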
Abstract:
Research project carried out during a stay at the Department for Feed and Food Hygiene of the National Veterinary Institute, Norway, between November and December 2006. Cereal grains can be contaminated with different Fusarium species capable of producing highly toxic secondary metabolites such as trichothecenes, fumonisins or moniliformins. The correct identification of these species is of great importance for risk assessment in human and animal health. Identification of Fusarium based on morphology requires taxonomic expertise and time, and most molecular methods allow the identification of only a single target species. In contrast, microarray technology offers the parallel analysis of a large number of DNA targets. In this work, an array has been developed for the identification of the main toxigenic Fusarium species of Northern and Southern Europe. An existing array for the detection of the trichothecene- and moniliformin-producing Fusarium species (predominant in Northern Europe) has been extended with 18 additional DNA probes that allow the identification of the most abundant toxigenic species in Southern Europe, which mainly produce fumonisins. The capture probes were designed on the basis of the translation elongation factor 1-alpha (TEF-1alpha). Samples are analysed by a single PCR that amplifies part of TEF-1alpha, followed by hybridization to the Fusarium chip. The results are visualized using a colorimetric detection method. The Fusarium chip developed can become a useful tool of great interest for the analysis of cereals in the food chain.
Abstract:
Candidaemia is the fourth most common cause of bloodstream infection, with a high mortality rate of up to 40%. Identification of host genetic factors that confer susceptibility to candidaemia may aid in designing adjunctive immunotherapeutic strategies. Here we hypothesize that variation in immune genes may predispose to candidaemia. We analyse 118,989 single-nucleotide polymorphisms (SNPs) across 186 loci known to be associated with immune-mediated diseases in the largest candidaemia cohort to date of 217 patients of European ancestry and a group of 11,920 controls. We validate the significant associations by comparison with a disease-matched control group. We observe significant association between candidaemia and SNPs in the CD58 (P = 1.97 × 10⁻¹¹; odds ratio (OR) = 4.68), LCE4A-C1orf68 (P = 1.98 × 10⁻¹⁰; OR = 4.25) and TAGAP (P = 1.84 × 10⁻⁸; OR = 2.96) loci. Individuals carrying two or more risk alleles have an increased risk for candidaemia of 19.4-fold compared with individuals carrying no risk allele. We identify three novel genetic risk factors for candidaemia, which we subsequently validate for their role in antifungal host defence.
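A hedged illustration of the kind of per-SNP association test behind such odds ratios, using SciPy's Fisher's exact test; the 2×2 carrier counts below are hypothetical and are not the study's data.

```python
# Hedged sketch: testing association between carriage of a risk allele and
# candidaemia with Fisher's exact test. All counts below are hypothetical.
from scipy.stats import fisher_exact

table = [
    [40, 160],     # cases:    [risk-allele carriers, non-carriers]
    [900, 11000],  # controls: [risk-allele carriers, non-carriers]
]

odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, P = {p_value:.2e}")
```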
Abstract:
This project focuses on the design of a microstrip antenna for GNSS. A GNSS antenna must have input impedance matching and right-hand circular polarization, as its main specifications, over the 1.15-1.6 GHz range. The microstrip antenna feed type with the widest matching bandwidth is aperture-coupled feeding. If two orthogonal apertures are introduced into the antenna and fed with a 90° phase difference between them, circular polarization is achieved. Separating the apertures reduces the power transfer between them and decreases the cross-polarization gain. The feed network designed is a Wilkinson divider with a λ/4 line at the center frequency, even though the phase difference at the band edges is then no longer 90°. A 90° hybrid was also tried as a feed network, but the high value of the antenna's S21 parameter makes matching at the hybrid's input impossible.
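A back-of-the-envelope sketch, assuming a hypothetical effective permittivity rather than the project's actual substrate, of the λ/4 line length at the band's centre frequency and of why the phase difference drifts away from 90° at the band edges:

```python
# Hedged sketch: length of a quarter-wave delay line at the GNSS band centre
# and its phase at the band edges. The effective permittivity is assumed.
import math

c = 299_792_458.0            # speed of light, m/s
f_low, f_high = 1.15e9, 1.6e9
f0 = (f_low + f_high) / 2    # centre of the 1.15-1.6 GHz band
eps_eff = 2.2                # hypothetical effective permittivity

guided_wavelength = c / (f0 * math.sqrt(eps_eff))
line_length = guided_wavelength / 4
print(f"lambda/4 at {f0/1e9:.3f} GHz: {line_length*1000:.1f} mm")

# The line gives exactly 90 deg only at f0; the phase scales with frequency.
for f in (f_low, f0, f_high):
    phase = 90.0 * f / f0
    print(f"{f/1e9:.2f} GHz -> {phase:.1f} deg")
```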
Abstract:
Genetic determinants of blood pressure are poorly defined. We undertook a large-scale, gene-centric analysis to identify loci and pathways associated with ambulatory systolic and diastolic blood pressure. We measured 24-hour ambulatory blood pressure in 2020 individuals from 520 white European nuclear families (the Genetic Regulation of Arterial Pressure of Humans in the Community Study) and genotyped their DNA using the Illumina HumanCVD BeadChip array, which contains ≈50 000 single nucleotide polymorphisms in >2000 cardiovascular candidate loci. We found a strong association between the rs13306560 polymorphism in the promoter region of MTHFR and CLCN6 and mean 24-hour diastolic blood pressure; each minor allele copy of rs13306560 was associated with 2.6 mm Hg lower mean 24-hour diastolic blood pressure (P = 1.2 × 10⁻⁸). rs13306560 was also associated with clinic diastolic blood pressure in a combined analysis of 8129 subjects from the Genetic Regulation of Arterial Pressure of Humans in the Community Study, the CoLaus Study, and the Silesian Cardiovascular Study (P = 5.4 × 10⁻⁶). Additional analysis of associations between variants in gene ontology-defined pathways and mean 24-hour blood pressure in the Genetic Regulation of Arterial Pressure of Humans in the Community Study showed that cell survival control signaling cascades could play a role in blood pressure regulation. There was also a significant overrepresentation of rare variants (minor allele frequency <0.05) among polymorphisms showing at least nominal association with mean 24-hour blood pressure, indicating that a considerable proportion of its heritability may be explained by uncommon alleles. Through a large-scale gene-centric analysis of ambulatory blood pressure, we identified an association of a novel variant at the MTHFR/CLCN6 locus with diastolic blood pressure and provided new insights into the genetic architecture of blood pressure.