911 results for ChIP-seq


Relevance:

10.00%

Publisher:

Abstract:

The demand for biomass for bioenergy has increased rapidly in industrialized countries in recent years. Biogenic energy carriers are known to reduce CO2 emissions. However, resource-inefficient biomass production has often caused negative environmental impacts, e.g. biodiversity losses, nitrate leaching, and erosion. These detrimental effects stem mainly from annual crops. The aim of modern bioenergy cropping systems is therefore to combine yield stability and environmental benefits through the establishment of mixed-cropping systems. Particular emphasis is placed on perennial crops, which are perceived as environmentally superior to annual crops. Agroforestry systems represent such mixed perennial cropping systems and consist of a mix of trees and arable crops or grassland within the same area of land. Agroforestry practices vary across the globe; alley cropping is a type of agroforestry system that is well adapted to the temperate zone and its high degree of mechanization. Trees are planted in rows and crops in the alleyways, which facilitates management by machinery. This study examined a young alley cropping system of willows and two grassland mixtures for bioenergy provision under temperate climate conditions. The first part of the thesis identified possible competition effects between the willows and the two grassland mixtures. Since light appeared to be the factor most affecting the yield performance of the understory in temperate agroforestry systems, a two-year in situ artificial shade experiment was established over a separate clover-grass stand to quantify the effects of shade. Data on possible below- and aboveground interactions among the willows and the two grassland mixtures, and their effects on productivity, sward composition, and quality, were monitored along a tree-grassland interface within the alleys. In the second part, the productivity of the alley cropping system was examined over a three-year period and compared to separate grassland and willow stands as controls. Three different conversion technologies (combustion of hay, integrated generation of solid fuel and biogas from biomass, and whole crop digestion) were applied to grassland biomass as feedstock and analyzed for their energetic potential. The energetic potential of willow wood chips was calculated using combustion as the conversion technique. Net energy balances of separate grassland stands, the agroforestry system, and pure willow stands were used to evaluate their energy efficiency. Results of the two-year artificial shade experiment showed that severe shade (80% light reduction) halved grassland productivity on average compared to a non-shaded control. White clover, a heliophilous plant, responded sensitively to limited radiation, and its dry matter contribution in the sward decreased with increasing shade, whereas non-leguminous forbs (mainly segetal species) benefited. Changes in nutritive quality could not be confirmed by this experiment. The study on interactions within the alleys of the young agroforestry system outlined changes in incident light, soil temperature, and sward composition of the clover-grass along the tree-grassland interface. Hardly any effects of the trees on precipitation, soil moisture, or understory productivity occurred along the interface during the two-year experiment.
Considering the results, the productivity and net energy yield of the alley cropping system were lower than those of pure grassland stands, irrespective of the grassland seed mixture or fertilization, but higher than those of pure willow stands. The comparison of three different energetic conversion techniques for the grassland biomass showed the highest net energy yields for hay combustion, whereas the integrated generation of solid fuel and biogas from biomass (IFBB) and whole crop digestion performed similarly. However, due to the low fuel quality of hay, its direct combustion cannot be recommended as a viable conversion technique, whereas IFBB fuels were of a similar quality to wood chips from willow.
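
For orientation, a net energy balance of the kind used here is simply the energy delivered by the converted biomass minus the cumulative energy invested in producing and converting it. The sketch below is my own minimal illustration of that bookkeeping, not the thesis's calculation; every numeric value is a placeholder.

```python
"""
Minimal sketch of a per-hectare net energy balance for a bioenergy feedstock,
assuming net yield = (dry-matter yield * heating value * conversion
efficiency) - cumulative energy inputs. Illustration only; the parameter
values are placeholders, not results from the thesis.
"""

def net_energy_yield_GJ_per_ha(dm_yield_t_ha, heating_value_GJ_per_t,
                               conversion_efficiency, energy_inputs_GJ_per_ha):
    gross = dm_yield_t_ha * heating_value_GJ_per_t * conversion_efficiency
    return gross - energy_inputs_GJ_per_ha

if __name__ == "__main__":
    # Placeholder numbers for two hypothetical conversion routes of the same feedstock.
    for name, eff in [("hay combustion", 0.80), ("whole crop digestion", 0.45)]:
        net = net_energy_yield_GJ_per_ha(dm_yield_t_ha=8.0,
                                         heating_value_GJ_per_t=17.0,
                                         conversion_efficiency=eff,
                                         energy_inputs_GJ_per_ha=15.0)
        print(f"{name}: {net:.1f} GJ/ha net")
```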

Relevance:

10.00%

Publisher:

Abstract:

The miniaturization of conventional laboratory and analysis technology plays a central role in the life sciences and medical diagnostics. Novel and inexpensive technology platforms such as lab-on-a-chip (LOC) or micro total analysis systems (µTAS) promise great societal benefit, particularly in personalized medicine, for the early and non-invasive diagnosis of disease-specific indicators. The point-of-care use of inexpensive and reliable microchips built to high quality standards eliminates costly and time-consuming central laboratory analyses and at the same time offers opportunities for global deployment, especially in emerging and developing countries. The technical challenges in realizing modern LOC systems lie in the controlled and reliable handling of minute liquid volumes and their diagnostic detection. In this context, the successful integration of remotely controllable transport of biocompatible magnetic micro- and nanoparticles is considered to play a key role. The reason lies in their versatility, which derives from their unique material properties. Applications range from accelerated, active mixing of microfluidic volumes, through increasing the molecular interaction rate in biosensors, to the isolation and purification of disease-specific indicators. Approaches described in the literature are based on the dynamic transformation of a macroscopic, time-dependent external magnetic field into a microscopically varying potential energy landscape above magnetically structured substrates, resulting in directed and remotely controllable particle motion. However, central issues such as the theoretical modeling and experimental characterization of the magnetic field landscape close to the surface of the structured substrates, as well as the theoretical description of the mixing effects, have so far not been examined in detail, although they are essential for a detailed understanding of the underlying mechanisms and hence for a market entry of future devices. In the present work, a novel approach was therefore pursued for integrating a concept for the remotely controllable transport of magnetic particles into modern LOC systems using magnetically structured exchange-bias (EB) thin-film systems. The results show that ion-bombardment-induced magnetic patterning (IBMP) of EB systems is suitable for producing tailored magnetic field landscapes (MFL) above the substrate surface, whose strength and spatial profile on nano- and micrometer length scales can be tuned deliberately by modifying the material parameters of the EB system via IBMP. In the course of this work, modern experimental techniques (scanning Hall probe microscopy and scanning magnetoresistive microscopy) were used for the first time, in combination with a purpose-built theoretical model, to map the MFL at different distance ranges from the substrate surface.
Based on the quantitative knowledge of the MFL, a novel concept for the remotely controllable transport of magnetic particles was developed, in which particle velocities in the range of 100 µm/s can be achieved using external magnetic field strengths of only a few millitesla, without modifying the magnetic state of the substrate. The investigations further show that the strength of the external magnetic field, the strength and gradient of the MFL, the field-induced magnetic moment of the particles, as well as the particle size and the artificially adjustable distance of the particles from the substrate surface, can be used as key parameters to quantitatively tune the particle velocity. Finally, a numerical simulation model was successfully developed that enables the quantitative theoretical study of active mixing based on the presented particle transport concept, so that the geometry of the microfluidic channel structures on an LOC system can be tailored to specific applications.
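
To make the reported orders of magnitude tangible, the following sketch balances the point-dipole force on a superparamagnetic bead against Stokes drag in water. It is a generic textbook estimate under assumed parameter values, not the quantitative model developed in the thesis.

```python
"""
Order-of-magnitude sketch (not the thesis's actual model): estimates the
velocity of a superparamagnetic bead dragged along a magnetic field
landscape, assuming a point-dipole force balanced by Stokes drag in water.
All parameter values in __main__ are illustrative assumptions.
"""
import math

MU_0 = 4 * math.pi * 1e-7      # vacuum permeability [T*m/A]

def bead_velocity(radius_m, chi, B_T, grad_B_T_per_m, viscosity_Pa_s=1e-3):
    """Terminal velocity of a bead: magnetic dipole force / Stokes drag."""
    volume = 4 / 3 * math.pi * radius_m**3
    moment = chi * volume * B_T / MU_0     # induced moment m = chi*V*B/mu0 (linear regime)
    force = moment * grad_B_T_per_m        # F ~ m * dB/dx for a point dipole
    drag_coeff = 6 * math.pi * viscosity_Pa_s * radius_m
    return force / drag_coeff              # terminal velocity [m/s]

if __name__ == "__main__":
    # Hypothetical values: 1 um bead, chi ~ 1, a few mT field with a strong
    # near-surface gradient from the micrometer-scale magnetic pattern.
    v = bead_velocity(radius_m=0.5e-6, chi=1.0, B_T=5e-3, grad_B_T_per_m=500.0)
    print(f"estimated bead velocity: {v * 1e6:.0f} um/s")
```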

Relevance:

10.00%

Publisher:

Abstract:

Signalling off-chip requires significant current. As a result, a chip's power-supply current changes drastically during certain output-bus transitions. These current fluctuations cause a voltage drop between the chip and circuit board due to the parasitic inductance of the power-supply package leads. Digital designers often go to great lengths to reduce this "transmitted" noise. Cray, for instance, carefully balances output signals using a technique called differential signalling to guarantee a chip has constant output current. Transmitted-noise reduction costs Cray a factor of two in output pins and wires. Coding achieves similar results at smaller costs.
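
As an illustration of how coding can bound simultaneous output switching, the sketch below implements bus-invert coding, a classic scheme that spends one extra wire to guarantee at most half of the data lines toggle per transfer. It is shown only as a representative example; the abstract does not specify which code is actually used.

```python
"""
Illustration only: bus-invert coding, one classic way to bound the number of
simultaneously switching output drivers (and hence dI/dt noise) at the cost
of a single extra "invert" wire. This is a generic textbook scheme, not
necessarily the specific code the abstract refers to.
"""

def bus_invert_encode(prev_bits, data_bits):
    """Return (bits_to_drive, invert_flag) minimizing transitions vs. prev_bits."""
    n = len(data_bits)
    transitions = sum(p != d for p, d in zip(prev_bits, data_bits))
    if transitions > n // 2:
        # Sending the complement flips fewer wires; signal this on the invert line.
        return [1 - d for d in data_bits], 1
    return list(data_bits), 0

if __name__ == "__main__":
    prev = [0] * 8
    word = [1, 1, 1, 1, 1, 0, 1, 1]          # driving this directly would flip 7 of 8 wires
    driven, inv = bus_invert_encode(prev, word)
    print(driven, "invert =", inv)            # the complement flips only 1 wire (plus the invert line)
```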

Relevance:

10.00%

Publisher:

Abstract:

The dynamic power requirement of CMOS circuits is rapidly becoming a major concern in the design of personal information systems and large computers. In this work we present a number of new CMOS logic families, Charge Recovery Logic (CRL) as well as the much improved Split-Level Charge Recovery Logic (SCRL), within which the transfer of charge between the nodes occurs quasistatically. Operating quasistatically, these logic families have an energy dissipation that drops linearly with operating frequency, i.e., their power consumption drops quadratically with operating frequency as opposed to the linear drop of conventional CMOS. The circuit techniques in these new families rely on constructing an explicitly reversible pipelined logic gate, where the information necessary to recover the energy used to compute a value is provided by computing its logical inverse. Information necessary to uncompute the inverse is available from the subsequent inverse logic stage. We demonstrate the low energy operation of SCRL by presenting the results from the testing of the first fully quasistatic 8 x 8 multiplier chip (SCRL-1) employing SCRL circuit techniques.
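
The quoted frequency scaling follows from the standard adiabatic-charging argument (a textbook derivation, not one reproduced from the paper): ramping a node capacitance C through an effective resistance R over a ramp time T ≈ 1/f dissipates

```latex
E_{\text{ramp}} \;\approx\; \frac{RC}{T}\, C V^{2} \;\propto\; f,
\qquad
P \;=\; E_{\text{ramp}} \cdot f \;\propto\; f^{2},
```

whereas conventional CMOS dissipates E = (1/2) C V^2 per transition regardless of ramp time, so its power only falls linearly with f.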

Relevance:

10.00%

Publisher:

Abstract:

General-purpose computing devices allow us to (1) customize computation after fabrication and (2) conserve area by reusing expensive active circuitry for different functions in time. We define RP-space, a restricted domain of the general-purpose architectural space focussed on reconfigurable computing architectures. Two dominant features differentiate reconfigurable from special-purpose architectures and account for most of the area overhead associated with RP devices: (1) instructions which tell the device how to behave, and (2) flexible interconnect which supports task dependent dataflow between operations. We can characterize RP-space by the allocation and structure of these resources and compare the efficiencies of architectural points across broad application characteristics. Conventional FPGAs fall at one extreme end of this space and their efficiency ranges over two orders of magnitude across the space of application characteristics. Understanding RP-space and its consequences allows us to pick the best architecture for a task and to search for more robust design points in the space. Our DPGA, a fine-grained computing device which adds small, on-chip instruction memories to FPGAs is one such design point. For typical logic applications and finite-state machines, a DPGA can implement tasks in one-third the area of a traditional FPGA. TSFPGA, a variant of the DPGA which focuses on heavily time-switched interconnect, achieves circuit densities close to the DPGA, while reducing typical physical mapping times from hours to seconds. Rigid, fabrication-time organization of instruction resources significantly narrows the range of efficiency for conventional architectures. To avoid this performance brittleness, we developed MATRIX, the first architecture to defer the binding of instruction resources until run-time, allowing the application to organize resources according to its needs. Our focus MATRIX design point is based on an array of 8-bit ALU and register-file building blocks interconnected via a byte-wide network. With today's silicon, a single chip MATRIX array can deliver over 10 Gop/s (8-bit ops). On sample image processing tasks, we show that MATRIX yields 10-20x the computational density of conventional processors. Understanding the cost structure of RP-space helps us identify these intermediate architectural points and may provide useful insight more broadly in guiding our continual search for robust and efficient general-purpose computing structures.
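
The DPGA idea of adding small on-chip instruction memories can be pictured as a lookup table that stores several configurations and selects one per cycle by a broadcast context ID. The sketch below is my own conceptual model of that mechanism, not the actual DPGA microarchitecture.

```python
"""
Conceptual sketch of a multi-context lookup table (my illustration, not the
DPGA implementation): each LUT holds several configurations ("contexts") in a
small on-chip instruction memory, and a broadcast context ID selects which
configuration is active on a given cycle, so the same active circuitry is
reused for different functions in time.
"""

class MultiContextLUT:
    def __init__(self, contexts):
        # contexts: list of truth tables, each a dict mapping input tuples to 0/1
        self.contexts = contexts

    def evaluate(self, context_id, inputs):
        return self.contexts[context_id][tuple(inputs)]

if __name__ == "__main__":
    and2 = {(a, b): a & b for a in (0, 1) for b in (0, 1)}
    xor2 = {(a, b): a ^ b for a in (0, 1) for b in (0, 1)}
    lut = MultiContextLUT([and2, xor2])     # two instruction contexts
    print(lut.evaluate(0, (1, 1)))          # context 0 behaves as AND -> 1
    print(lut.evaluate(1, (1, 1)))          # context 1 behaves as XOR -> 0
```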

Relevance:

10.00%

Publisher:

Abstract:

The furious pace of Moore's Law is driving computer architecture into a realm where the speed of light is the dominant factor in system latencies. The number of clock cycles to span a chip is increasing, while the number of bits that can be accessed within a clock cycle is decreasing. Hence, it is becoming more difficult to hide latency. One alternative is to reduce latency by migrating threads and data, but the overhead of existing implementations has so far made migration impractical. I present an architecture, implementation, and mechanisms that reduce the overhead of migration to the point where migration is a viable supplement to other latency-hiding mechanisms, such as multithreading. The architecture is abstract and presents programmers with a simple, uniform, fine-grained multithreaded parallel programming model with implicit memory management. In other words, the spatial nature and implementation details (such as the number of processors) of a parallel machine are entirely hidden from the programmer. Compiler writers are encouraged to devise programming languages for the machine that guide a programmer to express their ideas in terms of objects, since objects exhibit an inherent physical locality of data and code. The machine implementation can then leverage this locality to automatically distribute data and threads across the physical machine by using a set of high-performance migration mechanisms. An implementation of this architecture could migrate a null thread in 66 cycles -- over a factor of 1000 improvement over previous work. Performance also scales well; the time required to move a typical thread is only 4 to 5 times that of a null thread. Data migration performance is similar and scales linearly with data block size. Since the performance of the migration mechanism is on par with that of an L2 cache, the implementation simulated in my work has no data caches and relies instead on multithreading and the migration mechanism to hide and reduce access latencies.
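
Using only the figures quoted in the abstract (66 cycles for a null thread, 4 to 5 times that for a typical thread, and linear scaling with data block size), a rough cost model looks like the sketch below; the fixed and per-word constants for data migration are placeholders, not numbers from the work.

```python
"""
Rough cost model built from the figures quoted in the abstract: 66 cycles to
migrate a null thread, 4-5x that for a typical thread, and data migration
time scaling linearly with block size. The data-migration constants below
are assumptions, not measured values.
"""

NULL_THREAD_CYCLES = 66            # quoted in the abstract
TYPICAL_THREAD_FACTOR = (4, 5)     # a typical thread costs 4-5x a null thread

def typical_thread_cycles():
    """Range of cycles to migrate a typical thread."""
    lo, hi = TYPICAL_THREAD_FACTOR
    return NULL_THREAD_CYCLES * lo, NULL_THREAD_CYCLES * hi

def data_migration_cycles(block_words, fixed_cycles=66, cycles_per_word=2):
    """Linear-in-block-size model; both constants are placeholder assumptions."""
    return fixed_cycles + cycles_per_word * block_words

if __name__ == "__main__":
    print("typical thread:", typical_thread_cycles(), "cycles")   # (264, 330)
    print("64-word block:", data_migration_cycles(64), "cycles")
```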

Relevance:

10.00%

Publisher:

Abstract:

One of the most prominent industrial applications of heat transfer science and engineering has been electronics thermal control. Driven by the relentless increase in spatial density of microelectronic devices, integrated circuit chip powers have risen by a factor of 100 over the past twenty years, with a somewhat smaller increase in heat flux. The traditional approaches using natural convection and forced-air cooling are becoming less viable as power levels increase. This paper provides a high-level overview of the thermal management problem from the perspective of a practitioner, as well as speculation on the prospects for electronics thermal engineering in years to come.

Relevance:

10.00%

Publisher:

Abstract:

The memory hierarchy is the main bottleneck in modern computer systems, as the gap between processor and memory speeds continues to grow. The situation in embedded systems is even worse. The memory hierarchy consumes a large amount of chip area and energy, which are precious resources in embedded systems. Moreover, embedded systems have multiple design objectives, such as performance, energy consumption, and area. Customizing the memory hierarchy for specific applications is a very important way to take full advantage of limited resources and maximize performance. However, traditional custom memory hierarchy design methodologies are phase-ordered: they separate application optimization from memory hierarchy architecture design, which tends to result in locally optimal solutions. In traditional hardware-software co-design methodologies, much of the work has focused on utilizing reconfigurable logic to partition the computation, whereas utilizing reconfigurable logic in the memory hierarchy design itself is seldom addressed. In this paper, we propose a new framework for designing the memory hierarchy of embedded systems. The framework takes advantage of flexible reconfigurable logic to customize the memory hierarchy for specific applications, and it combines application optimization and memory hierarchy design to obtain a globally optimal solution. Using the framework, we performed a case study to design a new software-controlled instruction memory that showed promising potential.
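
The joint (rather than phase-ordered) exploration the framework argues for can be pictured as searching over pairs of application variant and memory configuration at once. The sketch below is my own toy illustration of that loop, not the paper's algorithm; the cost model is left as a stub.

```python
"""
Toy sketch of joint design-space exploration (my illustration, not the
paper's framework): instead of fixing the application first and the memory
hierarchy second, evaluate (application variant, memory configuration) pairs
together and keep the best pair under an area budget. The evaluate() stub is
a placeholder for a simulator or analytical cost model.
"""
from itertools import product

def evaluate(app_variant, mem_config):
    """Placeholder cost model: return (energy, delay, area) for a pair."""
    raise NotImplementedError("plug in a simulator or analytical model here")

def explore(app_variants, mem_configs, area_budget):
    """Return the (app, mem) pair with the lowest energy-delay product that fits."""
    best, best_edp = None, float("inf")
    for app, mem in product(app_variants, mem_configs):
        energy, delay, area = evaluate(app, mem)
        if area <= area_budget and energy * delay < best_edp:
            best, best_edp = (app, mem), energy * delay
    return best
```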

Relevance:

10.00%

Publisher:

Abstract:

During the last decade, large and costly instruments have been replaced by systems based on microfluidic devices. Microfluidic devices hold the promise of combining a small analytical laboratory onto a chip-sized substrate to identify, immobilize, separate, and purify cells, biomolecules, toxins, and other chemical and biological materials. Compared to conventional instruments, microfluidic devices can perform these tasks faster, with higher sensitivity and efficiency, and at greater affordability. Dielectrophoresis is one of the enabling technologies for these devices. It exploits differences in particle dielectric properties to allow the manipulation and characterization of particles suspended in a fluidic medium. Particles can be trapped or moved between regions of high and low electric field due to polarization effects in non-uniform electric fields. By varying the applied electric field frequency, the magnitude and direction of the dielectrophoretic force on the particle can be controlled. Dielectrophoresis has been successfully demonstrated in the separation, transportation, trapping, and sorting of various biological particles.
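
The frequency dependence described above is captured by the standard dipole-approximation relations, where the sign of the real part of the Clausius-Mossotti factor K(w) decides between positive and negative dielectrophoresis. The sketch below evaluates these textbook formulas for illustrative parameter values; it is not tied to any specific device in the abstract.

```python
"""
Minimal sketch of the standard dipole-approximation DEP relations (textbook
formulas, not taken from this abstract): the time-averaged force on a
spherical particle is F = 2*pi*eps_m*r^3 * Re[K(w)] * grad(|E_rms|^2), where
the Clausius-Mossotti factor K(w) sets the sign (positive vs. negative DEP)
as the field frequency is varied. Parameter values in __main__ are illustrative.
"""
import math

EPS_0 = 8.854e-12  # vacuum permittivity [F/m]

def clausius_mossotti(freq_hz, eps_p, sigma_p, eps_m, sigma_m):
    """Complex K(w) from complex permittivities eps* = eps - j*sigma/w."""
    w = 2 * math.pi * freq_hz
    ep = eps_p * EPS_0 - 1j * sigma_p / w
    em = eps_m * EPS_0 - 1j * sigma_m / w
    return (ep - em) / (ep + 2 * em)

def dep_force(freq_hz, radius_m, grad_E2, eps_p, sigma_p, eps_m, sigma_m):
    """Time-averaged DEP force [N] in the dipole approximation."""
    K = clausius_mossotti(freq_hz, eps_p, sigma_p, eps_m, sigma_m)
    return 2 * math.pi * eps_m * EPS_0 * radius_m**3 * K.real * grad_E2

if __name__ == "__main__":
    # Illustrative particle/medium parameters; Re[K] shrinks and changes sign
    # as the frequency rises, flipping the force from positive to negative DEP.
    for f in (1e5, 1e8, 1e9):
        K = clausius_mossotti(f, eps_p=60, sigma_p=0.5, eps_m=78, sigma_m=0.01)
        print(f"{f:8.0e} Hz  Re[K] = {K.real:+.2f}")
```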

Relevance:

10.00%

Publisher:

Abstract:

Fueled by ever-growing genomic information and rapid developments in proteomics, the large-scale analysis of proteins and the mapping of their functional roles has become one of the most important disciplines for characterizing complex cell function. To build functional linkages between biomolecules and to provide insight into the mechanisms of biological processes, the last decade witnessed the exploration of combinatorial and chip technologies for the detection of biomolecules in a high-throughput, spatially addressable fashion. Among the various techniques developed, protein chip technology has advanced rapidly. Recently we demonstrated a new platform called the "spatially addressable protein array" (SAPA) to profile ligand-receptor interactions. To optimize the platform, the present study investigated various parameters, such as surface chemistry and the role of additives, for achieving high-density and high-throughput detection with minimal nonspecific protein adsorption. In summary, the present poster will address some of the critical challenges in protein microarray technology and the process of fine-tuning required to achieve an optimal system for solving real biological problems.

Relevance:

10.00%

Publisher:

Abstract:

Immersed in an armed conflict and guided by a political model that seeks to harness the mining boom for Colombia's development, illegal dynamics have intensified in which Illegal Armed Actors (A.A.I.), acting as 'stationary bandits', have been adapting to new market dynamics in which profit, profitability, and financing are their central objective. In the department of Antioquia, this situation dates back to the formation of its regions, as in the case of Bajo Cauca and the Nordeste Antioqueño. What is new in the armed conflict, however, and directly related to gold mining, is the strong possibility that the A.A.I. see in this activity their main source of financing, as a consequence of two important facts: (1) the success of the fight against drug trafficking and, consequently, against illicit crops; and (2) the high price of gold on the international market, estimated to reach between USD 2,000 and 2,107 per ounce in the coming years. Along these lines, the reader will find how the illegal armed actors act as stationary bandits who, engaging in 'criminal mining', influence municipal fiscal policy through para-taxation or 'protection taxes', the capture of royalties, and money laundering, with the aim of financing their criminal activities, negatively affecting the tax policy of municipal governments, which is characterized by its weak management capacity.

Relevance:

10.00%

Publisher:

Abstract:

The objective of this monograph is to examine the transformation of NATO's security doctrine in the post-Cold War era and its effects on the intervention in the Republic of Macedonia. The disintegration of the Soviet bloc entailed a change in the definition of the threats to the survival of the member countries of the Atlantic Alliance. From the 1990s onward, conflicts of an inter-ethnic nature became part of the risks threatening the security of the Allies and the stability of the Euro-Atlantic area. For this reason, NATO intervened in those states where inter-ethnic armed confrontations prevailed, for example in Macedonia. There, the Atlantic Alliance carried out crisis-management operations to counter the threat. The phenomenon studied in this research is analyzed through Subaltern Realism and Collective Security Theory.

Relevance:

10.00%

Publisher:

Abstract:

The concept of effectiveness in inter-organizational networks has been little investigated despite its great importance for the development and sustainability of a network. Understanding this concept is very important, since when we speak of a network we refer to a group of more than three organizations that work together to achieve a collective objective that benefits each member of the network. This shows the importance of evaluating and analyzing the 'inter-organizational network' phenomenon in greater detail, in order to determine which structures, forms of governance, relationships between members, and other factors influence the effectiveness and durability of the inter-organizational network. This research is carried out in order to propose an approach to the concept of measuring effectiveness in inter-organizational networks. The work focuses on the gathering of information and on documentary research, carried out in phases to give the reader greater clarity and understanding of what a network, an inter-organizational network, and effectiveness are; finally, effectiveness in an inter-organizational network is studied.

Relevance:

10.00%

Publisher:

Abstract:

The purpose of this case study is to reveal the political interests behind Burkina Faso's mediation in the Ivory Coast conflict in 2007. Along these lines, this thesis analyzes how Burkina Faso's internal and external situation influenced its decision to mediate in the Ivorian conflict. To this end, the research draws on the concepts of National Interest and Political Power from the Political Realism theory of International Relations developed by Hans Morgenthau, and on the concept of Regime Security put forward by John Clark. In addition to the theoretical sources mentioned above, articles and publications of various kinds on the phenomenon under study were used.

Relevance:

10.00%

Publisher:

Abstract:

Introduction: The EQ-5D-Y proxy is a generic questionnaire, easy to understand and administer, that evaluates different dimensions of perceived health. The aim of this study was to describe, by self-report, the health-related quality of life (HRQoL) of a school-age population in Bogotá, Colombia, belonging to the FUPRECOL study. Methods: A descriptive, cross-sectional study of 3,245 children and 3,354 adolescents, aged 9 to 17.9 years, from 24 public schools in Bogotá, Colombia. The version of the children's HRQoL instrument EQ-5D-Y proxy validated in Spanish by Olivares et al. (2009) was administered in self-completed form. The data were analyzed using measures of central tendency, and the values observed in Colombia were compared with international studies. Results: Of the population evaluated, 58.3% (n=3,848) were female. In general, high HRQoL scores were observed in children and adolescents of both sexes. When comparing by gender, the EQ-5D-Y proxy dimensions "feeling sad/worried or unhappy" and "having pain/discomfort" showed the highest response frequency in the female group. When comparing the results of this study by age group with international studies of children and adolescents, the EQ-5D-Y proxy scores were higher than those reported in South Africa, Germany, and Italy. Conclusion: HRQoL values by age and sex are presented that can be used in the evaluation of perceived health in the school setting. It is necessary to evaluate the psychometric properties of the EQ-5D-Y proxy in the Colombian population.