982 results for setup crossover


Relevance: 10.00%

Publisher:

Abstract:

The process of developing software that takes advantage of multiple processors is commonly referred to as parallel programming. For various reasons, this process is much harder than the sequential case. For decades, parallel programming has been a problem for a small niche only: engineers parallelizing mostly numerical applications in High Performance Computing. This has changed with the advent of multi-core processors in mainstream computer architectures. Parallel programming is now a problem for a much larger group of developers. The main objective of this thesis was to find ways to make parallel programming easier for them. Different aims were identified in order to reach the objective: research the state of the art of parallel programming today, improve the education of software developers on the topic, and provide programmers with powerful abstractions to make their work easier. To reach these aims, several key steps were taken. To start with, a survey was conducted among parallel programmers to find out about the state of the art. More than 250 people participated, yielding results about the parallel programming systems and languages in use, as well as about common problems with these systems. Furthermore, a study was conducted in university classes on parallel programming. It resulted in a list of frequently made mistakes that were analyzed and used to create a programmers' checklist to help avoid them in the future. For programmers' education, an online resource called the Parawiki was set up to collect experiences and knowledge in the field of parallel programming. Another key step in this direction was the creation of the Thinking Parallel weblog, where more than 50,000 readers to date have read essays on the topic. For the third aim (powerful abstractions), it was decided to concentrate on one parallel programming system: OpenMP. Its ease of use and high level of abstraction were the most important reasons for this decision.
Two different research directions were pursued. The first one resulted in a parallel library called AthenaMP. It contains so-called generic components, derived from design patterns for parallel programming. These include functionality to enhance the locks provided by OpenMP, to perform operations on large amounts of data (data-parallel programming), and to enable the implementation of irregular algorithms using task pools. AthenaMP itself serves a triple role: the components are well-documented and can be used directly in programs, it enables developers to study the source code and learn from it, and it is possible for compiler writers to use it as a testing ground for their OpenMP compilers. The second research direction was targeted at changing the OpenMP specification to make the system more powerful. The main contributions here were a proposal to enable thread-cancellation and a proposal to avoid busy waiting. Both were implemented in a research compiler, shown to be useful in example applications, and proposed to the OpenMP Language Committee.
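The AthenaMP components themselves are not reproduced here, but the loop-level abstraction that makes OpenMP attractive can be sketched in a few lines of C. This is a generic illustration, not code from the thesis; `sum_to` is a name invented for the example, and the `reduction` clause is standard OpenMP.

```c
#include <assert.h>

/* Sum of 1..n with an OpenMP reduction. The pragma is a hint:
 * compiled without -fopenmp it is ignored and the loop runs
 * serially, producing the same result. */
long sum_to(long n) {
    long total = 0;
    #pragma omp parallel for reduction(+:total)
    for (long i = 1; i <= n; i++)
        total += i;
    return total;
}
```

Compiled with `-fopenmp` the iterations are distributed across threads; without it the pragma is ignored and the result is identical, which illustrates why OpenMP lends itself to incremental parallelization.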

Relevance: 10.00%

Publisher:

Abstract:

Distributed systems are one of the most vital components of the economy. The most prominent example is probably the internet, a constituent element of our knowledge society. During the recent years, the number of novel network types has steadily increased. Amongst others, sensor networks, distributed systems composed of tiny computational devices with scarce resources, have emerged. The further development and heterogeneous connection of such systems imposes new requirements on the software development process. Mobile and wireless networks, for instance, have to organize themselves autonomously and must be able to react to changes in the environment and to failing nodes alike. Researching new approaches for the design of distributed algorithms may lead to methods with which these requirements can be met efficiently. In this thesis, one such method is developed, tested, and discussed in respect of its practical utility. Our new design approach for distributed algorithms is based on Genetic Programming, a member of the family of evolutionary algorithms. Evolutionary algorithms are metaheuristic optimization methods which copy principles from natural evolution. They use a population of solution candidates which they try to refine step by step in order to attain optimal values for predefined objective functions. The synthesis of an algorithm with our approach starts with an analysis step in which the wanted global behavior of the distributed system is specified. From this specification, objective functions are derived which steer a Genetic Programming process where the solution candidates are distributed programs. The objective functions rate how close these programs approximate the goal behavior in multiple randomized network simulations. The evolutionary process step by step selects the most promising solution candidates and modifies and combines them with mutation and crossover operators. 
This way, a description of the global behavior of a distributed system is translated automatically to programs which, if executed locally on the nodes of the system, exhibit this behavior. In our work, we test six different ways for representing distributed programs, comprising adaptations and extensions of well-known Genetic Programming methods (SGP, eSGP, and LGP), one bio-inspired approach (Fraglets), and two new program representations called Rule-based Genetic Programming (RBGP, eRBGP) designed by us. We breed programs in these representations for three well-known example problems in distributed systems: election algorithms, the distributed mutual exclusion at a critical section, and the distributed computation of the greatest common divisor of a set of numbers. Synthesizing distributed programs the evolutionary way does not necessarily lead to the envisaged results. In a detailed analysis, we discuss the problematic features which make this form of Genetic Programming particularly hard. The two Rule-based Genetic Programming approaches have been developed especially in order to mitigate these difficulties. In our experiments, at least one of them (eRBGP) turned out to be a very efficient approach and, in most cases, was superior to the other representations.
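The evolutionary loop described above (selection, crossover, mutation, step-by-step refinement of a candidate population) can be sketched generically. The toy below evolves fixed-length binary "programs" toward an all-ones target; it illustrates only the genetic-operator machinery, not the thesis's RBGP system, and all names and parameter values are invented for the example.

```c
#include <assert.h>
#include <stdlib.h>

#define LEN  8    /* genes per individual */
#define POP  32   /* population size      */
#define GENS 500  /* generations          */

/* Fitness: number of genes matching the all-ones target. */
static int fitness(const int *g) {
    int f = 0;
    for (int i = 0; i < LEN; i++) f += (g[i] == 1);
    return f;
}

/* Tournament selection: the fitter of two random picks. */
static int pick(int pop[POP][LEN]) {
    int a = rand() % POP, b = rand() % POP;
    return fitness(pop[a]) >= fitness(pop[b]) ? a : b;
}

/* One evolutionary run; returns the best fitness reached. */
int evolve(unsigned seed) {
    int pop[POP][LEN], next[POP][LEN];
    srand(seed);
    for (int i = 0; i < POP; i++)            /* random initial population */
        for (int j = 0; j < LEN; j++)
            pop[i][j] = rand() % 2;
    for (int gen = 0; gen < GENS; gen++) {
        /* Elitism: the current best survives unchanged in slot 0. */
        int best = 0;
        for (int i = 1; i < POP; i++)
            if (fitness(pop[i]) > fitness(pop[best])) best = i;
        for (int j = 0; j < LEN; j++) next[0][j] = pop[best][j];
        for (int i = 1; i < POP; i++) {
            int pa = pick(pop), pb = pick(pop);
            int cut = rand() % LEN;          /* one-point crossover */
            for (int j = 0; j < LEN; j++)
                next[i][j] = (j < cut) ? pop[pa][j] : pop[pb][j];
            if (rand() % 100 < 20)           /* point mutation, 20% */
                next[i][rand() % LEN] ^= 1;
        }
        for (int i = 0; i < POP; i++)
            for (int j = 0; j < LEN; j++)
                pop[i][j] = next[i][j];
    }
    int best = 0;
    for (int i = 1; i < POP; i++)
        if (fitness(pop[i]) > fitness(pop[best])) best = i;
    return fitness(pop[best]);
}
```

In the thesis's setting the fitness function is replaced by objective functions computed from randomized network simulations, and the genomes are distributed programs rather than bit strings, but the selection/crossover/mutation cycle is the same.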

Relevance: 10.00%

Publisher:

Abstract:

Optical spectroscopy is a very important measurement technique with high potential for numerous applications in industry and science. Low-cost, miniaturized spectrometers, for example, are needed especially for modern sensor systems in "smart personal environments", which are used above all in energy technology, metrology, safety and security, IT, and medical technology. Among all miniaturized spectrometers, one of the most attractive miniaturization approaches is the Fabry-Pérot filter. In this approach, the combination of a Fabry-Pérot (FP) filter array and a detector array can function as a microspectrometer. Each detector corresponds to a single filter and detects the very narrow band of wavelengths transmitted by that filter. An array of FP filters is used in which each filter selects a different spectral filter line. The spectral position of each wavelength band is defined by the individual cavity height of the filter. The arrays were designed with filter sizes limited only by the array dimensions of the individual detectors. However, existing Fabry-Pérot filter microspectrometers require complicated fabrication steps for structuring the 3D filter cavities with different heights, which are not cost-efficient for industrial production. To reduce the cost while retaining the outstanding advantages of the FP filter structure, a new method for fabricating miniaturized FP filters using nanoimprint technology is developed and presented. In this case, the multiple cavity-fabrication steps are replaced by a single step that exploits the high vertical resolution of 3D nanoimprint technology. Because nanoimprint technology is used, the FP-filter-based miniaturized spectrometer is called a nanospectrometer.
A static nanospectrometer consists of a static FP filter array on a detector array (see Fig. 1). Each FP filter in the array consists of a lower distributed Bragg reflector (DBR), a resonance cavity, and an upper DBR. The upper and lower DBRs are identical and consist of periodically alternating thin dielectric layers of materials with high and low refractive indices. The optical thickness of each dielectric thin-film layer contained in the DBR corresponds to a quarter of the design wavelength. Each FP filter is assigned to a defined area of the detector array. This area may consist of individual detector elements or groups of them. The lateral geometries of the cavities are therefore built to match the corresponding detector. The lateral and vertical dimensions of the cavities are defined precisely by 3D nanoimprint technology. The cavities differ by only a few nanometers in the vertical direction. The precision of the cavity in the vertical direction is a crucial factor for the accuracy of the spectral position and transmittance of the filter's transmission line.
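The resonance behind the cavity-height encoding can be summarized in two textbook Fabry-Pérot relations (standard formulas; the symbols are chosen here and not taken from the thesis):

```latex
% Quarter-wave condition for each DBR layer (design wavelength \lambda_0):
n_H d_H = n_L d_L = \frac{\lambda_0}{4}

% Transmission peak of a cavity of refractive index n_c and height L (order m):
\lambda_m = \frac{2\, n_c L}{m}
```

Because $\lambda_m$ scales linearly with the cavity height $L$, nanometer-scale height differences imprinted in a single step shift each filter's passband, which is exactly what the nanoimprint approach exploits.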

Relevance: 10.00%

Publisher:

Abstract:

Extensive grassland biomass for bioenergy production has long been a subject of scientific research. The possibility of combining nature conservation goals with profitable management while reducing competition with food production has created strong interest in this topic. However, the botanical composition plays a key role in the solid fuel quality of grassland biomass and affects the combustion process by potentially causing corrosion, emission and slagging problems. On the other hand, botanical composition affects anaerobic digestibility and thereby the biogas potential. In this thesis, aboveground biomass from the Jena-Experiment plots was harvested in 2008 and 2009 and analysed for the most relevant chemical constituents affecting fuel quality and anaerobic digestibility. Regarding combustion, the following parameters were the main focus: higher heating value (HHV), gross energy yield (GE), ash content, ash softening temperature (AST), and K, Ca, Mg, N, Cl and S content. For biogas production the following parameters were investigated: substrate-specific methane yield (CH4 sub), area-specific methane yield (CH4 area), crude fibre (CF), crude protein (CP), crude lipid (CL) and nitrogen-free extract (NfE). Furthermore, an improvement of the fuel quality was investigated by applying the Integrated generation of solid Fuel and Biogas from Biomass (IFBB) procedure. Through the specific setup of the Jena-Experiment it was possible to outline the changes of these parameters along two diversity gradients: (i) species richness (SR; 1 to 60 species) and (ii) functional group (grasses, legumes, small herbs and tall herbs) presence.
This was a novel approach to investigating the bioenergy characteristics of extensive grassland biomass and gave detailed insight into sward-composition-bioenergy relations, such as: (i) the most relevant SR effect was the increase of energy yield for both combustion (annual GE increased by 26% from SR8→16 and by 65% from SR8→60) and anaerobic digestion (annual CH4 area increased by 22% from SR8→16 and by 49% from SR8→60) through a strong interaction of SR with biomass yield; (ii) legumes play a key role in the utilization of grassland biomass for energy production, as they increase the energy content of the substrate (HHV and CH4 sub) and the energy yield (GE and CH4 area); (iii) combustion is the conversion technique that yields the highest energy output but requires an improvement of the solid fuel quality in order to reduce the risk of corrosion-, emission- and slagging-related problems. This was achieved by applying the IFBB procedure, with reductions in ash (by 23%), N (28%), K (85%), Cl (56%) and S (59%) and equal concentration levels along the SR gradient.

Relevance: 10.00%

Publisher:

Abstract:

In this thesis, an optical gain measurement setup based on the variable stripe length method is designed, implemented and improved. The setup is characterized using inorganic and organic samples. The optical gain of spiro-quaterphenyl is calculated and compared with measurements from the setup. Films of various thicknesses of spiro-quaterphenyl, methoxy-spiro-quaterphenyl and phenoxy-spiro-quaterphenyl are deposited by a vacuum vapor deposition technique, forming asymmetric slab waveguides. The optical properties, laser emission threshold, optical gain and loss coefficient of these films are measured. Additionally, photodegradation during the pumping process is investigated.

Relevance: 10.00%

Publisher:

Abstract:

Laser-induced plasma spectroscopy (LIPS) is a spectrochemical elemental analysis technique for determining the atomic composition of an arbitrary sample. The analysis requires no special sample preparation and can be carried out under atmospheric conditions on samples in any state of matter. Femtosecond laser pulses offer the advantages of precise ablation with little thermal damage as well as high reproducibility. This makes fs-LIPS a promising tool for the microanalysis of technical samples, in particular for studying their fatigue behavior. Of particular interest is how initiated microcracks propagate within the material-specific structure. In this work, a fast and easy-to-use 3D scanning imaging method was therefore to be developed for investigating crack propagation in TiAl, a new class of alloys. To this end, fs-LIPS (30 fs, 785 nm) was combined with a modified microscope setup (objective: 50x/NA 0.5) that enables precise, automated sample positioning. Spectrochemical sensitivity and spatial resolution were investigated in energy-dependent single- and multi-pulse experiments. Ten laser pulses per position with a pulse energy of 100 nJ each led to the best possible compromise in TiAl between a high S/N ratio of 10:1 and small hole structures with inner diameters of 1.4 µm. The lateral resolution decisive for the method, i.e. the minimal hole spacing at constant LIPS signal, is 2 µm with the above parameters and is the highest resolution known to date for a far-field micro/mapping analysis based on fs-LIPS. Fs-LIPS scans of test structures and of microcracks in TiAl demonstrate a spectrochemical sensitivity of 3%. Depth scans achieve an axial resolution of 1 µm with the same parameters.
To increase the spectrochemical sensitivity of fs-LIPS and to gain a better understanding of the physical processes during laser ablation, pump-probe experiments were carried out to investigate the extent to which fs double pulses influence the laser-induced ablation and the plasma emission. For this purpose, pulse separations from 100 fs to 2 ns were realized in a Mach-Zehnder interferometer, the total energy and the intensity ratio of the two pulses were varied, and the influence of the material parameters was studied. Both the LIPS signal and the hole structures depend on the delay time. These dependencies were divided into four regimes and assigned to the physical processes during laser ablation: thermalization of the electron system for pulse separations below 1 ps, melting processes between 1 and 10 ps, the onset of ablation after several tens of ps, and the expansion of the plasma plume after more than 100 ps. The LIPS signal is efficiently enhanced and reaches its maximum at 800 ps. The hole diameters change little as a function of the pulse separation compared with the depth. The total ablation rate varies by at most 50%, while the LIPS signal multiplies: typically threefold for Ti and TiAl, tenfold for Al. The measured transients show high reproducibility, but hardly any energy- or material-specific dependence. Based on these results, a targeted optimization of the double-pulse LIPS parameters was carried out on Al: at a pulse separation of 800 ps and a total energy of 65 nJ (four times above the ablation threshold), a 40-fold signal enhancement with lower noise was achieved. The hole diameters increased by 44% to (650±150) nm, and the hole depth doubled to (100±15) nm. It was thus possible to increase the spectrochemical sensitivity of fs-LIPS while maintaining the high spatial resolution.

Relevance: 10.00%

Publisher:

Abstract:

A novel process based on the principle of layered photolithography has been proposed and tested for making true three-dimensional micro-structures. An experimental setup was designed and built for experiments on this micro-fabrication process. An ultraviolet (UV) excimer laser at a wavelength of 248 nm was used as the light source, and a single photo-mask carrying a series of two-dimensional (2D) patterns sliced from a three-dimensional (3D) micro-part was employed for the photolithography process. The experiments covered the solidification of liquid photopolymer from a single layer to multiple layers. The single-layer photolithography experiments showed that certain photopolymers are suitable for 3D micro-fabrication, and that solid layers with sharp shapes can be formed from the identified liquid polymer. Using a unique alignment technique, multilayer photolithography was successfully realized for a micro-gear with 60-micron features. Electroforming was also conducted to convert the photopolymer master into a metal cavity of the micro-gear, which proved that the process is feasible for micro-molding.

Relevance: 10.00%

Publisher:

Abstract:

When underwater vehicles navigate close to the ocean floor, computer vision techniques can be applied to obtain motion estimates. A complete system to create visual mosaics of the seabed is described in this paper. Unfortunately, the accuracy of the constructed mosaic is difficult to evaluate. The use of a laboratory setup to obtain an accurate error measurement is proposed. The system consists of a robot arm carrying a downward-looking camera. A pattern formed by a white background and a matrix of black dots uniformly distributed over the surveyed scene is used to find the exact image registration parameters. When the robot executes a trajectory (simulating the motion of a submersible), an image sequence is acquired by the camera. The estimated motion computed from the robot's encoders is refined by detecting, to subpixel accuracy, the black dots of the image sequence and computing the 2D projective transform which relates two consecutive images. The pattern is then substituted by a poster of the sea floor and the trajectory is executed again, acquiring the image sequence used to test the accuracy of the mosaicking system.
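The "2D projective transform which relates two consecutive images" is a homography; in homogeneous coordinates (standard formulation, notation chosen here) a point $(x, y)$ in one image maps to $(x', y')$ in the next as:

```latex
\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} \sim
\begin{pmatrix}
  h_{11} & h_{12} & h_{13} \\
  h_{21} & h_{22} & h_{23} \\
  h_{31} & h_{32} & 1
\end{pmatrix}
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix},
\qquad
x' = \frac{h_{11}x + h_{12}y + h_{13}}{h_{31}x + h_{32}y + 1}, \quad
y' = \frac{h_{21}x + h_{22}y + h_{23}}{h_{31}x + h_{32}y + 1}
```

Each black dot detected to subpixel accuracy contributes one point correspondence; with eight unknowns, four non-collinear correspondences suffice in principle, and the dense dot grid overdetermines the estimate, which is what makes it usable as ground truth.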

Relevance: 10.00%

Publisher:

Abstract:

In this paper we present a novel structure from motion (SfM) approach able to infer 3D deformable models from uncalibrated stereo images. Using a stereo setup dramatically improves the 3D model estimation when the observed 3D shape is mostly deforming without undergoing strong rigid motion. Our approach first calibrates the stereo system automatically and then computes a single metric rigid structure for each frame. Afterwards, these 3D shapes are aligned to a reference view using a RANSAC method in order to compute the mean shape of the object and to select the subset of points on the object which have remained rigid throughout the sequence without deforming. The selected rigid points are then used to compute frame-wise shape registration and to extract the motion parameters robustly from frame to frame. Finally, all this information is used in a global optimization stage with bundle adjustment, which refines the frame-wise initial solution and also recovers the non-rigid 3D model. We show results on synthetic and real data that prove the performance of the proposed method even when there is no rigid motion in the original sequence.

Relevance: 10.00%

Publisher:

Abstract:

Femoroacetabular impingement syndrome is a recently described entity secondary to the "mismatch" of the hip joint caused by an alteration in the morphology of the femoral head or the acetabulum, which can lead to osteoarthritis at an early age. The purpose of this study is to describe the most frequent clinical signs and the imaging findings of femoroacetabular impingement syndrome. Methods: a retrospective descriptive study was carried out on the frequency of the clinical manifestations of femoroacetabular impingement syndrome and the findings on magnetic resonance arthrography between January 2008 and June 2009. Thirty-two patients were selected at the institution, and their clinical manifestations, physical examination, and magnetic resonance arthrography images were evaluated. Results: all patients presented with groin pain at the time of consultation, with a positive impingement test in all of them and the C sign in 90%. The most frequent subtype was pincer impingement (46.6%), followed by mixed impingement (39.3%). The crossover sign was present in 100% of the patients with acetabular retroversion (12). Functional disability scored WOMAC 48.44 ± 14.79 (95% CI 43.1-53.77), never exceeding 50, and pain averaged 11/20. Discussion: magnetic resonance arthrography is the examination of choice, and its findings help explain the clinical manifestations. The alpha angle and femoral version were the most significant signs; these findings are comparable to those obtained in studies in which most of the population consisted of middle-aged women.

Relevance: 10.00%

Publisher:

Abstract:

This paper deals with the problem of identification and semiactive control of smart structures subject to unknown external disturbances such as earthquakes, wind, etc. The experimental setup used is a 6-story test structure equipped with shear-mode semiactive magnetorheological actuators installed in the Washington University Structural Control and Earthquake Engineering Laboratory (WUSCEEL). The experimental results obtained have verified the effectiveness of the proposed control algorithms.

Relevance: 10.00%

Publisher:

Abstract:

This paper deals with the problem of semiactive vibration control of civil engineering structures subject to unknown external disturbances (for example, earthquakes, winds, etc.). Two kinds of semiactive controllers are proposed based on the backstepping control technique. The experimental setup used is a 6-story test structure equipped with shear-mode semiactive magnetorheological dampers installed in the Washington University Structural Control and Earthquake Engineering Laboratory (WUSCEEL). The experimental results obtained have verified the effectiveness of the proposed control algorithms.

Relevance: 10.00%

Publisher:

Abstract:

Objectives: To determine whether interdialytic weight gain differs between patients treated with a dialysate flow (Qd) of 400 mL/min and 500 mL/min. Design: A randomized, double-blind, crossover intervention study was carried out in patients with chronic kidney disease on hemodialysis to determine differences in interdialytic weight gain between patients treated with a dialysate flow (Qd) of 400 mL/min and 500 mL/min. Patients: Data from 46 patients on chronic hemodialysis with a Qd of 400 mL/min and 45 with a Qd of 500 mL/min were analyzed. Analysis: Hypothesis testing for differences in interdialytic weight gain and the other variables between the groups was performed with the paired-samples t-test. Pearson's coefficient was calculated for the correlation analysis. Results: There was no significant difference in interdialytic weight gain with a Qd of 400 mL/min versus 500 mL/min (2.37 ± 0.7 vs 2.41 ± 0.6, p = 0.41), nor in Kt/V (1.57 ± 0.25 vs 1.59 ± 0.23, p = 0.45), potassium (4.9 ± 1.1 vs 5.1 ± 1.0, p = 0.45), phosphorus (4.5 ± 1.2 vs 4.4 ± 1.2, p = 0.56), or hemoglobin (11.3 ± 1.8 vs 11.3 ± 1.6, p = 0.96). Conclusions: In patients weighing ≤ 65 kg, the use of a Qd of 400 mL/min is not associated with lower interdialytic weight gain. There is no difference in dialysis efficiency, which suggests that it is a safe intervention in the short term.
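The paired-samples t-test used here compares each patient with themselves across the two dialysate flows; the standard statistic (restated for reference, notation chosen here) is:

```latex
t = \frac{\bar{d}}{s_d / \sqrt{n}},
\qquad
d_i = x_i^{(400)} - x_i^{(500)},
```

where $\bar{d}$ is the mean within-patient difference, $s_d$ its standard deviation, and the test has $n - 1$ degrees of freedom. Pairing removes between-patient variability, which is why a crossover design can detect small differences with modest sample sizes.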

Relevance: 10.00%

Publisher:

Abstract:

Introduction: air pollution affects not only the respiratory system but also the cardiovascular system. The objective of this study is to generate evidence establishing an association between acute myocardial infarction and the ambient PM10 concentration, as a preliminary study for a group of patients in Bogotá. Methods: the association between particulate matter concentration (in this case PM10, measured at the station closest to the location reported by the patient) and acute myocardial infarction was established using a case-crossover design. We used information from the clinical records of patients with acute myocardial infarction admitted to the emergency department of the FSFB, together with the PM10 concentrations measured at the station closest to the place where the symptoms of acute coronary syndrome began, as reported by the patient. Results: the association between PM10 concentration and the diagnosis of acute myocardial infarction was statistically significant for three control windows: 2 hours, 24 hours, and 48 hours before the event. Discussion: this study suggests that high ambient concentrations of particulate matter are a risk factor for acute myocardial infarction, especially in people with underlying coronary disease. This research demonstrates the importance of actions that reduce the city's pollution and thereby protect people's health.
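In a case-crossover design each patient serves as their own control: exposure in a hazard window just before the infarction is compared with exposure in earlier control windows (here 2, 24, and 48 hours before the event). With a binary exposure and 1:1 matching of hazard to control windows, the odds ratio reduces to the ratio of discordant pairs (a standard result for matched-pair designs; notation chosen here):

```latex
\widehat{OR} = \frac{n_{10}}{n_{01}}
```

where $n_{10}$ counts patients exposed in the hazard window but not the control window and $n_{01}$ the reverse; self-matching removes confounding by stable patient characteristics such as age, sex, and chronic disease.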

Relevance: 10.00%

Publisher:

Abstract:

The management of brachial plexus injuries has been widely discussed and investigated, especially for closed traction injuries. Open injuries with vascular compromise often threaten the viability of the limb or the patient's life; they are difficult to manage, with distinct priorities and procedure timing that varies with the findings, and functional outcomes are poor because the nerve injury is diagnosed late. Questions arise from both the vascular standpoint and that of the nerve injury. A systematic review of the literature was performed, identifying important points regarding exploration and the timing of nerve repair, but without establishing clear functional outcomes given the methodological deficiencies of the studies found.