960 results for Maximum Degree Proximity algorithm (MAX-DPA)


Relevance:

30.00%

Publisher:

Abstract:

The investigation of vortex-induced vibration of very short cylinders with two degrees of freedom has drawn the attention of a large number of researchers. Some investigations of this problem are carried out in order to better understand the physics involved in vortex-induced motions of floating bodies such as offshore platforms. In this paper, experiments were carried out in a recirculating water channel over a range of Reynolds numbers above 6000. Results for cylinders with two degrees of freedom, three different small mass ratios (m⁎ = 1.00, 2.62 and 4.36) and very low aspect ratios (0.3 ≤ L/D ≤ 2.0) are presented and discussed in depth. Contrary to what would be expected for cylinders with very low aspect ratio, the results showed large motions in the transverse direction, with maximum amplitudes around 1.5 diameters for cylinders with L/D = 2.0, although the amplitudes decrease as the aspect ratio is reduced. Moreover, the response amplitudes reached high values, around 0.4 diameters, in the in-line direction. In fact, the large transverse motions were related to a strong coupling with the in-line responses, clearly identified in the plots of nondimensional frequency as well as in the trajectories in the XY-plane (Lissajous figures), particularly in the case of m⁎ = 1.00 and L/D = 2.0, for which figure-eight trajectories were clearly observed. The case of m⁎ = 1.00 deserves particular attention because of its smaller amplitude compared to the cases with the same aspect ratio and a larger mass ratio. This counter-intuitive behavior seems to be related to the process of energy transfer from the steady stream to the oscillatory hydroelastic system. Finally, it is noteworthy that the characteristic "Strouhal-like" number decreases as the aspect ratio decreases, as also observed in previous works available in the literature, most of them for stationary cylinders.
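For reference, the nondimensional groups quoted above follow the usual VIV conventions (standard definitions, supplied here rather than taken from the paper):

\[
m^{*} = \frac{m}{\rho\,\forall}, \qquad
A^{*} = \frac{A_{\max}}{D}, \qquad
St = \frac{f_{s}\,D}{U},
\]

where m is the oscillating structural mass, ρ the fluid density, ∀ the displaced volume, A_max the motion amplitude, D the cylinder diameter, f_s the vortex shedding frequency and U the free-stream velocity; a "Strouhal-like" number is obtained when f_s is read off the response of the oscillating cylinder rather than of a stationary one.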

Relevance:

30.00%

Publisher:

Abstract:

Precipitation retrieval over high latitudes, particularly snowfall retrieval over ice and snow, using satellite-based passive microwave radiometers is currently an unsolved problem. The challenge stems from the large variability of the microwave emissivity spectra of snow and ice surfaces, which can mimic, to some degree, the spectral characteristics of snowfall. This work investigates a new snowfall detection algorithm specific to high-latitude regions, based on a combination of active and passive sensors able to discriminate between snowing and non-snowing areas. The space-borne Cloud Profiling Radar (on CloudSat), the Advanced Microwave Sounding Units A and B (on NOAA-16) and the infrared spectrometer MODIS (on Aqua) were co-located for 365 days, from October 1st, 2006 to September 30th, 2007. CloudSat products were used as ground truth to calibrate and validate all the proposed algorithms. The methodological approach can be summarised in two steps. In the first step, an empirical search for a threshold aimed at discriminating the no-snow case was performed, following Kongoli et al. [2003]. Since this single-channel approach did not produce adequate results, a more statistically sound approach was attempted. Two different techniques, which make it possible to compute the probability of snowfall above and below a brightness temperature (BT) threshold, were applied to the available data. The first technique is based on a logistic distribution representing the probability of snow given the predictors. The second, termed the Bayesian Multivariate Binary Predictor (BMBP), is a fully Bayesian technique requiring no hypothesis on the shape of the probabilistic model (such as, for instance, the logistic one); it only requires estimation of the BT thresholds. The results show that both proposed methods are able to discriminate snowing and non-snowing conditions over the polar regions with a probability of correct detection larger than 0.5, highlighting the importance of a multispectral approach.
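As an illustration of the first, logistic technique, the sketch below computes a probability of snow from brightness temperatures; the channel choice and coefficients are hypothetical placeholders, not the values calibrated against CloudSat in this work:

    import numpy as np

    # Minimal sketch of the logistic-distribution approach described above.
    # Coefficients and channel names are illustrative placeholders, NOT the
    # values fitted in the study (those were calibrated against CloudSat).
    COEFS = np.array([-1.2, 0.04, -0.03])   # hypothetical: intercept + 2 channels

    def snow_probability(bt_150ghz, bt_183ghz):
        """P(snow | brightness temperatures) under a logistic model."""
        z = COEFS[0] + COEFS[1] * bt_150ghz + COEFS[2] * bt_183ghz
        return 1.0 / (1.0 + np.exp(-z))

    # A pixel is flagged as snowing when the probability exceeds a threshold
    # chosen to balance detection rate against false alarms.
    def detect_snow(bt_150ghz, bt_183ghz, threshold=0.5):
        return snow_probability(bt_150ghz, bt_183ghz) > threshold

    print(detect_snow(240.0, 250.0))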

Relevance:

30.00%

Publisher:

Abstract:

In territories where food production is mostly scattered across many small or medium-sized, or even domestic, farms, a large amount of heterogeneous residues is produced every year, since farmers usually carry out several different activities on their properties. The amount and composition of farm residues therefore change widely during the year, according to the particular production process being carried out at any given time. Coupling high-efficiency micro-cogeneration energy units with easy-to-handle biomass conversion equipment suitable for treating different materials would provide many important advantages to farmers and to the community as well, so that increasing the feedstock flexibility of gasification units is nowadays seen as a further essential step towards their widespread adoption in rural areas, and as a real necessity for their utilization at small scale. Two main research topics were considered of main concern for this purpose and are therefore discussed in this work: the impact of fuel properties on the development of the gasification process, and the technical feasibility of integrating small-scale gasification units with cogeneration systems. According to these two main aspects, the present work is divided into two main parts. The first focuses on the biomass gasification process, which was investigated in its theoretical aspects and then modelled analytically in order to simulate the thermo-chemical conversion of different biomass fuels, such as wood (park waste wood and softwood), wheat straw, sewage sludge and refuse-derived fuels. The main idea is to correlate the results of reactor design procedures with the physical properties of the biomasses and the corresponding working conditions of the gasifiers (the temperature profile, above all), in order to point out the main differences that prevent the use of the same conversion unit for different materials. To this end, a kinetic-free gasification model was initially developed in Excel sheets, considering different values of the air-to-biomass ratio and taking downdraft gasification technology as the particular application examined. An attempt was made to connect the differences in syngas production and working conditions (process temperatures, above all) among the considered fuels to some biomass properties, such as elemental composition and ash and water contents. The novelty of this analytical approach lies in the use of ratios of kinetic constants to determine the oxygen distribution among the different oxidation reactions (regarding volatile matter only), while equilibrium of the water-gas shift reaction was assumed in the gasification zone; in this way the energy and mass balances involved in the process algorithm were linked together as well. Moreover, the main advantage of this analytical tool is the ease with which the input data corresponding to particular biomass materials can be inserted into the model, so that a rapid evaluation of their thermo-chemical conversion behaviour can be obtained, based mainly on their chemical composition. Good agreement of the model results with literature and experimental data was found for almost all the considered materials (except for refuse-derived fuels, whose chemical composition does not fit the model assumptions). Subsequently, a dimensioning procedure for open-core downdraft gasifiers was set up, based on an analysis of the fundamental thermo-physical and thermo-chemical mechanisms that are assumed to regulate the main solid conversion steps involved in the gasification process.
Gasification units were schematically subdivided into four reaction zones, corresponding respectively to biomass heating, drying of the solids, pyrolysis and char gasification, and the time required for the full development of each of these steps was correlated with the kinetic rates (for pyrolysis and char gasification only) and with the heat and mass transfer phenomena from the gas to the solid phase. On the basis of this analysis, and according to the kinetic-free model results and the physical properties of the biomasses (particle size, above all), it was found that for all the considered materials the char gasification step is kinetically limited, and temperature is therefore the main working parameter controlling this step. Drying of the solids is mainly regulated by heat transfer from the bulk gas to the inner layers of the particles, and the corresponding time depends especially on particle size. Biomass heating is achieved almost entirely by radiative heat transfer from the hot reactor walls to the bed of material. For pyrolysis, instead, the working temperature, the particle size and the nature of the biomass itself (through its own heat of pyrolysis) all have comparable weights in the development of the process, so that the corresponding time may depend on any one of these factors according to the particular fuel being gasified and the particular conditions established inside the gasifier. The same analysis also led to an estimation of the reaction zone volumes for each biomass fuel, so that a comparison among the dimensions of the differently fed gasification units could finally be made. Each biomass material showed a different distribution of volumes, so that no single dimensioned gasification unit appears to be suitable for more than one biomass species. Nevertheless, since the reactor diameters turned out to be quite similar for all the examined materials, a single unit could be designed for all of them by adopting the largest diameter and by stacking the maximum height of each reaction zone, as calculated for the different biomasses. A total gasifier height of around 2400 mm would be obtained in this case. Besides, by arranging air injection nozzles at different levels along the reactor, the gasification zone could be properly positioned according to the particular material being gasified at any given time. Finally, since gasification and pyrolysis times were found to change considerably with even small temperature variations, the air feeding rate (on which the process temperatures depend) could also be regulated for each gasified material, so that the available reactor volumes would be suitable for the complete development of the solid conversion in each case, without noticeably changing the fluid-dynamic behaviour of the unit or the air/biomass ratio. The second part of this work deals with the gas cleaning systems to be adopted downstream of the gasifiers in order to run high-efficiency CHP units (i.e. internal combustion engines and micro-turbines). Especially when multi-fuel gasifiers are assumed to be used, more demanding gas cleaning lines need to be envisaged in order to reach the standard gas quality required to fuel cogeneration units. Indeed, the more heterogeneous the feed to the gasification unit, the more contaminant species can be simultaneously present in the exit gas stream and, as a consequence, suitable gas cleaning systems have to be designed. In this work, an overall study on the design of gas cleaning lines is carried out.
Unlike other research efforts in the same field, the main aim here is to define general arrangements for gas cleaning lines suitable for removing several contaminants from the gas stream, independently of the feedstock material and the energy plant size. The gas contaminant species taken into account in this analysis were: particulate, tars, sulphur (as H2S), alkali metals, nitrogen (as NH3) and acid gases (as HCl). For each of these species, alternative cleaning devices were designed for three different plant sizes, corresponding to gas flows of 8 Nm³/h, 125 Nm³/h and 350 Nm³/h respectively. Their performance was examined on the basis of their optimal working conditions (efficiency, temperature and pressure drops, above all) and their consumption of energy and materials. The designed units were then combined into different overall gas cleaning line arrangements ("paths"), following technical constraints determined mainly by the same performance analysis of the cleaning units and by the likely synergistic effects of contaminants on the proper working of some of them (filter clogging, catalyst deactivation, etc.). One of the main issues to be addressed in the design of the paths was the removal of tars from the gas stream, to prevent filter plugging and/or clogging of the line pipes. To this end, a catalytic tar cracking unit was identified as the only viable solution, and a catalytic material able to work at relatively low temperatures was therefore chosen. Nevertheless, a rapid drop in tar cracking efficiency was also estimated for this material, so that a high frequency of catalyst regeneration, with a correspondingly large air consumption for this operation, was calculated in all cases. Other difficulties had to be overcome in the abatement of alkali metals, which condense at lower temperatures than tars but also need to be removed in the first sections of the gas cleaning line in order to avoid corrosion of materials. In this case a dry scrubber technology was envisaged, using the same fine-particle filter units and choosing corrosion-resistant materials for them, such as ceramics. Apart from these two solutions, which seem unavoidable in the design of the gas cleaning line, high-temperature gas cleaning lines also turned out not to be feasible for the two larger plant sizes. Indeed, since the use of temperature control devices was excluded from the adopted design procedure, ammonia partial oxidation units (the only methods considered for the abatement of ammonia at high temperature) were not suitable for the large-scale units, because of the large increase in reactor temperature caused by the exothermic reactions involved in the process. In spite of these limitations, overall arrangements for each considered plant size were finally designed, so that the possibility of cleaning the gas up to the required standard was technically demonstrated, even when several contaminants are simultaneously present in the gas stream. Moreover, all the possible paths defined for the different plant sizes were compared with each other on the basis of a set of operational parameters, including total pressure drop, total energy losses, number of units and consumption of secondary materials.
On the basis of this analysis, dry gas cleaning methods proved preferable to those including water scrubber technology in all cases, especially because of the high water consumption of water scrubber units in the ammonia absorption process. This result is, however, tied to the possibility of using activated carbon units for ammonia removal and a nahcolite adsorber for hydrochloric acid; the very high efficiency of this latter material is also remarkable. Finally, as an estimate of the overall energy loss associated with the gas cleaning process, the total enthalpy losses estimated for the three plant sizes were compared with the energy contents of the respective gas streams, the latter obtained on the basis of the lower heating value of the gas only. This overall study of gas cleaning systems is thus proposed as an analytical tool by which different gas cleaning line configurations can be evaluated, according to the particular practical application they are adopted for and the size of the cogeneration unit they are connected to.
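As a small illustration of the kinetic-free modelling ingredients described above, the sketch below evaluates the water-gas shift equilibrium constant over typical gasification-zone temperatures; the Moe (1962) correlation used here is a common textbook choice and an assumption, not necessarily the relation implemented in the thesis's Excel model:

    import numpy as np

    # Water-gas shift: CO + H2O <-> CO2 + H2 (exothermic, so K falls with T).
    # K(T) from the Moe (1962) correlation -- an illustrative choice, not
    # necessarily the correlation used in this work.
    def k_wgs(T_kelvin):
        return np.exp(4577.8 / T_kelvin - 4.33)

    # At equilibrium, K = ([CO2][H2]) / ([CO][H2O]); fixing this ratio in the
    # gasification zone is what couples the mass and energy balances.
    for T in (900.0, 1000.0, 1100.0):   # typical gasification-zone temperatures
        print(f"T = {T:.0f} K  ->  K_wgs = {k_wgs(T):.3f}")

Note how K passes through 1 near 1100 K, which is why the assumed gasification-zone temperature has such a strong effect on the predicted CO/H2 split.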

Relevance:

30.00%

Publisher:

Abstract:

Part 1: Known constructions. This thesis first gives a detailed overview of the developments to date in the classical field of hypersurfaces with many singularities. The maximum number mu^n(d) of singularities on a hypersurface of degree d in P^n(C) is known only in very few cases; in P^3(C), for example, only for d ≤ 6. Apart from such exceptions, only upper and lower bounds exist. Part 2: New constructions. For small degrees d it is often possible to obtain better results than those given by general bounds. In this thesis we describe several algorithmic approaches for this, one of which uses computer algebra in characteristic 0. Our other algorithmic methods are based on a search over finite fields. Lifting the hypersurfaces found experimentally in this way, by exploiting their geometry or arithmetic, yields for example a surface of degree 7 with 99 real ordinary double points and a surface of degree 9 with 226 ordinary double points. These constructions give the first lower bounds for mu^3(d) for odd degree d > 5 that exceed the general bound. Our algorithm also has the potential to be applied to many other problems in algebraic geometry. Besides these algorithmic methods, we describe a construction of hypersurfaces of degree d in P^n with many A_j singularities, j ≥ 2. These examples, whose existence we prove using the theory of dessins d'enfants, exceed the known lower bounds in most cases and in particular yield new asymptotic lower bounds for j ≥ 2, n ≥ 3. Part 3: Visualization. We conclude with an application of our new visualization software surfex, which bundles the strengths of several existing programs, to the construction of affine equations for all 45 topological types of real cubic surfaces.
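For context, the cases with d ≤ 6 mentioned above are classical (standard background, not among the thesis's new results):

\[
\mu^3(3) = 4 \;(\text{Cayley}), \quad
\mu^3(4) = 16 \;(\text{Kummer}), \quad
\mu^3(5) = 31 \;(\text{Togliatti}), \quad
\mu^3(6) = 65 \;(\text{Barth}),
\]

against which the constructions described here give the new lower bounds mu^3(7) ≥ 99 and mu^3(9) ≥ 226.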

Relevance:

30.00%

Publisher:

Abstract:

This dissertation studies the geometric static problem of under-constrained cable-driven parallel robots (CDPRs) supported by n cables, with n ≤ 6. The task consists of determining the overall robot configuration when a set of n variables is assigned. When variables relating to the platform posture are assigned, an inverse geometric static problem (IGP) must be solved; whereas, when cable lengths are given, a direct geometric static problem (DGP) must be considered. Both problems are challenging, as the robot continues to preserve some degrees of freedom even after n variables are assigned, with the final configuration determined by the applied forces. Hence, kinematics and statics are coupled and must be resolved simultaneously. In this dissertation, a general methodology is presented for modelling the aforementioned scenario with a set of algebraic equations. An elimination procedure is provided, aimed at solving the governing equations analytically and obtaining a least-degree univariate polynomial in the corresponding ideal for any value of n. Although an analytical procedure based on elimination is important from a mathematical point of view, providing an upper bound on the number of solutions in the complex field, it is not practical to compute these solutions as it would be very time-consuming. Thus, for the efficient computation of the solution set, a numerical procedure based on homotopy continuation is implemented. A continuation algorithm is also applied to find a set of robot parameters with the maximum number of real assembly modes for a given DGP. Finally, the end-effector pose depends on the applied load and may change due to external disturbances. An investigation into equilibrium stability is therefore performed.
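The coupling of kinematics and statics described above can be made concrete with a small numerical example. The sketch below solves a planar DGP for a 2-cable robot carrying a rigid-bar platform (3 DOF, so the geometry alone is under-constrained and the system is closed by equilibrium); all dimensions, masses and the use of a plain Newton-type solver in place of homotopy continuation are illustrative assumptions, not the dissertation's actual formulation:

    import numpy as np
    from scipy.optimize import fsolve

    # Illustrative planar direct geometrico-static problem (DGP), n = 2 cables.
    # A rigid bar platform (3 DOF: x, y, phi) hangs from two fixed anchors.
    # All geometry, masses and cable lengths are made-up example values.
    A = np.array([[0.0, 0.0], [2.0, 0.0]])      # anchor points on the frame
    B_loc = np.array([[-0.3, 0.0], [0.3, 0.0]]) # attachments in platform frame
    L = np.array([1.5, 1.6])                    # assigned cable lengths
    W = np.array([0.0, -9.81 * 5.0])            # weight of a 5 kg platform

    def cross2(a, b):
        """z-component of the 2D cross product (moment about the centre)."""
        return a[0] * b[1] - a[1] * b[0]

    def residuals(z):
        x, y, phi, t1, t2 = z
        t = np.array([t1, t2])
        R = np.array([[np.cos(phi), -np.sin(phi)],
                      [np.sin(phi),  np.cos(phi)]])
        r = B_loc @ R.T                  # attachment offsets, world frame
        b = np.array([x, y]) + r         # attachment points, world frame
        d = A - b                        # platform -> anchor cable vectors
        lens = np.linalg.norm(d, axis=1)
        u = d / lens[:, None]            # unit vectors along the cables
        force = t[0] * u[0] + t[1] * u[1] + W            # force balance
        moment = cross2(r[0], t[0] * u[0]) + cross2(r[1], t[1] * u[1])
        # 2 geometric constraints + 3 equilibrium equations = 5 unknowns
        return [lens[0] - L[0], lens[1] - L[1], force[0], force[1], moment]

    sol = fsolve(residuals, [1.0, -1.4, 0.0, 25.0, 25.0])
    print("pose (x, y, phi):", sol[:3], " tensions:", sol[3:])

Starting such a solver from different initial guesses recovers different assembly modes, which is exactly why the dissertation resorts to elimination and homotopy continuation to enumerate all solutions systematically.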

Relevance:

30.00%

Publisher:

Abstract:

In many areas of industrial manufacturing, such as the automotive industry, digital mock-ups are used in order to support the development of complex machines by computer systems as well as possible. Motion planning algorithms play an important role here, ensuring that these digital prototypes can be assembled without collisions. In recent decades, sampling-based methods have proven particularly successful for this purpose. They generate a large number of random poses for the object to be installed or removed and use a collision detection mechanism to check the validity of the individual poses. Collision detection therefore plays an essential role in the design of efficient motion planning algorithms. One difficulty for this class of planners is posed by so-called "narrow passages", which occur wherever the freedom of movement of the objects to be planned is severely restricted. In such places it can be difficult to find a sufficient number of collision-free samples, and it may then be necessary to employ more sophisticated techniques to achieve good algorithm performance.

This thesis is divided into two parts. In the first part we investigate parallel collision detection algorithms. Since we target an application in sampling-based motion planners, we consider a problem setting in which the same two objects are repeatedly tested for collision, but in a large number of different relative poses. We implement and compare several methods that use bounding volume hierarchies (BVHs) and hierarchical grids as acceleration structures. All described methods were parallelized across multiple CPU cores. In addition, we compare different CUDA kernels for performing BVH-based collision tests on the GPU. Besides different distributions of the work among the parallel GPU threads, we investigate the effect of different memory access patterns on the performance of the resulting algorithms. We further present a number of approximate collision tests based on the described methods; when a lower accuracy of the tests is tolerable, a further performance improvement can be achieved.

In the second part of the thesis we describe a parallel, sampling-based motion planner of our own design for handling highly complex problems with multiple "narrow passages". The method works in two phases. The basic idea is to conceptually allow small errors in the first planning phase in order to increase planning efficiency, and then to repair the resulting path in a second phase. The planner used in phase I is based on so-called Expansive Space Trees. In addition, we equipped the planner with a push-out operation that allows smaller collisions to be resolved, thereby increasing efficiency in regions with restricted freedom of movement. Optionally, our implementation allows the use of approximate collision tests; this further lowers the accuracy of the first planning phase but also leads to a further performance gain.
The motion paths resulting from phase I may then not be completely collision-free. To repair these paths, we designed a novel planning algorithm that, restricted locally to a small neighbourhood around the existing path, plans a new, collision-free motion path. We tested the described algorithm on a class of new, difficult metal puzzles, some of which exhibit multiple "narrow passages". To our knowledge, no collection of comparably complex benchmarks is publicly available, nor did we find a description of comparably complex benchmarks in the motion planning literature.
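As an illustration of the "same two objects, many poses" query pattern from the first part, the sketch below runs a trivial broad-phase test (world-space AABB overlap) over many random poses in parallel; the point clouds, the pose sampling and the use of Python multiprocessing are simple stand-ins for the BVH/grid structures and CPU/CUDA parallelization actually studied in the thesis:

    import numpy as np
    from multiprocessing import Pool

    # Fixed seed so worker processes regenerate identical "geometry".
    _rng = np.random.default_rng(42)
    OBJ_A = _rng.random((500, 3))           # point cloud standing in for mesh A
    OBJ_B = _rng.random((500, 3)) + 0.5     # point cloud standing in for mesh B

    def aabb(points):
        return points.min(axis=0), points.max(axis=0)

    def overlaps(pose):
        """Rigidly transform B by (R, t) and test AABB overlap with A."""
        R, t = pose
        lo_a, hi_a = aabb(OBJ_A)
        lo_b, hi_b = aabb(OBJ_B @ R.T + t)
        return bool(np.all(lo_a <= hi_b) and np.all(lo_b <= hi_a))

    def random_pose(rng):
        # random rotation via QR decomposition (sign-fixed to det = +1)
        q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
        if np.linalg.det(q) < 0:
            q = -q
        return q, rng.uniform(-1.0, 1.0, size=3)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        poses = [random_pose(rng) for _ in range(10_000)]
        with Pool() as pool:                # embarrassingly parallel over poses
            flags = pool.map(overlaps, poses)
        print(sum(flags), "of", len(poses), "poses pass the broad phase")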

Relevance:

30.00%

Publisher:

Abstract:

The electron Monte Carlo (eMC) dose calculation algorithm in Eclipse (Varian Medical Systems) is based on the macro MC method and is able to predict dose distributions for high-energy electron beams with high accuracy. However, there are limitations for low-energy electron beams. This work aims to improve the accuracy of the dose calculation using eMC for 4 and 6 MeV electron beams of Varian linear accelerators. Improvements implemented into the eMC include (1) improved determination of the initial electron energy spectrum by increased resolution of the mono-energetic depth dose curves used during beam configuration; (2) inclusion of all the scrapers of the applicator in the beam model; (3) reduction of the maximum size of the sphere selected within the macro MC transport when the energy of the incident electron is below certain thresholds. The impact of these changes is investigated by comparing calculated dose distributions for 4 and 6 MeV electron beams at source-to-surface distances (SSD) of 100 and 110 cm, with applicators ranging from 6 × 6 to 25 × 25 cm², of a Varian Clinac 2300C/D with the corresponding measurements. Dose differences between calculated and measured absolute depth dose curves are reduced from 6% to less than 1.5% for both energies and all applicators considered at an SSD of 100 cm. Using the original eMC implementation, absolute dose profiles at depths of 1 cm, d_max and R50 in water lead to dose differences of up to 8% for applicators larger than 15 × 15 cm² at SSD 100 cm. Those differences are now reduced to less than 2% for all dose profiles investigated when the improved version of eMC is used. At an SSD of 110 cm the dose difference for the original eMC version is even more pronounced and can be larger than 10%. Those differences are reduced to within 2% or 2 mm with the improved version. In this work several enhancements were made to the eMC algorithm, leading to significant improvements in the accuracy of the dose calculation for 4 and 6 MeV electron beams of Varian linear accelerators.

Relevance:

30.00%

Publisher:

Abstract:

The resting and maximum in situ cardiac performance of Newfoundland Atlantic cod (Gadus morhua) acclimated to 10, 4 and 0°C were measured at their respective acclimation temperatures, and when acutely exposed to temperature changes: i.e. hearts from 10°C fish cooled to 4°C, and hearts from 4°C fish measured at 10 and 0°C. Intrinsic heart rate (f_H) decreased from 41 beats min⁻¹ at 10°C to 33 beats min⁻¹ at 4°C and 25 beats min⁻¹ at 0°C. However, this degree of thermal dependency was not reflected in maximal cardiac output (Q_max values were ~44, ~37 and ~34 ml min⁻¹ kg⁻¹ at 10, 4 and 0°C, respectively). Further, cardiac scope showed a slight positive compensation between 4 and 0°C (Q₁₀ = 1.7), and full, if not a slight over-compensation, between 10 and 4°C (Q₁₀ = 0.9). The maximal performance of hearts exposed to an acute decrease in temperature (i.e. from 10 to 4°C and 4 to 0°C) was comparable to that measured for hearts from 4°C- and 0°C-acclimated fish, respectively. In contrast, 4°C-acclimated hearts significantly out-performed 10°C-acclimated hearts when tested at a common temperature of 10°C (in terms of both Q_max and power output). Only minimal differences in cardiac function were seen between hearts stimulated with basal (5 nmol l⁻¹) versus maximal (200 nmol l⁻¹) levels of adrenaline, the effects of which were not temperature dependent. These results: (1) show that maximum performance of the isolated cod heart is not compromised by exposure to cold temperatures; and (2) support data from other studies, which show that, in contrast to salmonids, cod cardiac performance/myocardial contractility is not dependent upon humoral adrenergic stimulation.
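The Q₁₀ values quoted for cardiac scope follow the standard van 't Hoff temperature coefficient (textbook definition, given here for reference):

\[
Q_{10} = \left(\frac{R_2}{R_1}\right)^{10/(T_2 - T_1)},
\]

where R₁ and R₂ are the rates (here, cardiac scope) at temperatures T₁ and T₂. For example, Q₁₀ = 0.9 between 4 and 10°C means scope was essentially unchanged across that interval, i.e. full (or slight over-) compensation.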

Relevance:

30.00%

Publisher:

Abstract:

The GLAaS algorithm for pretreatment intensity-modulated radiation therapy absolute dose verification based on amorphous silicon detectors, as described in Nicolini et al. [G. Nicolini, A. Fogliata, E. Vanetti, A. Clivio, and L. Cozzi, Med. Phys. 33, 2839-2851 (2006)], was tested under a variety of experimental conditions to investigate its robustness, the possibility of using it in different clinics, and its performance. GLAaS was tested on a low-energy Varian Clinac (6 MV) equipped with an amorphous silicon Portal Vision PV-aS500 with electronic readout IAS2, and on a high-energy Clinac (6 and 15 MV) equipped with a PV-aS1000 and IAS3 electronics. Tests were performed for three calibration conditions: A: adding buildup on top of the cassette such that SDD − SSD = d_max and comparing measurements with corresponding doses computed at d_max; B: without adding any buildup on top of the cassette and considering only the intrinsic water-equivalent thickness of the electronic portal imaging device (0.8 cm); and C: without adding any buildup on top of the cassette but comparing measurements against doses computed at d_max. This procedure is similar to that usually applied when in vivo dosimetry is performed with solid-state diodes without sufficient buildup material. Quantitatively, the gamma index (γ), as described by Low et al. [D. A. Low, W. B. Harms, S. Mutic, and J. A. Purdy, Med. Phys. 25, 656-660 (1998)], was assessed. The gamma index was computed for a distance to agreement (DTA) of 3 mm, with the dose difference ΔD set to 2%, 3% and 4%. As a measure of the quality of the results, the fraction of field area with gamma larger than 1 (%FA) was scored. Results over a set of 50 test samples (including fields from head and neck, breast, prostate, anal canal and brain cases), and from long-term routine usage, demonstrated the robustness and stability of GLAaS. In general, the mean values of %FA remain below 3% for ΔD equal to or larger than 3%, while they are slightly larger for ΔD = 2%, with %FA in the range from 3% to 8%. Since its introduction into routine practice, 1453 fields have been verified with GLAaS at the authors' institute (6 MV beam). Using a DTA of 3 mm and a ΔD of 4%, the authors obtained %FA = 0.9 ± 1.1 for the entire data set while, stratifying according to the dose calculation algorithm, they observed %FA = 0.7 ± 0.9 for fields computed with the analytical anisotropic algorithm and %FA = 2.4 ± 1.3 for pencil-beam based fields, with a statistically significant difference between the two groups. If data are stratified according to field splitting, %FA = 0.8 ± 1.0 was observed for split fields and 1.0 ± 1.2 for nonsplit fields, without any significant difference.
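For reference, the γ index of Low et al. combines a dose-difference and a distance-to-agreement criterion (this is the published definition):

\[
\Gamma(\mathbf{r}_m, \mathbf{r}_c) =
\sqrt{\frac{\lVert \mathbf{r}_c - \mathbf{r}_m \rVert^2}{\Delta d_M^2}
    + \frac{\left(D_c(\mathbf{r}_c) - D_m(\mathbf{r}_m)\right)^2}{\Delta D_M^2}},
\qquad
\gamma(\mathbf{r}_m) = \min_{\mathbf{r}_c} \Gamma(\mathbf{r}_m, \mathbf{r}_c),
\]

where r_m and r_c are measured and calculated points with doses D_m and D_c; a point passes when γ ≤ 1. Here Δd_M = 3 mm (DTA) and ΔD_M = 2-4% (dose difference), and %FA counts the points that fail.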

Relevance:

30.00%

Publisher:

Abstract:

Users of cochlear implant systems, that is, of auditory aids which stimulate the auditory nerve at the cochlea electrically, often complain about poor speech understanding in noisy environments. Despite the proven advantages of multimicrophone directional noise reduction systems for conventional hearing aids, only one major manufacturer has so far implemented such a system in a product, presumably because of the added power consumption and size. We present a physically small (intermicrophone distance 7 mm) and computationally inexpensive adaptive noise reduction system suitable for behind-the-ear cochlear implant speech processors. Supporting algorithms, which allow the adjustment of the opening angle and the maximum noise suppression, are proposed and evaluated. A portable real-time device for test in real acoustic environments is presented.
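The abstract does not spell out the two-microphone processing, so as an illustration a common adaptive first-order differential beamformer of the kind used with such small spacings can be sketched as follows; the structure and the NLMS adaptation are a generic textbook scheme (e.g. Elko's adaptive differential array), not necessarily the authors' exact algorithm, and the sample rate is an assumption:

    import numpy as np

    C, D, FS = 343.0, 0.007, 16_000       # speed of sound, 7 mm spacing, rate
    TAU = D / C * FS                      # inter-mic delay, fractional samples

    def frac_delay(x, d):
        """Delay x by a fractional number of samples (linear interpolation)."""
        n = np.arange(len(x))
        return np.interp(n - d, n, x, left=0.0)

    def adaptive_differential(front, rear, mu=0.05, beta_max=1.0):
        """First-order adaptive differential array: y = cf - beta * cb, with
        the back-cardioid weight beta adapted by NLMS to minimise output
        power. Clipping beta bounds the steerable null, which is one way an
        opening angle and a maximum noise suppression can be enforced."""
        cf = front - frac_delay(rear, TAU)    # forward-facing cardioid
        cb = rear - frac_delay(front, TAU)    # backward-facing cardioid
        y = np.empty_like(cf)
        beta = 0.0
        for i in range(len(cf)):
            y[i] = cf[i] - beta * cb[i]
            beta = np.clip(beta + mu * y[i] * cb[i] / (cb[i] ** 2 + 1e-9),
                           0.0, beta_max)
        return y

Per-sample adaptation like this is computationally cheap, which is consistent with the abstract's emphasis on an algorithm light enough for a behind-the-ear processor.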

Relevance:

30.00%

Publisher:

Abstract:

This dissertation discusses structural-electrostatic modeling techniques, genetic algorithm based optimization and control design for electrostatic micro devices. First, an alternative modeling technique, the interpolated force model, for electrostatic micro devices is discussed. The method provides improved computational efficiency relative to a benchmark model, as well as improved accuracy for irregular electrode configurations relative to a common approximate model, the parallel plate approximation model. For the configuration most similar to two parallel plates, expected to be the best case scenario for the approximate model, both the parallel plate approximation model and the interpolated force model maintained less than 2.2% error in static deflection compared to the benchmark model. For the configuration expected to be the worst case scenario for the parallel plate approximation model, the interpolated force model maintained less than 2.9% error in static deflection while the parallel plate approximation model is incapable of handling the configuration. Second, genetic algorithm based optimization is shown to improve the design of an electrostatic micro sensor. The design space is enlarged from published design spaces to include the configuration of both sensing and actuation electrodes, material distribution, actuation voltage and other geometric dimensions. For a small population, the design was improved by approximately a factor of 6 over 15 generations to a fitness value of 3.2 fF. For a larger population seeded with the best configurations of the previous optimization, the design was improved by another 7% in 5 generations to a fitness value of 3.0 fF. Third, a learning control algorithm is presented that reduces the closing time of a radiofrequency microelectromechanical systems switch by minimizing bounce while maintaining robustness to fabrication variability. Electrostatic actuation of the plate causes pull-in with high impact velocities, which are difficult to control due to parameter variations from part to part. A single degree-of-freedom model was utilized to design a learning control algorithm that shapes the actuation voltage based on the open/closed state of the switch. Experiments on 3 test switches show that after 5-10 iterations, the learning algorithm lands the switch with an impact velocity not exceeding 0.2 m/s, eliminating bounce.
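For context, the parallel plate approximation model mentioned above rests on the standard electrostatic force between two parallel plates (standard formula, quoted here for reference):

\[
F = \frac{\varepsilon_0 A V^2}{2 g^2},
\]

where A is the overlap area, V the applied voltage and g the gap; its accuracy degrades for irregular electrode configurations, which is precisely the regime the interpolated force model targets.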

Relevance:

30.00%

Publisher:

Abstract:

This thesis develops high-performance real-time signal processing modules for direction of arrival (DOA) estimation for localization systems. It proposes highly parallel algorithms for performing subspace decomposition and polynomial rooting, which are traditionally implemented using sequential algorithms. The proposed algorithms address the emerging need for real-time localization in a wide range of applications. As the antenna array size increases, the complexity of the signal processing algorithms increases, making it increasingly difficult to satisfy real-time constraints. This thesis addresses real-time implementation by proposing parallel algorithms that achieve considerable improvement over traditional algorithms, especially for systems with a larger number of antenna array elements. Singular value decomposition (SVD) and polynomial rooting are two computationally complex steps and act as the bottleneck to achieving real-time performance. The proposed algorithms are suitable for implementation on field programmable gate arrays (FPGAs), single instruction multiple data (SIMD) hardware or application-specific integrated circuits (ASICs), which offer large numbers of processing elements that can be exploited for parallel processing. The designs proposed in this thesis are modular, easily expandable and easy to implement. First, the thesis proposes a fast-converging SVD algorithm. The proposed method reduces the number of iterations needed to converge to the correct singular values, thus coming closer to real-time performance. A general algorithm and a modular system design are provided, making it easy for designers to replicate and extend the design to larger matrix sizes. Moreover, the method is highly parallel, which can be exploited on the various hardware platforms mentioned earlier. A fixed-point implementation of the proposed SVD algorithm is presented. The FPGA design is pipelined to the maximum extent to increase the maximum achievable frequency of operation, and the system was developed with the objective of achieving high throughput; the various modern cores available in FPGAs that were used to maximize performance are presented in detail. Finally, a parallel polynomial rooting technique based on Newton's method, applicable exclusively to root-MUSIC polynomials, is proposed. Unique characteristics of the complex dynamics of the root-MUSIC polynomial were exploited to derive this rooting method. The technique exhibits parallelism and converges to the desired roots within a fixed number of iterations, making it suitable for rooting polynomials of large degree. We believe this is the first time that the complex dynamics of the root-MUSIC polynomial have been analyzed to propose an algorithm. In all, the thesis addresses two major bottlenecks in a direction of arrival estimation system by providing simple, high-throughput, parallel algorithms.
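As an illustration of the rooting step, the sketch below runs independent Newton iterations from starting points spread on the unit circle, where the roots of interest of a root-MUSIC polynomial cluster; this generic scheme (Horner evaluation plus per-start Newton) is only a plain stand-in for the thesis's root-MUSIC-specific method, but it shows why the approach parallelizes well: each start is independent of the others.

    import numpy as np

    def horner(coeffs, z):
        """Evaluate p(z) and p'(z) in one pass (coeffs from highest degree)."""
        p, dp = coeffs[0], 0.0 + 0.0j
        for c in coeffs[1:]:
            dp = dp * z + p
            p = p * z + c
        return p, dp

    def newton_roots(coeffs, n_starts=64, iters=30, tol=1e-10):
        """Independent Newton iterations from points on the unit circle;
        converged roots may repeat and would be deduplicated in practice.
        Each start is independent, hence trivially parallelizable."""
        starts = np.exp(2j * np.pi * np.arange(n_starts) / n_starts)
        roots = []
        for z in starts:
            for _ in range(iters):
                p, dp = horner(coeffs, z)
                if abs(dp) < 1e-15:
                    break
                step = p / dp
                z -= step
                if abs(step) < tol:
                    break
            if abs(horner(coeffs, z)[0]) < 1e-8:
                roots.append(z)
        return np.array(roots)

    # usage: roots of z^4 - 1 (recovers the four 4th roots of unity)
    print(np.round(newton_roots([1, 0, 0, 0, -1]), 6))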

Relevance:

30.00%

Publisher:

Abstract:

A tandem mass spectral database system consists of a library of reference spectra and a search program. State-of-the-art search programs show a high tolerance for variability in compound-specific fragmentation patterns produced by collision-induced decomposition and enable sensitive and specific 'identity search'. In this communication, performance characteristics of two search algorithms combined with the 'Wiley Registry of Tandem Mass Spectral Data, MSforID' (Wiley Registry MSMS, John Wiley and Sons, Hoboken, NJ, USA) were evaluated. The search algorithms tested were the MSMS search algorithm implemented in the NIST MS Search program 2.0g (NIST, Gaithersburg, MD, USA) and the MSforID algorithm (John Wiley and Sons, Hoboken, NJ, USA). Sample spectra were acquired on different instruments and thus covered a broad range of possible experimental conditions, or were generated in silico. For each algorithm, more than 30,000 matches were performed. Statistical evaluation of the library search results revealed that, in principle, both search algorithms can be combined with the Wiley Registry MSMS to create a reliable identification tool. It appears, however, that a higher degree of spectral similarity is necessary to obtain a correct match with the NIST MS Search program. This characteristic of the NIST MS Search program has a positive effect on specificity, as it helps to avoid false positive matches (type I errors), but reduces sensitivity. Thus, particularly with sample spectra acquired on instruments whose setup differs from tandem-in-space-type fragmentation, a comparably higher number of false negative matches (type II errors) was observed when searching the Wiley Registry MSMS.
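As an illustration of the 'identity search' idea, the sketch below scores spectral similarity with a binned cosine match factor; the binning width and the score itself are generic illustrative choices, not the actual metric of either the NIST or the MSforID algorithm:

    import numpy as np

    def match_factor(query, reference, bin_width=0.5):
        """Cosine-similarity match between two peak lists [(mz, intensity), ...].
        Binning peaks to a common m/z grid is a simple stand-in for the more
        elaborate peak matching real search programs perform."""
        def binned(peaks):
            mz, inten = np.asarray(peaks).T
            idx = np.round(mz / bin_width).astype(int)
            vec = np.zeros(idx.max() + 1)
            np.add.at(vec, idx, inten)       # sum intensities per m/z bin
            return vec
        a, b = binned(query), binned(reference)
        n = max(len(a), len(b))
        a = np.pad(a, (0, n - len(a)))
        b = np.pad(b, (0, n - len(b)))
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    # usage: identical spectra score ~1.0, dissimilar ones near 0
    spec = [(91.05, 100.0), (119.08, 40.0), (163.07, 15.0)]
    print(match_factor(spec, spec))

Raising the acceptance threshold on such a score trades sensitivity for specificity, which is exactly the behaviour contrasted between the two programs above.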

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE In this study, the Progressive Resolution Optimizer PRO3 (Varian Medical Systems) is compared to the previous version PRO2 with respect to its potential to improve dose sparing of the organs at risk (OAR) and dose coverage of the PTV for head and neck cancer patients. MATERIALS AND METHODS Volumetric modulated arc therapy (VMAT) treatment plans were generated for eight head and neck cancer patients. All cases have 2-3 phases, and the total prescribed dose (PD) was 60-72 Gy to the PTV. The study focuses mainly on the phase 1 plans, which all have an identical PD of 54 Gy and complex PTV structures overlapping the parotids. Optimization was performed based on planning objectives for the PTV according to ICRU 83, with minimal dose to the spinal cord and to the parotids outside the PTV. In order to assess the quality of the optimization algorithms, an identical set of constraints was used for both PRO2 and PRO3. The resulting treatment plans were investigated with respect to dose distribution, based on analysis of the dose-volume histograms. RESULTS For the phase 1 plans (PD = 54 Gy), the near-maximum dose D2% of the spinal cord could be reduced to 22 ± 5 Gy with PRO3, as compared to 32 ± 12 Gy with PRO2, averaged over all patients. The mean dose to the parotids was also lower in PRO3 plans than in PRO2 plans, but the differences were less pronounced. A PTV coverage of V95% = 97 ± 1% could be reached with PRO3, as compared to 86 ± 5% with PRO2. In clinical routine, these PRO2 plans would require modifications to obtain better PTV coverage at the cost of higher OAR doses. CONCLUSION A comparison between the PRO3 and PRO2 optimization algorithms was performed for eight head and neck cancer patients. In general, the quality of VMAT plans for head and neck patients is improved with PRO3 as compared to PRO2. The dose to OARs can be reduced significantly, especially for the spinal cord. These reductions are achieved with better PTV coverage than with PRO2. The improved spinal cord sparing offers new opportunities for all types of paraspinal tumors and for re-irradiation of recurrent tumors or second malignancies.
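The ICRU 83-style dose-volume metrics used above are (standard definitions): D2% is the near-maximum dose, i.e. the minimum dose received by the hottest 2% of the structure volume, and V95% is the fraction of the PTV receiving at least 95% of the prescribed dose:

\[
V_{95\%} = \frac{\operatorname{Vol}\{\, \mathbf{r} \in \mathrm{PTV} : D(\mathbf{r}) \ge 0.95\,\mathrm{PD} \,\}}{\operatorname{Vol}(\mathrm{PTV})}.
\]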

Relevance:

30.00%

Publisher:

Abstract:

In this paper we solve a problem raised by Gutiérrez and Montanari about comparison principles for H-convex functions on subdomains of Heisenberg groups. Our approach is based on the notion of the sub-Riemannian horizontal normal mapping and uses degree theory for set-valued maps. The comparison principle, combined with a Harnack inequality, is applied to prove an Aleksandrov-type maximum principle describing the correct boundary behavior of continuous H-convex functions vanishing at the boundary of horizontally bounded subdomains of Heisenberg groups. This result answers a question by Garofalo and Tournier. The sharpness of our results is illustrated by examples.
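For background, in one common convention the Heisenberg group H^n carries the horizontal vector fields (a standard definition, not taken from the paper; sign conventions vary):

\[
X_i = \partial_{x_i} - \frac{y_i}{2}\,\partial_t, \qquad
Y_i = \partial_{y_i} + \frac{x_i}{2}\,\partial_t, \qquad i = 1,\dots,n,
\]

and a function u is H-convex when s ↦ u(g · exp(sV)) is convex for every horizontal direction V; the comparison and maximum principles above are phrased for this class of functions.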