931 results for "single stage power conversion"
Abstract:
Post-exercise hypotension (PEH), the reduction of blood pressure (BP) after a single bout of exercise, is of great clinical relevance. As the magnitude of this phenomenon seems to depend on pre-exercise BP values, and chronic exercise training in hypertensive individuals leads to BP reduction, PEH could be attenuated in this context. Therefore, the aim of the present study was to investigate whether PEH remains constant after resistance exercise training. Fifteen hypertensive individuals (46 +/- 8 years; 88 +/- 16 kg; 30 +/- 6% body fat; 150 +/- 13/93 +/- 5 mm Hg systolic/diastolic BP, SBP/DBP) were withdrawn from medication and performed 12 weeks of moderate-intensity resistance training. Parameters of cardiovascular function were evaluated before and after the training period. Before the training program, hypertensive volunteers showed significant PEH. After an acute moderate-intensity resistance exercise session with three sets of 12 repetitions (60% of one repetition maximum) and a total of seven exercises, BP was reduced post-exercise (45-60 min) by an average of approximately -22 mm Hg for SBP, -8 mm Hg for DBP and -13 mm Hg for mean arterial pressure (P<0.05). However, this acute hypotensive effect did not occur after the 12 weeks of training (P>0.05). In conclusion, our data demonstrate that PEH following an acute exercise session can indeed be attenuated after 12 weeks of training in stage 1 hypertensive patients not using antihypertensive medication. Journal of Human Hypertension (2012) 26, 533-539; doi:10.1038/jhh.2011.67; published online 7 July 2011
Abstract:
Background Cardiovascular disease is the leading cause of death in Brazil, and hypertension is its major risk factor. The benefit of drug treatment to prevent major cardiovascular events has been consistently demonstrated. Angiotensin-receptor blockers (ARB) have been the preferential drugs in the management of hypertension worldwide, despite the absence of any consistent evidence of advantage over older agents, and the concern that they may be associated with lower renal protection and a risk for cancer. Diuretics are as efficacious as other agents, are well tolerated, and have a longer duration of action and low cost, but they have scarcely been compared with ARBs. A study comparing a diuretic and an ARB is therefore warranted. Methods/design This is a randomized, double-blind clinical trial comparing the association of chlorthalidone and amiloride with losartan as the first drug option in patients aged 30 to 70 years with stage I hypertension. The primary outcomes will be variation of blood pressure over time, adverse events, and development or worsening of microalbuminuria and of left ventricular hypertrophy on the EKG. The secondary outcomes will be fatal or non-fatal cardiovascular events: myocardial infarction, stroke, heart failure, evidence of new subclinical atherosclerosis and sudden death. The study will last 18 months. The sample size will be 1200 participants per group, in order to confer enough power to test all primary outcomes. The project was approved by the Ethics Committee of each participating institution. Discussion The putative pleiotropic effects of ARB agents, particularly renal protection, have been disputed, and ARBs have scarcely been compared with diuretics in large clinical trials, even though diuretics are at least as efficacious as newer agents in managing hypertension. Even if the null hypothesis is not rejected, the information will be useful for health care policy to treat hypertension in Brazil. Clinical trials registration number ClinicalTrials.gov: NCT00971165
Abstract:
This work proposes a computational tool to assist power system engineers in the field tuning of power system stabilizers (PSSs) and Automatic Voltage Regulators (AVRs). The outcome of this tool is a range of gain values for these controllers within which there is a theoretical guarantee of stability for the closed-loop system. This range is given as a set of limit values for the static gains of the controllers of interest, in such a way that the engineer responsible for the field tuning of PSSs and/or AVRs can be confident with respect to system stability when adjusting the corresponding static gains within this range. This feature of the proposed tool is highly desirable from a practical viewpoint, since the PSS and AVR commissioning stage always involves some readjustment of the controller gains to account for the differences between the nominal model and the actual behavior of the system. By capturing these differences as uncertainties in the model, this computational tool is able to guarantee stability for the whole uncertain model using an approach based on linear matrix inequalities. It is also important to remark that the tool proposed in this paper can also be applied to other types of parameters of either PSSs or Power Oscillation Dampers, as well as to other types of controllers (such as speed governors, for example). To show its effectiveness, applications of the proposed tool to two benchmarks for small-signal stability studies are presented at the end of this paper.
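A minimal sketch of the kind of LMI feasibility test such a tool builds on (a generic Python/cvxpy illustration, not the paper's actual formulation; all matrices below are hypothetical): if a common Lyapunov matrix P certifies every vertex of a polytopic uncertain closed-loop model, stability holds over the whole gain range, provided the closed-loop state matrix depends affinely on the static gain.

    import numpy as np
    import cvxpy as cp

    def gain_range_certified(A_vertices):
        """Feasibility of a common P > 0 with A_i^T P + P A_i < 0 at every
        vertex of the uncertain model (standard quadratic-stability LMI)."""
        n = A_vertices[0].shape[0]
        P = cp.Variable((n, n), symmetric=True)
        eps = 1e-6
        constraints = [P >> eps * np.eye(n)]
        for A in A_vertices:
            constraints.append(A.T @ P + P @ A << -eps * np.eye(n))
        problem = cp.Problem(cp.Minimize(0), constraints)
        problem.solve(solver=cp.SCS)
        return problem.status in ("optimal", "optimal_inaccurate")

    # Hypothetical closed-loop matrices at the lower and upper gain limits;
    # feasibility certifies every intermediate gain (convex combination).
    A_low = np.array([[0.0, 1.0], [-2.0, -0.4]])
    A_high = np.array([[0.0, 1.0], [-6.0, -1.2]])
    print(gain_range_certified([A_low, A_high]))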
Abstract:
Network reconfiguration for service restoration (SR) in distribution systems is a complex optimization problem. For large-scale distribution systems, it is computationally hard to find adequate SR plans in real time, since the problem is combinatorial and non-linear, involving several constraints and objectives. Two Multi-Objective Evolutionary Algorithms that use Node-Depth Encoding (NDE) have proved able to efficiently generate adequate SR plans for large distribution systems: (i) the hybridization of the Non-Dominated Sorting Genetic Algorithm-II (NSGA-II) with NDE, named NSGA-N; and (ii) a Multi-Objective Evolutionary Algorithm based on subpopulation tables that uses NDE, named MEAN. Two further challenges are addressed here: designing SR plans for larger systems that are as good as those for relatively smaller ones, and plans for multiple faults that are as good as those for a single fault. In order to tackle both challenges, this paper proposes a method that combines NSGA-N, MEAN and a new heuristic. This heuristic focuses the application of NDE operators on network zones in alarm, according to technical constraints. The method generates SR plans of similar quality in distribution systems of significantly different sizes (from 3860 to 30,880 buses). Moreover, the number of switching operations required to implement the SR plans generated by the proposed method increases only moderately with the number of faults.
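For illustration, the Pareto-dominance machinery underlying NSGA-II-style methods such as NSGA-N and MEAN can be sketched as follows (a generic Python fragment under assumed objectives, here out-of-service load and number of switching operations, both minimized; not the NDE-specific implementation):

    def dominates(a, b):
        """a dominates b: no worse in every objective, better in at least one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def nondominated_fronts(scores):
        """Split solutions (tuples of objective values) into successive
        Pareto fronts, as in NSGA-II's sorting stage."""
        fronts, remaining = [], list(range(len(scores)))
        while remaining:
            front = [i for i in remaining
                     if not any(dominates(scores[j], scores[i])
                                for j in remaining if j != i)]
            fronts.append(front)
            remaining = [i for i in remaining if i not in front]
        return fronts

    # Hypothetical SR plans scored as (out-of-service load, switching ops):
    plans = [(10, 4), (8, 6), (12, 3), (8, 5), (15, 2)]
    print(nondominated_fronts(plans))  # [[0, 2, 3, 4], [1]]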
Abstract:
A new conversion structure for three-phase grid-connected photovoltaic (PV) generation plants is presented and discussed in this thesis. The conversion scheme is based on two insulated PV arrays, each one feeding the dc bus of a standard 2-level three-phase voltage source inverter (VSI). The inverters are connected to the grid by a traditional three-phase transformer having open-end windings at the inverter side and either star or delta connection at the grid side. The resulting conversion structure is able to perform as a multilevel VSI, equivalent to a 3-level inverter, doubling the power capability of a single VSI with given voltage and current ratings. Different modulation schemes able to generate proper multilevel voltage waveforms have been discussed and compared. They include known algorithms, some developments of them, and new original approaches. The goal was to share the grid power between the two VSIs with a given ratio within each PWM cycle period, with a PWM pattern suitable for implementation in industrial DSPs. It has been shown that an extension of the modulation methods for the standard two-level inverter can provide an elegant solution for the dual two-level inverter. An original control method has been introduced to regulate the dc-link voltage of each VSI, according to the voltage reference given by a single MPPT controller. A particular MPPT algorithm, based on the comparison of the operating points of the two PV arrays, has been successfully tested: a small, deliberately introduced difference between the two operating dc voltages drives the system towards the MPP in a fast and accurate manner. Simulations, experiments, or both accompanied all theoretical developments. For the simulations, the Simulink tool of Matlab was adopted, whereas the experiments were carried out on a full-scale low-voltage prototype of the whole PV generation system. All the research work was done at the Lab of the Department of Electrical Engineering, University of Bologna.
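A rough Python sketch of the dual-array MPPT idea (hypothetical values and a toy power-voltage curve; the thesis implementation acts through the dc-link voltage control of the two VSIs): the two arrays are held at dc voltages offset by a small delta, and comparing their powers indicates on which side of the MPP the common voltage reference lies.

    def mppt_dual_array_step(v_ref, delta, p1, p2, step=2.0):
        """One update: arrays run at v_ref -/+ delta/2 with powers p1, p2."""
        if p2 > p1:
            return v_ref + step   # higher voltage gives more power: move up
        if p1 > p2:
            return v_ref - step   # lower voltage gives more power: move down
        return v_ref              # equal powers: reference straddles the MPP

    # Toy power curve with its MPP at 360 V, for illustration only.
    p = lambda v: max(0.0, 10_000 - 0.5 * (v - 360.0) ** 2)

    v_ref, delta = 300.0, 4.0
    for _ in range(50):
        v_ref = mppt_dual_array_step(v_ref, delta,
                                     p(v_ref - delta / 2), p(v_ref + delta / 2))
    print(v_ref)  # converges to 360.0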
Abstract:
A sample-scanning confocal optical microscope (SCOM) was designed and constructed in order to perform local measurements of fluorescence, light scattering and Raman scattering. This instrument makes it possible to measure time-resolved fluorescence, Raman scattering and light scattering from the same diffraction-limited spot; fluorescence from single molecules and light scattering from metallic nanoparticles can be studied. First, the electric field distribution in the focus of the SCOM was modelled. This enables the design of illumination modes for different purposes, such as the determination of the three-dimensional orientation of single chromophores. Second, a method for the calculation of the de-excitation rates of a chromophore was presented. This makes it possible to compare different detection schemes and experimental geometries in order to optimize the collection of fluorescence photons. Both methods were combined to calculate the SCOM fluorescence signal of a chromophore in a general layered system. The fluorescence excitation and emission of single molecules through a thin gold film was investigated experimentally and modelled. It was demonstrated that, owing to the mediation of surface plasmons, single-molecule fluorescence near a thin gold film can be excited and detected with an epi-illumination scheme through the film. Single-molecule fluorescence as close as 15 nm to the gold film was studied in this manner. The fluorescence dynamics (fluorescence blinking and excited-state lifetime) of single molecules was studied in the presence and in the absence of a nearby gold film in order to investigate the influence of the metal on the electronic transition rates. The trace-histogram and autocorrelation methods for the analysis of single-molecule fluorescence blinking were presented and compared via the analysis of Monte-Carlo simulated data. The nearby gold influences the total decay rate in agreement with theory. The presence of the gold did not influence the intersystem crossing (ISC) rate from the excited state to the triplet, but increased the transition rate from the triplet to the singlet ground state by a factor of 2. The photoluminescence blinking of Zn0.42Cd0.58Se QDs on glass and ITO substrates was investigated experimentally as a function of the excitation power (P) and modelled via Monte-Carlo simulations. At low P, the probability of a certain on- or off-time was observed to follow a negative power law with exponent near 1.6. As P increased, the on-time fraction decreased on both substrates whereas the off-times did not change. A weak residual memory effect between consecutive on-times and between consecutive off-times was observed, but not between an on-time and the adjacent off-time. All of this suggests the presence of two independent mechanisms governing the lifetimes of the on- and off-states. The simulated data showed Poisson-distributed off- and on-intensities, demonstrating that the observed non-Poissonian on-intensity distribution of the QDs is not a product of the underlying power-law probability and that the blinking of QDs occurs between a non-emitting off-state and a distribution of emitting on-states with different intensities. All the experimentally observed photo-induced effects could be accounted for by introducing a characteristic lifetime t_PI of the on-state in the simulations. The QDs on glass presented a t_PI proportional to P^-1, suggesting a one-photon process. Light scattering images and spectra of colloidal and C-shaped gold nanoparticles were acquired.
The minimum size of a metallic scatterer detectable with the SCOM lies around 20 nm.
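A small Monte-Carlo sketch of the blinking statistics described above (illustrative only, not the thesis analysis code), assuming on- and off-times drawn from a power law with exponent 1.6 via inverse-transform sampling:

    import numpy as np

    rng = np.random.default_rng(0)

    def power_law_times(n, alpha=1.6, t_min=1e-3):
        """Draw n waiting times with P(t) ~ t^-alpha for t >= t_min."""
        u = rng.random(n)
        return t_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

    # Alternate on/off periods and compute the on-time fraction.
    on_times = power_law_times(10_000)
    off_times = power_law_times(10_000)
    print(on_times.sum() / (on_times.sum() + off_times.sum()))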
Abstract:
A novel design based on electric field-free open microwell arrays for the automated continuous-flow sorting of single cells or small clusters of cells is presented. The main feature of the proposed device is the parallel analysis of cell-cell and cell-particle interactions in each microwell of the array. High-throughput sample recovery, with a fast and separate transfer from the microsites to standard microtiter plates, is also possible thanks to flexible printed circuit board technology, which makes it possible to produce cost-effective, large-area arrays with geometries compatible with laboratory equipment. Particle isolation is performed via negative dielectrophoretic forces, which convey the particles into the microwells. Particles such as cells and beads flow in electrically active microchannels on whose substrate the electrodes are patterned. The introduction of particles into the microwells is performed automatically, with the required feedback signal generated by a microscope-based optical counting and detection routine. In order to isolate a controlled number of particles, we created two particular configurations of the electric field within the structure: the first permits particle isolation, whereas the second creates a net force which repels the particles from the microwell entrance. To increase the parallelism at which the cell-isolation function is implemented, a new technique based on coplanar electrodes to detect particle presence was implemented. A lock-in amplifying scheme was used to monitor the impedance of the channel as perturbed by flowing particles in high-conductivity suspension media. The impedance measurement module was also combined with the dielectrophoretic focusing stage situated upstream of the measurement stage, to limit the dispersion of the measured signal amplitude due to variation of the particles' position within the microchannel. In conclusion, the designed system complies with the initial specifications, making it suitable for cellomics and biotechnology applications.
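A generic digital lock-in demodulation sketch in Python (assumed signal shapes and frequencies, not the device firmware), showing how the channel impedance signal perturbed by a transiting particle can be monitored at a reference frequency:

    import numpy as np

    def lock_in(signal, f_ref, fs):
        """Demodulate amplitude and phase of 'signal' at f_ref (sampling fs)."""
        t = np.arange(len(signal)) / fs
        x = np.mean(signal * np.cos(2 * np.pi * f_ref * t))  # in-phase
        y = np.mean(signal * np.sin(2 * np.pi * f_ref * t))  # quadrature
        return 2.0 * np.hypot(x, y), np.arctan2(y, x)

    # A 100 kHz carrier whose amplitude dips as a particle transits.
    fs, f_ref = 2e6, 1e5
    t = np.arange(200_000) / fs
    envelope = 1.0 - 0.05 * np.exp(-((t - 0.05) / 2e-3) ** 2)
    sig = envelope * np.sin(2 * np.pi * f_ref * t)
    print(lock_in(sig, f_ref, fs))

In practice the demodulation runs over a sliding window, so that the amplitude dip, and hence the particle transit, is resolved in time.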
Abstract:
The dissertation presented here deals with high-precision Penning trap mass spectrometry on short-lived radionuclides. Owing to their ability to reveal all nucleonic interactions, mass measurements far off the line of ß-stability are expected to bring new insight to the current knowledge of nuclear properties and serve to test the predictive power of mass models and formulas. In nuclear astrophysics, atomic masses are fundamental parameters for understanding the synthesis of nuclei in stellar environments. This thesis presents ten mass values of radionuclides around A = 90 interspersed in the predicted rp-process pathway. Six of them have been experimentally determined for the first time. The measurements were carried out at the Penning-trap mass spectrometer SHIPTRAP using the destructive time-of-flight ion-cyclotron-resonance (TOF-ICR) detection technique. Given the limited performance of TOF-ICR detection when investigating heavy/superheavy species with small production cross sections (σ < 1 μb), a new detection system is found to be necessary. Thus, the second part of this thesis deals with the commissioning of a cryogenic double-Penning trap system for the application of a highly sensitive, narrow-band Fourier-transform ion-cyclotron-resonance (FT-ICR) detection technique. With the non-destructive FT-ICR detection method, a single singly-charged trapped ion will provide the information required to determine its mass. First off-line tests are reported of a new detector system based on a channeltron with an attached conversion dynode; of a cryogenic pumping barrier that guarantees ultra-high vacuum conditions during mass determination; and of the detection electronics needed for the required single-ion sensitivity.
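Both TOF-ICR and FT-ICR ultimately derive the mass from the cyclotron frequency nu_c = qB / (2 pi m). A short numeric sketch in Python (the 7 T field and the frequency value are merely illustrative):

    import math

    E_CHARGE = 1.602176634e-19    # C
    U_TO_KG = 1.66053906660e-27   # kg per atomic mass unit

    def mass_from_cyclotron(nu_c_hz, charge_state, b_tesla):
        """Ion mass in u from its cyclotron frequency nu_c = qB/(2 pi m)."""
        m_kg = charge_state * E_CHARGE * b_tesla / (2.0 * math.pi * nu_c_hz)
        return m_kg / U_TO_KG

    # Illustrative: a singly charged A ~ 90 ion in a 7 T field.
    print(mass_from_cyclotron(1.196e6, 1, 7.0))  # ~ 90 u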
Abstract:
This thesis presents several data processing and compression techniques capable of addressing the strict requirements of wireless sensor networks. After a general overview of sensor networks, the energy problem is introduced, dividing the different energy reduction approaches according to the subsystem they try to optimize. To manage the complexity brought by these techniques, a quick overview of the most common middlewares for WSNs is given, describing in detail SPINE2, a framework for data processing in the node environment. The focus then shifts to in-network aggregation techniques, used to reduce the data sent by the network nodes and thus prolong the network lifetime as long as possible. Among the several techniques, the most promising approach is Compressive Sensing (CS). To investigate this technique, a practical implementation of the algorithm is compared against a simpler aggregation scheme, deriving a mixed algorithm able to successfully reduce the power consumption. The analysis then moves from compression implemented on single nodes to CS for signal ensembles, trying to exploit the correlations among sensors and nodes to improve compression and reconstruction quality. The two main techniques for signal ensembles, Distributed CS (DCS) and Kronecker CS (KCS), are introduced and compared against a common set of data gathered by real deployments, and the best trade-off between reconstruction quality and power consumption is investigated. The usage of CS is also addressed when the signal of interest is sampled at a sub-Nyquist rate, evaluating the reconstruction performance. Finally, group sparsity CS (GS-CS) is compared to another well-known technique for the reconstruction of signals from a highly sub-sampled version. These two frameworks are again compared against a real data-set, and an insightful analysis of the trade-off between reconstruction quality and lifetime is given.
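As a minimal illustration of the CS principle used throughout the thesis (a generic basis-pursuit recovery in Python with cvxpy; not the thesis implementation): a node transmits m random projections of an n-sample sparse signal, and the sink reconstructs it by L1 minimization.

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(1)

    n, m, k = 256, 64, 8                    # signal length, measurements, sparsity
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

    Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # sensing matrix at the node
    y = Phi @ x_true                                 # compressed measurements

    # Reconstruction at the sink: min ||x||_1  s.t.  Phi x = y.
    x = cp.Variable(n)
    cp.Problem(cp.Minimize(cp.norm1(x)), [Phi @ x == y]).solve()
    print(np.max(np.abs(x.value - x_true)))          # near zero on success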
Abstract:
This thesis is concerned with the adsorption and detachment of polymers at planar, rigid surfaces. We have carried out a systematic investigation of polymer adsorption using analytical techniques as well as Monte Carlo simulations with a coarse-grained off-lattice bead-spring model. The investigation was carried out in three stages. In the first stage, the adsorption of a single multiblock AB copolymer on a solid surface was investigated by means of simulations and scaling analysis. It was shown that the problem could be mapped onto an effective homopolymer problem. Our main result was the phase diagram of regular multiblock copolymers, which shows an increase in the critical adsorption potential of the substrate with decreasing block size. We also considered the adsorption of random copolymers, which was found to be well described within the annealed disorder approximation. In the next phase, we studied the adsorption kinetics of a single polymer on a flat, structureless surface in the regime of strong physisorption. The idea of a 'stem-flower' polymer conformation and the mechanism of 'zipping' during the adsorption process were used to derive a Fokker-Planck equation with reflecting boundary conditions for the time-dependent probability distribution function (PDF) of the number of adsorbed monomers. The numerical solution of the time-dependent PDF, obtained from a discrete set of coupled differential equations, was shown to be in perfect agreement with Monte Carlo simulation results. Finally, we studied force-induced desorption of a polymer chain adsorbed on an attractive surface. We approached the problem within the framework of two different statistical ensembles: (i) by keeping the pulling force fixed while measuring the position of the polymer chain end, and (ii) by measuring the force necessary to keep the chain end at a fixed distance above the adsorbing plane. In the first case we treated the problem within the Grand Canonical Ensemble approach and derived analytic expressions for the various conformational building blocks characterizing the structure of an adsorbed linear polymer chain subject to a pulling force of fixed strength. The main result was the phase diagram of a polymer chain under pulling. We demonstrated a novel first-order phase transformation which is dichotomic, i.e., phase coexistence is not possible. In the second case, we carried out our study in the 'fixed height' statistical ensemble, where one measures the fluctuating force exerted by the chain on the last monomer when the chain end is kept fixed at height h above the solid plane, at different adsorption strengths ε. The phase diagram in the h-ε plane was calculated both analytically and by Monte Carlo simulations. We demonstrated that in the vicinity of the polymer desorption transition a number of properties, such as fluctuations and probability distributions of various quantities, behave differently if h, rather than the force f, is used as the independent control parameter.
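The 'discrete set of coupled differential equations' for the adsorption PDF can be sketched as a one-step master equation with zipping rate w+ and unzipping rate w- and reflecting boundaries (hypothetical rates and chain length; a simplification of the Fokker-Planck treatment in the text):

    import numpy as np
    from scipy.integrate import solve_ivp

    N = 100                       # chain length (hypothetical)
    w_plus, w_minus = 1.0, 0.3    # zipping / unzipping rates (hypothetical)

    def master_eq(t, P):
        """dP_n/dt for the number n of adsorbed monomers, n = 0..N."""
        dP = np.zeros_like(P)
        for n in range(N + 1):
            gain = (w_plus * P[n - 1] if n > 0 else 0.0) \
                 + (w_minus * P[n + 1] if n < N else 0.0)
            loss = (w_plus if n < N else 0.0) + (w_minus if n > 0 else 0.0)
            dP[n] = gain - loss * P[n]
        return dP

    P0 = np.zeros(N + 1)
    P0[0] = 1.0                   # start fully desorbed
    sol = solve_ivp(master_eq, (0.0, 50.0), P0, t_eval=[50.0])
    print(sol.y[:, -1].argmax())  # most probable number of adsorbed monomers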
Abstract:
The objective of this thesis is the analysis of power transients concerning experimental devices placed within the reflector of the Jules Horowitz Reactor (JHR). Since the JHR material testing facility is designed to achieve 100 MW of core thermal power, its large reflector hosts fissile material samples that are irradiated up to a total power of 3 MW. MADISON devices are expected to attain 130 kW, whereas the ADELINE nominal power is about 60 kW. In addition, MOLFI test samples are envisaged to reach 360 kW in the LEU configuration and up to 650 kW in the HEU one. Safety issues concern shutdown transients and require particular verification of how the thermal power of these fissile samples decreases with respect to core kinetics, especially as regards single-device reactivity determination. A calculation model is conceived and applied in order to properly account for the different nuclear heating processes and the time-dependent features of the device transients. An innovative methodology is carried out in which the flux shape modification during control rod insertions is investigated with regard to its impact on device power, through core-reflector coupling coefficients; previous methods, which considered only nominal core-reflector parameters, are thereby improved. Moreover, the effect of delayed emissions is evaluated, namely the spatial impact on the devices of a diffuse in-core delayed neutron source. Delayed gamma transport related to fission product concentrations is taken into account through evolution calculations of different fuel compositions in an equilibrium cycle. Provided accurate device reactivity control, power transients are then computed for every sample according to the envisaged shutdown procedures. The results obtained in this study are aimed at design feedback and reactor management optimization by the JHR project team. Moreover, the Safety Report is intended to use the present analysis for improved device characterization.
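For orientation, the core-kinetics side of such a shutdown transient can be sketched with a one-delayed-group point-kinetics model in Python (hypothetical parameters; the coupled core-reflector treatment developed in the thesis is considerably more elaborate):

    import numpy as np
    from scipy.integrate import solve_ivp

    beta, lam, Lambda = 0.0065, 0.08, 2.5e-5   # delayed fraction, decay const., generation time
    rho = -0.05                                # shutdown reactivity step (assumed)

    def kinetics(t, y):
        p, c = y                               # relative power, precursor conc.
        dp = ((rho - beta) / Lambda) * p + lam * c
        dc = (beta / Lambda) * p - lam * c
        return [dp, dc]

    y0 = [1.0, beta / (lam * Lambda)]          # equilibrium at nominal power
    sol = solve_ivp(kinetics, (0.0, 60.0), y0, t_eval=[1.0, 10.0, 60.0],
                    method="LSODA")
    print(sol.y[0])                            # relative power after scram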
Abstract:
Theoretical models are developed for the continuous-wave and pulsed laser incision and cutting of thin single- and multi-layer films. A one-dimensional steady-state model establishes the theoretical foundations of the problem by combining a power-balance integral with heat flow in the direction of laser motion. In this approach, classical modelling methods for laser processing are extended by introducing multi-layer optical absorption and thermal properties. The calculation domain is consequently divided in correspondence with the progressive removal of individual layers. A second, time-domain numerical model for the short-pulse laser ablation of metals accounts for changes in optical and thermal properties during a single laser pulse. With sufficient fluence, the target surface is heated towards its critical temperature and homogeneous boiling or "phase explosion" takes place. Improvements over previous works are obtained through more accurate calculation of optical absorption and of the shielding of the incident beam by the ablation products. A third, general time-domain numerical laser processing model combines ablation depth and energy absorption data from the short-pulse model with two-dimensional heat flow in an arbitrary multi-layer structure. Layer removal is the result of both progressive short-pulse ablation and classical vaporisation due to long-term heating of the sample. At low velocity, pulsed laser exposure of multi-layer films comprising aluminium-plastic and aluminium-paper is found to be characterised by short-pulse ablation of the metallic layer and vaporisation or degradation of the others due to thermal conduction from the former. At high velocity, all layers of the two films are ultimately removed by vaporisation or degradation as the average beam power is increased to achieve a complete cut. The transition velocity between the two characteristic removal types is shown to be a function of the pulse repetition rate. An experimental investigation validates the simulation results and provides new laser processing data for some typical packaging materials.
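The one-dimensional steady-state power balance can be illustrated by equating the absorbed beam power to the enthalpy flux of removed material and solving for the cutting speed (a simplified single-layer form that neglects conduction losses; all material values below are assumed):

    def max_cut_speed(power_w, absorptivity, width_m, thickness_m,
                      rho, c_p, t_vap, t_amb, latent_j_per_kg):
        """Solve eta*P = rho*v*w*d*(c_p*(T_vap - T_amb) + L) for v."""
        energy_per_kg = c_p * (t_vap - t_amb) + latent_j_per_kg
        return absorptivity * power_w / (rho * width_m * thickness_m * energy_per_kg)

    # Illustrative numbers for a thin aluminium layer (assumed values).
    v = max_cut_speed(power_w=50.0, absorptivity=0.1, width_m=30e-6,
                      thickness_m=10e-6, rho=2700.0, c_p=900.0,
                      t_vap=2743.0, t_amb=293.0, latent_j_per_kg=1.07e7)
    print(v, "m/s")   # ~0.5 m/s with these assumptions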
Abstract:
The growing international concern for human exposure to magnetic fields generated by electric power lines has unavoidably led to the imposition of legal limits. Respecting these limits implies being able to calculate the generated magnetic field easily and accurately, also in complex configurations; the twisting of phase conductors is such a case. The consolidated exact and approximated theory regarding a single-circuit twisted three-phase power cable line is reported, along with the proposal of an innovative simplified formula obtained by means of a heuristic procedure. This formula, although dramatically simpler, is shown to be a good approximation of the analytical formula and at the same time much more accurate than the approximated formula found in the literature. The double-circuit twisted three-phase power cable line case has been studied following different approaches of increasing complexity and accuracy, and in this framework the effectiveness of the above-mentioned innovative formula is also examined. Experimental verification of the correctness of the twisted double-circuit theoretical analysis has permitted its extension to multiple-circuit twisted three-phase power cable lines. In addition, appropriate 2D and, in particular, 3D numerical codes have been created for simulating real existing overhead power lines and calculating the magnetic field in their vicinity. Finally, an innovative 'smart' measurement and evaluation system for the magnetic field is proposed, described and validated; it provides an experimentally-based evaluation of the total magnetic field B generated by multiple sources in complex three-dimensional arrangements, carried out on the basis of the measurement of the three Cartesian field components and their correlation with the source currents via multilinear regression techniques. The ultimate goal is to verify that the magnetic induction intensity is within the prescribed limits.
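The multilinear-regression step of the proposed evaluation system can be sketched generically in Python (hypothetical data shapes and coefficients): each Cartesian field component is fitted as a linear combination of the source currents, and the fitted model then yields the total field B to compare against the limits.

    import numpy as np

    def fit_field_model(currents, b_measured):
        """Least-squares fit of each field component as a linear combination
        of the source currents plus a constant background."""
        X = np.column_stack([currents, np.ones(len(currents))])
        coeffs, *_ = np.linalg.lstsq(X, b_measured, rcond=None)
        return coeffs                    # shape: (n_sources + 1, 3)

    rng = np.random.default_rng(2)
    I = rng.normal(100.0, 10.0, size=(200, 3))      # source currents, A
    true_k = rng.normal(0.0, 1e-7, size=(3, 3))     # T per A (hypothetical)
    B = I @ true_k + rng.normal(0.0, 1e-9, size=(200, 3))

    k_hat = fit_field_model(I, B)
    B_total = np.linalg.norm(I @ k_hat[:-1] + k_hat[-1], axis=1)
    print(B_total.max())                 # peak |B| against the prescribed limit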
Abstract:
We have realized a data acquisition chain for the use and characterization of APSEL4D, a 32 x 128 monolithic active pixel sensor developed as a prototype for frontier experiments in high-energy particle physics. In particular, a transition board was realized for the conversion between the chip and FPGA voltage levels and for enhancing signal quality. A Xilinx Spartan-3 FPGA was used for real-time data processing, chip control and communication with a personal computer through a USB 2.0 port; for this purpose, firmware was developed in VHDL. Finally, a graphical user interface for online system monitoring, hit display and chip control, based on windows and widgets, was written in C++ using the dedicated Qt and Qwt libraries. APSEL4D and the full acquisition chain were characterized for the first time with the electron beam of a transmission electron microscope and with 55Fe and 90Sr radioactive sources. In addition, a beam test was performed at the T9 station of the CERN PS, where hadrons with a momentum of 12 GeV/c are available. The very high time resolution of APSEL4D (up to 2.5 Mfps, but used at 6 kfps) was fundamental in realizing a single-electron Young experiment using nanometric double slits fabricated by a FIB technique. On high-statistics samples, it was possible to observe the interference and diffraction of single, isolated electrons traveling inside a transmission electron microscope. For the first time, information on the distribution of the arrival times of the single electrons has been extracted.
Abstract:
Plasmons are the collective resonant excitation of conduction electrons. Plasmons excited by light in sub-wavelength nanoparticles are called particle plasmons; they are promising candidates for future microsensors because of the strong dependence of the resonance on externally controllable parameters, such as the optical properties of the surrounding medium and the electric charge of the nanoparticles. The extremely high scattering efficiency of particle plasmons allows individual nanoparticles to be observed easily in a microscope. The need to collect a statistically relevant number of data points quickly, and the growing importance of plasmonic (above all gold) nanoparticles for applications in medicine, have pushed for the development of automated microscopes able to measure in the spectral window of biological tissue (the biological window) from 650 to 900 nm, which until now had been only partially covered. In this work I present the Plasmoscope, which was designed precisely with these requirements in mind, by (1) placing an adjustable slit at the entrance aperture of the spectrometer, which coincides with the image plane of the microscope, and (2) using a piezo scanning stage that makes it possible to raster-scan the sample across this narrow slit. This implementation avoids optical elements that absorb in the near infrared. With the Plasmoscope I investigate the plasmonic sensitivity of gold and silver nanorods, i.e., the plasmon resonance shift as a function of changes in the surrounding medium. The sensitivity is the measure of how well nanoparticles can detect material changes in their surroundings, so it is immensely important to know which parameters influence it. I show here that silver nanorods possess a higher sensitivity than gold nanorods within the biological window and, moreover, that the sensitivity grows with the thickness of the rods. I present a theoretical discussion of the sensitivity, identify the material parameters that influence it, and derive the corresponding formulas. In a further step, I present experimental data supporting the theoretical finding that, for sensitivity measurement schemes that also take the linewidth into account, gold nanorods with an aspect ratio of 3 to 4 give the best result. Reliable sensors must exhibit robust repeatability, which I investigate with gold and silver nanorods. The plasmon resonance wavelength depends on the following intrinsic material parameters: electron density, background polarizability and relaxation time. Based on my experimental results, I show that nanorods made of a copper-gold alloy have a red-shifted resonance compared with similarly shaped gold nanorods, and how the linewidth varies with the stoichiometric composition of the alloyed nanoparticles. The dependence of the linewidth on the material composition is also investigated using silver-coated and uncoated gold nanorods. Semiconductor nanoparticles are candidates for efficient photovoltaic devices. The energy conversion requires a charge separation, which is measured experimentally with the Plasmoscope by following the light-induced growth dynamics of gold spheres on semiconductor nanorods in a gold ion solution through measurement of the scattered intensity.