951 results for WIDE-RANGE CURRENT MEASUREMENT


Relevance:

100.00%

Publisher:

Abstract:

Papillary thyroid cancer (PTC) is the most common histotype of thyroid cancer. A fraction of PTC cases (about 5%) are unresponsive to conventional treatment and refractory to radioiodine therapy. The current prognostic factors for aggressiveness are mainly based on tumor size, the presence of lymph node metastasis, extrathyroidal invasion and, more recently, the presence of the BRAF T1799A mutation. MicroRNAs (miRNAs) have been described as promising molecular markers for cancer, as their deregulation is observed in a wide range of tumors. Recent studies indicate that over-expression of miR-146b-5p is associated with aggressiveness and with the BRAF T1799A mutation. Furthermore, down-regulation of let-7f is observed in several types of tumors, including PTC. In this study, we evaluated miR-146b-5p and let-7f status in a young male patient with aggressive, BRAF T1799A-positive papillary thyroid carcinoma, with extensive lymph node metastases and short-term recurrence. The analysis of miR-146b-5p and let-7f expression revealed a pattern distinct from that of a cohort of PTC patients, suggesting caution in evaluating miRNA expression data as molecular markers of PTC diagnosis and prognosis. Arq Bras Endocrinol Metab. 2012;56(8):552-7
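miRNA expression comparisons of this kind are typically reported as fold changes from qRT-PCR data. As a minimal illustration (the Ct values below are hypothetical, not data from the study), the standard 2^(-ddCt) method looks like this:

```python
# Hypothetical sketch of relative miRNA quantification by the 2^(-ddCt)
# method. Ct values are illustrative only, not data from this study.

def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Return 2^(-ddCt): expression of a target miRNA in a sample relative
    to a control, normalized to a reference gene."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# e.g. a lower Ct in tumor than in control tissue indicates over-expression:
print(fold_change(22.0, 18.0, 26.0, 18.0))  # 16.0, i.e. 16-fold up
```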


The deformation of a ring under axial compression is analyzed in order to estimate a favorable ring-specimen geometry capable of limiting the influence of friction on the stress-strain curve obtained from split Hopkinson pressure bar (SHPB) tests. The analysis shows that a ring specimen with a large inner diameter and a small radial thickness offers advantages compared with the traditional disk sample. In particular, it can improve the reliability of test results for ductile materials in the presence of friction. Based on the deformation analysis of a ductile ring under compression, a correction coefficient is proposed to relate the actual material stress-strain curve to the reading from the SHPB. Finite element simulation shows that the proposed correction can be used for a wide range of conventional ductile materials. Experimental results with steel alloys indicate that the correction procedure is an effective technique for accurate measurement of the dynamic material strength response. (C) 2012 Elsevier Ltd. All rights reserved.
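The abstract does not give the form of the proposed coefficient, but the role such a correction plays can be sketched with the classical Siebel-type friction correction for axial compression, in which the friction term scales with the specimen's lateral dimension over its height; this is why a small radial thickness limits the friction contribution:

```python
# Illustrative sketch (NOT the paper's coefficient): a Siebel-type friction
# correction, sigma_true = sigma_meas / (1 + mu * d / (3 * h)).
# For a thin-walled ring the radial thickness t plays the role of d,
# so a small t keeps the correction small.

def corrected_stress(sigma_measured, mu, thickness, height):
    """Remove the apparent strengthening caused by friction coefficient mu."""
    return sigma_measured / (1.0 + mu * thickness / (3.0 * height))

# Thin ring (t = 2 mm) vs. solid disk (d = 10 mm), mu = 0.2, h = 5 mm:
print(corrected_stress(500.0, 0.2, 2.0, 5.0))   # small correction
print(corrected_stress(500.0, 0.2, 10.0, 5.0))  # larger correction
```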


A small supernumerary marker chromosome (sSMC) derived from chromosome 22 is a relatively common cytogenetic finding. This sSMC typically results in tetrasomy for a chromosomal region that spans the chromosome 22p arm and the proximal 2 Mb of 22q11.21. Using classical cytogenetics, fluorescence in situ hybridization, multiplex ligation-dependent probe amplification, and array techniques, 7 patients with sSMCs derived from chromosome 22 were studied: 4 unrelated and 3 from the same family (mother, daughter, and son). The sSMCs in all patients were dicentric, bisatellited chromosomes with breakpoints in the chromosome 22 low-copy repeat A region, resulting in cat eye syndrome (CES) due to partial tetrasomy 22pter->q11.2 including the cat eye chromosome region. Although all subjects presented the same chromosomal abnormality, they showed a wide range of phenotypic differences, even among the 3 patients from the same family. There are no previous reports of CES occurring in 3 patients of the same family. Thus, the clinical and follow-up data presented here contribute to a better delineation of the phenotypes and outcomes of CES patients and will be useful for genetic counseling. Copyright (C) 2012 S. Karger AG, Basel


Current SoC design trends are characterized by the integration of a growing number of IPs targeting a wide range of application fields. Such multi-application systems are constrained by a set of requirements, and in this scenario networks-on-chip (NoCs) are becoming more important as the on-chip communication structure. Designing an optimal NoC satisfying the requirements of each individual application requires the specification of a large set of configuration parameters, leading to a wide solution space. It has been shown that IP mapping is one of the most critical parameters in NoC design, strongly influencing SoC performance. IP mapping has been solved for single-application systems using single- and multi-objective optimization algorithms. In this paper we propose the use of a multi-objective adaptive immune algorithm (M²AIA), an evolutionary approach, to solve the multi-application NoC mapping problem. Latency and power consumption were adopted as the target objective functions. To assess the efficiency of our approach, our results are compared with those of genetic and branch-and-bound multi-objective mapping algorithms. We tested 11 well-known benchmarks, including random and real applications, combining up to 8 applications on the same SoC. The experimental results show that M²AIA reduces power consumption and latency by, on average, 27.3% and 42.1% compared to the branch-and-bound approach and by 29.3% and 36.1% compared to the genetic approach.
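The multi-objective core shared by all three compared algorithms is the notion of Pareto dominance over (latency, power) pairs. A minimal sketch (the objective values below are hypothetical scores for candidate IP-to-tile mappings, not benchmark data):

```python
# Sketch of Pareto dominance for NoC mapping candidates scored on
# (latency, power), both minimized. Values are illustrative.

def dominates(a, b):
    """True if solution a is no worse than b in every objective
    and strictly better in at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated (latency, power) points."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other != s)]

mappings = [(12.0, 3.1), (10.5, 3.6), (12.0, 3.0), (14.0, 2.8)]  # (latency, power)
print(pareto_front(mappings))  # (12.0, 3.1) is dominated by (12.0, 3.0)
```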


Euterpe edulis is an endangered species due to overharvesting of palm heart, the most important non-timber forest product of the Brazilian Atlantic Forest, and fruit exploitation has been introduced as a low-impact alternative. However, E. edulis is a keystone species for frugivorous birds, and even the impact of fruit exploitation needs to be better investigated. Since this species occurs over contrasting habitats, the establishment of site-specific standards and limits for exploitation may also be essential to achieve truly sustainable management. In this context, we sought to investigate how soil chemical composition potentially affects E. edulis (Arecaceae) palm heart and fruit exploitation under current management standards. We studied natural populations found in Restinga Forest and Atlantic Rainforest remnants within Natural Reserves of Sao Paulo State, SE Brazil, where 10.24-ha permanent plots, composed of a grid of 256 subplots (20 m x 20 m), were located. In each subplot, we evaluated soil chemical composition and the diameter at breast height of E. edulis individuals. Additionally, we evaluated fruit yield in 2008 and 2009 in 20 individuals per year. The Atlantic Rainforest population had a much higher proportion of larger-diameter individuals than the Restinga Forest population, as a result of habitat-mediated effects, especially those related to soil. Sodium and potassium concentrations in Restinga Forest soils, which have strong negative and positive effects on palm growth, respectively, played a key role in determining those differences. Overall, the number of fruits that could be exploited in the Atlantic Rainforest was four times higher than in the Restinga Forest.
If current rules for palm heart and fruit harvesting were followed without any restriction by habitat, Restinga Forest populations would be under severe threat, as this study shows that they are not suitable for sustainable management of either fruits or palm heart. Hence, a habitat-specific approach to sustainable management is needed for this species, in order to respect the demographic and ecological dynamics of each population to be managed. These findings suggest that any effort to create general management standards for low-impact harvesting may be unsuccessful if the species of interest occurs over a wide range of ecosystems. (C) 2012 Elsevier B.V. All rights reserved.


Spin systems in the presence of disorder are described by two sets of degrees of freedom, associated with orientational (spin) and disorder variables, which may be characterized by two distinct relaxation times. Disordered spin models have been mostly investigated in the quenched regime, which is the usual situation in solid state physics, and in which the relaxation time of the disorder variables is much larger than the typical measurement times. In this quenched regime, disorder variables are fixed, and only the orientational variables are duly thermalized. Recent studies in the context of lattice statistical models for the phase diagrams of nematic liquid-crystalline systems have stimulated interest in going beyond the quenched regime. The phase diagrams predicted by these calculations for a simple Maier-Saupe model turn out to be qualitatively different from the quenched case if the two sets of degrees of freedom are allowed to reach thermal equilibrium during the experimental time, which is known as the fully annealed regime. In this work, we develop a transfer matrix formalism to investigate annealed disordered Ising models on two hierarchical structures, the diamond hierarchical lattice (DHL) and the Apollonian network (AN). The calculations follow the same steps used for the analysis of simple uniform systems, which amounts to deriving proper recurrence maps for the thermodynamic and magnetic variables in terms of the generations of the construction of the hierarchical structures. In this context, we may consider different kinds of disorder, and different types of ferromagnetic and anti-ferromagnetic interactions. In the present work, we analyze the effects of dilution, which are produced by the removal of some magnetic ions. The system is treated in a "grand canonical" ensemble.
The introduction of two extra fields, related to the concentration of two different types of particles, leads to higher-rank transfer matrices as compared with the formalism for the usual uniform models. Preliminary calculations on a DHL indicate that there is a phase transition for a wide range of dilution concentrations. Ising spin systems on the AN are known to be ferromagnetically ordered at all temperatures; in the presence of dilution, however, there are indications of a disordered (paramagnetic) phase at low concentrations of magnetic ions.
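The recurrence-map approach can be illustrated in the simplest, undiluted limit of these models: for the uniform Ising model on the DHL, one renormalization step (two bonds in series, doubled in parallel) gives, with t = tanh(J/kT), the well-known map t' = 2t²/(1 + t⁴), whose nontrivial fixed point separates the paramagnetic and ferromagnetic phases. A sketch (dilution would add the extra fields and higher-rank matrices described above, which are omitted here):

```python
# Recurrence map for the uniform Ising model on the diamond hierarchical
# lattice (undiluted limit): t' = 2 t**2 / (1 + t**4), with t = tanh(J/kT).

def step(t):
    return 2.0 * t**2 / (1.0 + t**4)

def fixed_point(lo=0.1, hi=0.9, iters=60):
    # f(t) = step(t) - t is negative at lo and positive at hi;
    # bisect for the sign change at the nontrivial fixed point t*.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if step(mid) - mid > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

t_star = fixed_point()
print(round(t_star, 4))  # 0.5437, root of t**3 + t**2 + t = 1
```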


Advances in wireless networking and content delivery systems are enabling new challenging provisioning scenarios where a growing number of users access multimedia services, e.g., audio/video streaming, while moving among different points of attachment to the Internet, possibly with different connectivity technologies, e.g., Wi-Fi, Bluetooth, and cellular 3G. This calls for novel middleware capable of dynamically personalizing service provisioning to the characteristics of client environments, in particular to discontinuities in wireless resource availability due to handoffs. This dissertation proposes a novel middleware solution, called MUM, that performs effective and context-aware handoff management to transparently avoid service interruptions during both horizontal and vertical handoffs. To achieve this goal, MUM exploits full visibility of the wireless connections available in client localities and their handoff implementations (handoff awareness), of service quality requirements and handoff-related quality degradations (QoS awareness), and of the network topology and resources available in current/future localities (location awareness). The design and implementation of all main MUM components, along with extensive field trials of the realized middleware architecture, confirmed the validity of the proposed fully context-aware handoff management approach. In particular, the reported experimental results demonstrate that MUM can effectively maintain service continuity for a wide range of different multimedia services by exploiting handoff prediction mechanisms, adaptive buffering and pre-fetching techniques, and proactive re-addressing/re-binding.
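The buffering idea behind handoff-transparent streaming can be sketched in a few lines: pre-fetch enough media before a predicted handoff that playback survives the connectivity gap. The function name, parameters, and numbers below are illustrative assumptions, not MUM's actual API:

```python
# Hypothetical sketch of pre-fetch sizing for a predicted handoff gap.
# Names and values are illustrative, not part of the MUM middleware.

def prefetch_bytes(bitrate_kbps, predicted_gap_s, safety_margin_s=1.0):
    """Bytes to buffer ahead of a predicted handoff lasting predicted_gap_s
    seconds, plus a safety margin for prediction error."""
    return int(bitrate_kbps * 1000 / 8 * (predicted_gap_s + safety_margin_s))

# 512 kbit/s stream, 2 s predicted vertical-handoff gap:
print(prefetch_bytes(512, 2.0))  # 192000 bytes
```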


The experimental study of the relation established independently by Gerasimov, Drell, and Hearn in 1966, known as the GDH sum rule, requires the measurement of total photoabsorption cross sections of circularly polarized photons on longitudinally polarized nucleons over a wide energy range. The measurement performed at the Mainz Microtron in the summer of 1998 was the first such experiment with real photons to measure the GDH integral on the proton. The use of a frozen-spin butanol target, employed to reach the highest possible degree of proton polarization, entails the additional experimental difficulty that the carbon nuclei contained in the butanol target also produce reaction products, which are detected together with those produced on the proton. The aim of this work was the determination of cross sections on the free proton from measurements on a complex target (CH2), as is the case for the polarized target. The pilot experiments carried out for this purpose served both to develop methods for reaction identification and to calibrate the detector system. By reproducing the already known and measured unpolarized differential and total single-pion cross sections on the proton (gamma p -> p pi0 and gamma p -> n pi+), which make up the main contribution to the GDH integral up to a photon energy of about 400 MeV, it could be shown that a separation of hydrogen from carbon events is possible. The techniques required for this were developed in the course of this work into a generally usable tool. Furthermore, it could be shown that the fraction of reactions originating from carbon has no helicity dependence. Under this condition, the determination of the helicity-dependent cross-section difference reduces to a simple subtraction.
From the results of the intensive analysis of data taken with an unpolarized target, first results for the measurements taken with the polarized frozen-spin target could thus be delivered quickly. These first results for polarized differential and total (gamma N) cross sections in the Delta region turn out to be in good agreement with theoretical analyses.
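For reference, the relation under study is the GDH sum rule in its standard form (quoted here for context, not taken from this thesis), which connects the integrated helicity-dependent cross-section difference to static nucleon properties:

```latex
\int_{\nu_0}^{\infty} \frac{\sigma_{3/2}(\nu) - \sigma_{1/2}(\nu)}{\nu}\, \mathrm{d}\nu
  = \frac{2\pi^2 \alpha\, \kappa^2}{m^2},
```

where nu is the photon energy, nu_0 the pion-production threshold, sigma_{3/2} and sigma_{1/2} the total photoabsorption cross sections for photon and nucleon spins parallel and antiparallel, alpha the fine-structure constant, kappa the anomalous magnetic moment, and m the nucleon mass.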


The term Ambient Intelligence (AmI) refers to a vision of the future of the information society in which smart electronic environments are sensitive and responsive to the presence of people and their activities (context awareness). In an ambient intelligence world, devices work in concert to support people in carrying out their everyday activities, tasks, and rituals in an easy, natural way, using information and intelligence hidden in the network connecting these devices. This promotes the creation of pervasive environments that improve the quality of life of the occupants and enhance the human experience. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication, and natural interfaces. Ambient intelligent systems are heterogeneous and require excellent cooperation between several hardware/software technologies and disciplines, including signal processing, networking and protocols, embedded systems, information management, and distributed algorithms. Since a large number of fixed and mobile embedded sensors are deployed in the environment, Wireless Sensor Networks (WSNs) are one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes that can be deployed in a target area to sense physical phenomena and communicate with other nodes and base stations. These simple devices typically embed a low-power computational unit (microcontroller, FPGA, etc.), a wireless communication unit, one or more sensors, and some form of energy supply (either batteries or energy-scavenging modules). WSNs promise to revolutionize the interactions between the real physical world and human beings. Low cost, low computational power, low energy consumption, and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. To fully exploit the potential of distributed sensing approaches, a set of challenges must be addressed.
Sensor nodes are inherently resource-constrained systems with very low power consumption and small size requirements, which enables them to reduce the interference on the sensed physical phenomena and allows easy, low-cost deployment. They have limited processing speed, storage capacity, and communication bandwidth, which must be used efficiently to increase the degree of local "understanding" of the observed phenomena. A particular case of sensor nodes are video sensors. This topic holds strong interest for a wide range of contexts such as military, security, robotics, and, most recently, consumer applications. Vision sensors are extremely effective for medium- to long-range sensing because vision provides rich information to human operators. However, image sensors generate a huge amount of data, which must be heavily processed before transmission due to the scarce bandwidth of radio interfaces. In particular, in video surveillance it has been shown that source-side compression is mandatory due to limited bandwidth and delay constraints. Moreover, there is ample opportunity for performing higher-level processing functions, such as object recognition, which has the potential to drastically reduce the required bandwidth (e.g. by transmitting compressed images only when something 'interesting' is detected). The energy cost of image processing must, however, be carefully minimized. Imaging plays an important role in sensing devices for ambient intelligence: computer vision can, for instance, be used for recognizing persons and objects and for recognizing behaviour such as illness or rioting. Having a wireless camera as a camera mote opens the way for distributed scene analysis. Multiple cameras see more than one, and a camera system that can observe a scene from multiple directions can overcome occlusion problems and describe objects in their true 3D appearance. In real time, these approaches are a recently opened field of research.
In this thesis we pay attention to the realities of hardware/software technologies and to the design needed to realize systems for distributed monitoring, attempting to propose solutions to open issues and to fill the gap between AmI scenarios and hardware reality. The physical implementation of an individual wireless node is constrained by three important metrics, outlined below. Although the design of a sensor network and its sensor nodes is strictly application dependent, a number of constraints should almost always be considered. Among them: • Small form factor, to reduce node intrusiveness. • Low power consumption, to reduce battery size and extend node lifetime. • Low cost, for widespread diffusion. These limitations typically result in the adoption of low-power, low-cost devices such as low-power microcontrollers with a few kilobytes of RAM and tens of kilobytes of program memory, on which only simple data-processing algorithms can be implemented. However, the overall computational power of the WSN can be very large, since the network presents a high degree of parallelism that can be exploited through ad-hoc techniques. Furthermore, through the fusion of information from the dense mesh of sensors, even complex phenomena can be monitored. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: low-power video sensor nodes and video processing algorithms, and multimodal surveillance. Low-power video sensor nodes and video processing algorithms: in comparison to scalar sensors, such as temperature, pressure, humidity, velocity, and acceleration sensors, vision sensors generate much higher-bandwidth data due to the two-dimensional nature of their pixel array. We have tackled all the constraints listed above and have proposed solutions to overcome the current WSN limits for video sensor nodes.
We have designed and developed wireless video sensor nodes focusing on small size and flexibility of reuse in different applications. The video nodes target a different design point: portability (on-board power supply, wireless communication) and a scanty power budget (500 mW), while still providing a prominent level of intelligence, namely sophisticated classification algorithms and a high level of reconfigurability. We developed two different video sensor nodes: the device architecture of the first is based on a low-cost, low-power FPGA + microcontroller system-on-chip; the second is based on an ARM9 processor. Both systems, designed within the above-mentioned power envelope, can operate continuously with a Li-polymer battery pack and a solar panel. Novel low-power, low-cost video sensor nodes are presented which, in contrast to sensors that just watch the world, are capable of comprehending the perceived information in order to interpret it locally. Featuring such intelligence, these nodes are able to cope with tasks such as recognition of unattended bags in airports or of persons carrying potentially dangerous objects, which normally require a human operator. Vision algorithms for object detection and acquisition, such as human detection with Support Vector Machine (SVM) classification and abandoned/removed object detection, are implemented, described, and illustrated on real-world data. Multimodal surveillance: in several setups the use of wired video cameras may not be possible, so building an energy-efficient wireless vision network for monitoring and surveillance is one of the major efforts in the sensor network community.
Pyroelectric Infra-Red (PIR) sensors have been used to extend the lifetime of a solar-powered video sensor node by providing an energy-level-dependent trigger to the video camera and the wireless module. This approach has been shown to extend node lifetime and can potentially allow continuous operation of the node. Being low-cost, passive (and thus low-power), and of limited form factor, PIR sensors are well suited for WSN applications. Moreover, aggressive power-management policies are essential for achieving long-term operation of standalone distributed cameras. We have used an adaptive controller based on Model Predictive Control (MPC) to improve system performance, outperforming naive power-management policies.
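The core of such an energy-level-dependent trigger is a simple policy: the camera and radio stay off until the PIR fires, and below a critical energy level the node stays asleep regardless. A minimal sketch, with hypothetical threshold values (the actual policy in the thesis is an MPC controller, not this rule):

```python
# Hypothetical sketch of a PIR-gated, energy-aware wake-up policy.
# The 20% threshold is illustrative, not the thesis's tuned value.

def camera_should_wake(pir_triggered, energy_level):
    """energy_level in [0, 1]. Wake the camera only on a PIR event,
    and never below the critical energy reserve, to guarantee
    long-term operation of the solar-powered node."""
    if energy_level < 0.2:
        return False
    return pir_triggered

print(camera_should_wake(True, 0.8))   # True: motion, enough energy
print(camera_should_wake(True, 0.1))   # False: preserve the battery
print(camera_should_wake(False, 0.9))  # False: nothing to observe
```

An MPC-style controller generalizes this by predicting future solar income and event rates, then choosing duty cycles that maximize coverage subject to the energy budget.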


Metallic objects on the order of the optical wavelength exhibit resonances in the optical spectral range. Using a combination of colloid lithography, metal film evaporation, and reactive ion beam etching, crescent-shaped nanostructures of gold and silver with identical shape and orientation were fabricated in sizes from 60 nm to 400 nm. The opening angle of the nanocrescents can be tuned continuously. Owing to the uniform orientation, ensemble measurements can be transferred directly to the behavior of a single object, as shown by comparing the extinction spectra of an ensemble measurement in a UV/Vis/NIR spectrometer with a single-particle measurement in a confocal microscope. The optical response of the nanocrescents was calculated as a two-dimensional model with a finite element method. The result is several polarization-dependent resonances in the optical spectrum, which can be shifted by varying the opening angle and the size of the nanocrescent. Illumination excites plasmonic oscillations that generate a strongly localized near-field at the tips and in the opening of the nanocrescents. The near-field of the particle resonance was detected with a photoresist method. The investigations with the UV/Vis/NIR spectrometer show several polarization-dependent resonances in the spectral range from 300 nm to 3200 nm. The resonances of the nanocrescents can be shifted in the optical spectrum, by as much as the linewidth, via the opening angle and the diameter. Applied as chemo- or biosensors, gold nanocrescents show a sensitivity similar to comparable sensors based on thin metal structures. The near-field is strongly localized and penetrates, depending on the multipole order, between 14 nm and 70 nm into the surroundings. Quantum dots were coupled to the near-field of the nanocrescents.
The emission of the quantum dots at a wavelength of 860 nm is enhanced by the resonance of the nanocrescents. The nanocrescents were also used as optical tweezers: under excitation with a laser at a wavelength of 1064 nm, polystyrene colloids with a diameter of 40 nm were trapped by the resonant nanocrescents. The nanocrescents exhibit exceptional optical properties that can be tuned over a wide range via the geometry parameters. These first applications have demonstrated links to uses in sensing, fluorescence spectroscopy, and optical trapping.


The surprising discovery of the X(3872) resonance by the Belle experiment in 2003, and its subsequent confirmation by BaBar, CDF, and D0, opened up a new chapter of QCD studies and puzzles. Since then, detailed experimental and theoretical studies have been performed in an attempt to determine and explain the properties of this state. At the end of 2009, the world's largest and highest-energy particle accelerator, the Large Hadron Collider (LHC), started operations at the CERN laboratories in Geneva. One of the main experiments at the LHC is CMS (Compact Muon Solenoid), a general-purpose detector designed to address a wide range of physical phenomena, in particular the search for the Higgs boson, the only still-unconfirmed element of the Standard Model (SM) of particle interactions, and new physics beyond the SM itself. Even though CMS was designed to study high-energy events, its high-resolution central tracker and superior muon spectrometer make it an optimal tool to study the X(3872) state. This thesis presents the results of a series of studies of the X(3872) state performed with the CMS experiment. Already with the first year's worth of data, a clear peak for the X(3872) was identified, and the measurement of the cross-section ratio with respect to the Psi(2S) was performed. With the increased statistics collected during 2011, it was possible to study the cross-section ratio between the X(3872) and the Psi(2S) in bins of transverse momentum and to separate their prompt and non-prompt components.


Flood disasters are a major cause of fatalities and economic losses, and several studies indicate that global flood risk is currently increasing. In order to reduce and mitigate the impact of river flood disasters, the current trend is to integrate existing structural defences with non-structural measures. This calls for a wider application of advanced hydraulic models for flood hazard and risk mapping, engineering design, and flood forecasting systems. Within this framework, two different hydraulic models for large-scale analysis of flood events have been developed. The two models, named CA2D and IFD-GGA, adopt an integrated approach based on the diffusive shallow water equations and a simplified finite volume scheme. The models are also designed for massive code parallelization, which is of key importance in reducing run times in large-scale, high-detail applications. The two models were first applied to several numerical cases to test the reliability and accuracy of different model versions. Then, the most effective versions were applied to different real flood events and flood scenarios. The IFD-GGA model showed serious problems that prevented further applications. On the contrary, the CA2D model proved to be fast and robust, and able to reproduce 1D and 2D flow processes in terms of water depth and velocity. In most applications the accuracy of the model results was good and adequate for large-scale analysis. Where complex flow processes occurred, local errors were observed, due to the model approximations; however, they did not compromise the correct representation of the overall flow processes. In conclusion, the CA2D model can be a valuable tool for the simulation of a wide range of flood event types, including lowland and flash flood events.
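The diffusive (zero-inertia) shallow-water approach behind such models can be sketched in one dimension: inter-cell flow is driven by the water-surface slope through a Manning-type conveyance, and the finite-volume update exchanges volume between adjacent cells. The grid, roughness, and time step below are illustrative assumptions, not CA2D's actual scheme:

```python
# Minimal 1D sketch of a diffusive-wave, finite-volume update.
# Not the CA2D implementation; parameters are illustrative.

def diffusive_step(h, z, dx, dt, n=0.03):
    """One explicit update of water depths h over bed elevations z.
    Flux between cells follows the water-surface slope with a
    Manning-type unit discharge q = (1/n) * h^(5/3) * sqrt(|S|)."""
    h_new = h[:]
    for i in range(len(h) - 1):
        eta_l, eta_r = z[i] + h[i], z[i + 1] + h[i + 1]
        slope = (eta_l - eta_r) / dx
        if slope == 0:
            continue
        h_up = h[i] if slope > 0 else h[i + 1]  # upstream depth
        if h_up <= 0:
            continue
        q = (1.0 / n) * h_up ** (5.0 / 3.0) * abs(slope) ** 0.5
        dv = (q if slope > 0 else -q) * dt / dx  # depth exchanged
        h_new[i] -= dv
        h_new[i + 1] += dv
    return h_new

h1 = diffusive_step([1.0, 0.2, 0.2, 0.2], [0.0] * 4, dx=10.0, dt=0.1)
```

Because volume leaving one cell enters its neighbour, the scheme conserves mass exactly; stability of the explicit update still requires a suitably small dt.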


The verification of numerical models is indispensable for improving quantitative precipitation forecasting (QPF). The aim of this work is the development of new methods for verifying the precipitation forecasts of the regional model of MeteoSwiss (COSMO-aLMo) and of the global model of the European Centre for Medium-Range Weather Forecasts (ECMWF). For this purpose, a novel observational data set for Germany with hourly resolution was created and applied. For the evaluation of the model forecasts, the new quality measure "SAL" was developed. The novel, temporally and spatially high-resolution observational data set for Germany is created with the disaggregation method developed during MAP (Mesoscale Alpine Programme). The idea is to combine the high temporal resolution of radar data (hourly) with the accuracy of precipitation amounts from station measurements (within measurement error). This disaggregated data set offers new possibilities for the quantitative verification of precipitation forecasts. For the first time, an area-wide analysis of the diurnal cycle of precipitation was carried out. It showed that no diurnal cycle exists in winter, and this is well reproduced by COSMO-aLMo. In summer, by contrast, both the disaggregated data set and COSMO-aLMo show a clear diurnal cycle, but the precipitation maximum in COSMO-aLMo sets in too early, between 11-14 UTC compared with 15-20 UTC in the observations, and is clearly overestimated, by a factor of about 1.5. A new quality measure was developed because conventional grid-point-based error measures no longer do justice to model development. SAL consists of three independent components and is based on the identification of precipitation objects (threshold-dependent) within a region (e.g. a river catchment).
Differences between the precipitation fields of model and observations are computed with respect to structure (S), amplitude (A), and location (L) within the region. SAL was tested extensively on idealized and real examples. SAL detects and confirms known model deficits such as the diurnal-cycle problem and the simulation of too many relatively weak precipitation events. It provides additional insight into the characteristics of the errors, e.g. whether they are mainly errors in amplitude, in the displacement of a precipitation field, or in structure (e.g. stratiform versus small-scale convective). With SAL, daily and hourly sums of COSMO-aLMo and of the ECMWF model were verified. In a statistical sense, SAL shows good forecast quality of COSMO-aLMo especially for stronger (and hence societally relevant) precipitation events, compared with weak precipitation. Comparing the two models, it was shown that the global model forecasts more widespread precipitation and hence larger objects, while COSMO-aLMo shows clearly more realistic precipitation structures. Given the resolutions of the two models this is not surprising, but it could not be demonstrated with conventional error measures. The methods developed in this work are very useful for the verification of QPF from temporally and spatially high-resolution models. The use of the disaggregated observational data set and of SAL as a quality measure provides new insights into QPF and permits more appropriate statements about the quality of precipitation forecasts. Future applications of SAL include the verification of the new generation of numerical weather prediction models, which explicitly simulate the life cycle of deep convective cells.
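Of the three components, the amplitude term is the simplest to state: it compares the domain-averaged precipitation of model and observation, normalized so that it lies in [-2, 2]. A sketch with toy numbers (the S and L components, which require object identification, are omitted):

```python
# Amplitude component of SAL: normalized difference of domain-averaged
# precipitation, A in [-2, 2]. Input numbers below are toy values.

def amplitude(model_mean, obs_mean):
    """A = (D_mod - D_obs) / (0.5 * (D_mod + D_obs)); A > 0 means the
    model overestimates total precipitation in the region."""
    return (model_mean - obs_mean) / (0.5 * (model_mean + obs_mean))

print(amplitude(3.0, 2.0))  # 0.4: model overestimates
print(amplitude(2.0, 2.0))  # 0.0: perfect amplitude
```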

Resumo:

This dissertation addresses the staminal lever mechanism of the genus Salvia. Various hypotheses about its purpose and function are tested and elucidated. The first hypothesis maintains that the lever is a mechanical selection mechanism that excludes weak pollinators from the flower. This hypothesis is refuted, and the respective results of force measurements and morphological investigations are presented, statistically evaluated and discussed. The force measurements and morphological investigations were conducted on the staminal levers and flowers of eight bee-pollinated (melittophilous) and six bird-pollinated (ornithophilous) species. For comparison, a ninth melittophilous species that lacks the staminal lever was investigated; in this species, the force measurements were conducted on floral structures suspected to hinder a flower visitor. The hypotheses that the staminal lever serves as a tool for pollen portioning and that, through repeatable, accurate and species-specific pollen placement on a wide range of diverse pollinators, it reduces the risk of pollen loss as well as of hybridisation are confirmed. Investigations of pollen portioning were carried out on 13 sages. The lever mechanism can be released several times in a row, while the pollen sacs leave a dosed pollen portion on a well-defined spot on the pollinator's body. Pollen placement was investigated for 12 sages. In sympatric sages, lever length and the area of pollen placement are of particular interest: a shared pollinator bears species-specific areas of pollen placement for different sages, and this accurate pollen placement ensures efficient pollination. However, the question of the functionality of the lever mechanism cannot be answered with absolute certainty. The lever's backswing is not caused by the adaxial lever arm; the adaxial lever arm is too light and too short to be an adequate counterweight to the abaxial lever arm.
Therefore, the adaxial lever arm cannot pull the abaxial lever arm back to its neutral position. There are, however, indications of a cellular mainspring in the filament. According to the current state of knowledge, this is the most plausible explanation for the lever's backswing, but further histological investigations of the joint of the lever mechanism are necessary to confirm this assumption.

Resumo:

In this thesis, a new ice nucleus counter, FINCH (Fast Ice Nucleus CHamber), was developed, and first measurements of various test aerosols in the laboratory and of atmospheric aerosol were carried out. At temperatures below the freezing point and at supersaturations with respect to ice, the aerosol particles, i.e. the ice nuclei (IN), are grown into ice crystals so that they can be detected optically. In FINCH this is realized by the mixing principle, which guarantees a continuous measurement of the IN number concentration and permits very high sample flow rates of up to 10 l/min. Saturation ratios with respect to ice can likewise be scanned quickly over a wide range of 0.9-1.7 at constant temperatures down to −23 °C. The ice crystals are detected, and the IN number concentration thereby determined, with a newly developed optical sensor based on the different depolarization of backscattered light by ice crystals and supercooled droplets. In laboratory measurements, the activation temperature and activation saturation ratio of silver iodide (AgI) and of kaolinite were determined. The results agreed well with values from the literature and with parallel measurements using FRIDGE (FRankfurt Ice Deposition freezinG Experiment), a static diffusion chamber for activating and counting ice nuclei collected on a filter. In atmospheric measurements at the Jungfraujoch (Switzerland), the IN number concentrations of up to 4 l⁻¹ were within the range of values known from the literature. Measurements of the ice-crystal residues of mixed-phase clouds showed, however, that only one in a thousand is active as an ice nucleus in the deposition mode; here, other freezing processes and secondary ice formation appear to be of great importance for the number concentration of ice-crystal residues.
A further measurement of atmospheric aerosol in Frankfurt showed IN number concentrations of up to 30 l⁻¹ at activation temperatures around −14 °C. Parallel sampling on silicon wafers for the IN number concentration measurements in FRIDGE yielded values in the same concentration range.
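The detection principle described above, discriminating aspherical ice crystals from near-spherical supercooled droplets by how strongly they depolarize backscattered light, can be sketched as a simple threshold classifier. This is a minimal illustration, not FINCH's actual signal processing: the function names and the threshold value are assumptions for the example.

```python
import numpy as np

def classify_particles(depol_ratio, threshold=0.1):
    """Classify particles as ice crystals or supercooled droplets
    from their backscatter depolarization ratio.

    Aspherical ice crystals depolarize backscattered light strongly,
    while near-spherical droplets barely do, so a single threshold
    separates the two populations. The threshold here is illustrative,
    not a calibrated instrument constant.
    """
    depol_ratio = np.asarray(depol_ratio, dtype=float)
    return np.where(depol_ratio > threshold, "ice", "droplet")

def in_number_concentration(labels, sample_volume_l):
    """IN number concentration (per litre) from the ice-crystal counts."""
    return np.count_nonzero(labels == "ice") / sample_volume_l
```

With such a scheme, counting the "ice" detections over a known sampled volume directly yields the IN number concentration reported in units of l⁻¹, as quoted in the abstract.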