903 results for Model Driven Software Development, Arduino, Meta-Modeling, Domain Specific Languages, Software Factory
Abstract:
Chemicals binding to membrane receptors may induce events within the cell that change its behavior. Since these events are simultaneous and hard for students to understand, we developed a computational model to explore the cAMP signaling system dynamically and visually and thereby facilitate its understanding. The animation is shown in parts, from the hormone-receptor binding to the cellular response. A set of questions is to be answered after using the model. The software was field-tested, and the students answered an evaluation questionnaire (concerning usability, animations, models, and the software as an educational tool), which showed the software to be a valuable aid for content comprehension.
Abstract:
Software faults are expensive and cause serious damage, particularly if discovered late or not at all. Some software faults tend to stay hidden. One goal of the thesis is to establish the status quo in the field of software fault elimination, since there are no recent surveys of the whole area. A structural framework is proposed for this unstructured field, with attention paid to compatibility between studies and to how studies can be found. Fault elimination means are surveyed, including bug know-how, defect prevention and prediction, analysis, testing, and fault tolerance. The most common research issues in each area are identified and discussed, along with issues that do not receive enough attention. Recommendations are presented for software developers, researchers, and teachers. Only the main lines of research are traced, and the main emphasis is on technical aspects. The survey was done by performing searches in the IEEE, ACM, Elsevier, and Inspec databases. In addition, a systematic search was done in a few well-known related journals over recent time intervals. Some other journals, some conference proceedings, and a few books, reports, and Internet articles were investigated as well. The following problems were found and their solutions discussed. A common misunderstanding is that quality assurance consists of testing only, and many checks are done and some methods applied only in the late testing phase. Many types of static review are almost forgotten even though they reveal faults that are hard to detect by other means. Other forgotten areas are knowledge of bugs, awareness of continuously repeated bugs, and lightweight means of increasing reliability. Compatibility between studies is not always good, which also makes documents harder to understand. Some means, methods, and problems are considered method- or domain-specific when they are not. The field lacks cross-field research.
Abstract:
The purpose of this thesis is to develop an environment, or network, that enables effective collaborative product structure management among stakeholders in each unit, throughout the entire product lifecycle and product data management. The thesis uses framework models as the approach to the problem. Framework model methods for the development of collaborative product structure management are proposed, with three distinct models depicted to support it: an organization model, a process model, and a product model. In the organization model, the formation of the key user network for the product data management system (eDSTAT) is specified. In the process model, development is based on the case company's product development matrix. In the product model framework, product model management, product knowledge management, and design knowledge management are defined as development tools, and collaboration is based on web-based product structure management. Collaborative management is executed using all these approaches. A case study from an actual project at the case company is presented as an implementation to verify the models' applicability. A computer-aided design tool and the web-based product structure manager were used as the tools of this collaboration, with the support of the key user. The current PDM system, eDSTAT, is used as the piloting case for the key user role. The result of this development is that the role of the key user as a collaboration channel is defined and established. The key user is able to provide one-on-one support for the elevator projects. The management activities are also improved through the application of a process workflow that follows criteria for each project milestone. The development shows the effectiveness of product structure management across the product lifecycle and an improved production process achieved by eliminating barriers (e.g., by improving two-way communication) during the design and production phases. The key user role is applicable on a global scale in the company.
Abstract:
This work evaluated eight hypsometric models for representing the tree height-diameter relationship, using data obtained from the scaling of 118 trees and 25 inventory plots. Graphical residual analysis, the mean percentage deviation criterion, chi-square test precision, the residual standard error between real and estimated heights, and the Graybill F test were adopted. The identity of the hypsometric models was also verified by applying the F(Ho) test to the plot data grouped with the scaling data. It was concluded that better accuracy can be obtained by using the Prodan model, with h and d1.3 measured in 10 trees per plot and grouped with the scaling data, in even-aged forest stands.
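As a purely illustrative sketch of fitting a Prodan-type hypsometric model (h = 1.3 + d² / (b0 + b1·d + b2·d²)) in Python, with hypothetical diameter-height pairs rather than the study's data:

```python
# Minimal sketch: fitting a Prodan-type height-diameter model
#   h = 1.3 + d^2 / (b0 + b1*d + b2*d^2)
# The data values below are hypothetical placeholders, not the thesis data.
import numpy as np
from scipy.optimize import curve_fit

def prodan(d, b0, b1, b2):
    """Prodan hypsometric model: total height (m) from DBH d (cm)."""
    return 1.3 + d**2 / (b0 + b1 * d + b2 * d**2)

# Hypothetical DBH (cm) and total height (m) pairs from scaled trees
d = np.array([8.0, 12.5, 15.0, 18.2, 22.4, 27.1, 31.6])
h = np.array([9.5, 13.2, 15.1, 17.0, 19.4, 21.8, 23.5])

params, _ = curve_fit(prodan, d, h, p0=[1.0, 0.5, 0.05])
h_hat = prodan(d, *params)

# Goodness-of-fit summaries similar in spirit to those used in the study
residuals = h - h_hat
syx = np.sqrt(np.sum(residuals**2) / (len(h) - len(params)))  # residual standard error
print("coefficients:", params)
print("residual standard error (m):", round(syx, 3))
```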
Abstract:
This study focuses on work commitment creation on the rhetorical level, that is to say, the rhetorical and linguistic means that are used to construct or elicit worker commitment. The commitment of the worker is one of the most important objectives of all business communication. There is a strong demand for commitment, identification, or adherence to work in various walks of life, although the actual circumstances are often insecure and short-term. The analysis demonstrates that the actual object of commitment may vary from work itself or the work organization to one's career or professional development. The ideal pattern for commitment appears comprehensive: it contains affective and rational as well as ideological dimensions. This thesis is a rhetorical discourse analysis, or rhetorical analysis with discourse-analytic influences. Primarily it is a rhetorical analysis in which discourses are observed mainly as tools of a rhetorician. The study also draws on various findings of the sociology of work and organizational studies. The research material consists of magazines from three and web pages from six different companies. This study explores repeated discourses in commitment rhetoric, mainly by pointing out core concepts and recurrent patterns of argumentation. In this analysis section, a semantic and concept-analytic approach is also employed. Companies talk about ideas, values, feelings and attitudes, thus constructing a united and unanimous group and an ideal model of commitment. Probably the most important domain of commitment rhetoric is the construction of group and community. Collective identity is constructed through shared meanings, values and goals, and these rhetorical group constructs can be used and modified in various ways. Every now and then business communication also focuses on the individual, employing different speakers, positions and the discourses associated with them. Constructing and using these positions also paints the picture of an ideal worker and an ideal work orientation. For example, the so-called entrepreneurship model is frequently used here. Commitment talk and the rhetorical situation it constructs are full of tensions and contradictions; the presence of seemingly contradictory values, goals or identities is constant. This study demonstrates tensions such as self-fulfilment and individuality versus conformity, and constant change and development versus dependable establishment, and analyses how they are used, processed and dealt with. An important dimension in commitment rhetoric is the way companies define themselves in respect of current social issues, how they define themselves as responsible social actors, and how they, in this sense, seek to appear as attractive workplaces. This point of view gives rise to problematic questions as companies process the tensions between, for example, rhetoric and action, ethical ideals and business conditions, and so on. For its part, commitment talk also defines the meaning of waged work in human life. A changing society, changing working life, and changing business environments set new claims and standards for workers and the contents of work. From this point of view, this research contributes to the study of working life and takes part in the current public discussion concerning the meaning, role and future of waged work.
Abstract:
The thesis examines the suitability of model-based design, and of program code generated from a simulation model, for use in product development. These working methods are studied in order to determine whether the proposed practices improve the software development of photovoltaic inverters. The thesis goes through the phases of model-based design, their content and their purpose. A simulation model of a photovoltaic system is built, from which the program code of a maximum power point tracker is generated; its operation is tested in a simulator of the photovoltaic inverter's control platform. Model-based design makes it possible to speed up software product development by using the same system in several work phases. Generating program code from a simulation model is feasible and useful if the company uses simulation-based testing for the design and verification of the regulation and control system.
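The generated code itself is not reproduced in the abstract; the following is a hedged, hypothetical Python sketch of a perturb-and-observe maximum power point tracking step, only to illustrate the kind of control routine such a simulation model might produce (all identifiers and the step size are assumptions, not the thesis implementation):

```python
# Illustrative perturb-and-observe MPPT step (not the code generated in the thesis).
# All identifiers and the step size are hypothetical.
def mppt_po_step(v_meas, i_meas, state, v_step=0.5):
    """One P&O iteration: returns the next voltage reference and the updated state."""
    p_meas = v_meas * i_meas
    dp = p_meas - state["p_prev"]
    dv = v_meas - state["v_prev"]

    # If power increased, keep perturbing in the same direction; otherwise reverse.
    if dp == 0:
        direction = 0
    elif (dp > 0) == (dv > 0):
        direction = 1
    else:
        direction = -1

    v_ref = v_meas + direction * v_step
    state["p_prev"], state["v_prev"] = p_meas, v_meas
    return v_ref, state

# Usage: call once per control period with the measured PV voltage and current.
state = {"p_prev": 0.0, "v_prev": 0.0}
v_ref, state = mppt_po_step(v_meas=320.0, i_meas=8.2, state=state)
```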
Abstract:
This thesis presents an approach for formulating and validating a space-averaged drag model for coarse mesh simulations of gas-solid flows in fluidized beds using the two-fluid model. Proper modeling of the fluid dynamics is central to understanding any industrial multiphase flow. The gas-solid flows in fluidized beds are heterogeneous and usually simulated with the Eulerian description of phases. Such a description requires fine meshes and small time steps for proper prediction of the hydrodynamics. These constraints on the mesh and time step size result in a large number of control volumes and long computational times, which are unaffordable for simulations of large scale fluidized beds. If proper closure models are not included, coarse mesh simulations of fluidized beds do not give reasonable results. A coarse mesh simulation fails to resolve the mesoscale structures and produces uniform solids concentration profiles. For a circulating fluidized bed riser, such predicted profiles result in a higher drag force between the gas and solid phases and an overestimated solids mass flux at the outlet. Thus, there is a need to formulate closure correlations that can accurately predict the hydrodynamics using coarse meshes. This thesis uses the space-averaging modeling approach in the formulation of closure models for coarse mesh simulations of the gas-solid flow in fluidized beds with Geldart group B particles. In formulating the closure correlation for the space-averaged drag model, the main modeling parameters were found to be the averaging size, the solid volume fraction, and the distance from the wall. The closure model for the gas-solid drag force was formulated and validated for coarse mesh simulations of the riser, which verified this modeling approach. Coarse mesh simulations using the corrected drag model resulted in lower values of solids mass flux. Such an approach is a promising tool in the formulation of appropriate closure models for coarse mesh simulations of large scale fluidized beds.
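The thesis derives its own closure correlation; purely as an illustration of the general idea of a filtered drag closure, a hypothetical Python sketch might multiply a microscopic drag law (Wen-Yu here) by a heterogeneity correction that depends on solids fraction and filter size (the correction form and constants below are assumptions, not the correlation developed in the work):

```python
# Illustrative sketch of a filtered (space-averaged) drag closure:
# a microscopic Wen & Yu drag coefficient multiplied by a heterogeneity
# correction that depends on solids fraction and filter (mesh) size.
# The correction function and its constants are hypothetical.
import numpy as np

def wen_yu_beta(alpha_s, u_slip, d_p, rho_g=1.2, mu_g=1.8e-5):
    """Microscopic Wen & Yu gas-solid drag coefficient [kg/(m^3 s)]."""
    alpha_g = 1.0 - alpha_s
    re = rho_g * alpha_g * abs(u_slip) * d_p / mu_g
    cd = 0.44 if re >= 1000 else 24.0 / max(re, 1e-12) * (1.0 + 0.15 * re**0.687)
    return 0.75 * cd * alpha_s * alpha_g * rho_g * abs(u_slip) / d_p * alpha_g**-2.65

def heterogeneity_correction(alpha_s, filter_size, d_p):
    """Hypothetical correction H <= 1: coarser filters resolve fewer clusters,
    so the effective (filtered) drag is reduced more."""
    size_ratio = filter_size / d_p
    return 1.0 / (1.0 + 0.1 * size_ratio * alpha_s * (1.0 - alpha_s))

def filtered_beta(alpha_s, u_slip, d_p, filter_size):
    return wen_yu_beta(alpha_s, u_slip, d_p) * heterogeneity_correction(alpha_s, filter_size, d_p)

# Example: Geldart group B particles (d_p ~ 300 µm) on a coarse 2 cm mesh
print(filtered_beta(alpha_s=0.05, u_slip=1.5, d_p=300e-6, filter_size=0.02))
```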
Abstract:
Technological developments in microprocessors and the ICT landscape have brought a shift to a new era in which computing power is embedded in numerous small distributed objects and devices in our everyday lives. These small computing devices are fine-tuned to perform a particular task and are increasingly reaching our society at every level. For example, home appliances such as programmable washing machines and microwave ovens employ several sensors to improve performance and convenience. Similarly, cars have on-board computers that use information from many different sensors to control things such as fuel injectors and spark plugs so as to perform their tasks efficiently. These individual devices make life easier by helping in decision-making and removing the burden from their users. All these objects and devices obtain some piece of information about the physical environment, yet each of them is an island with no proper connectivity or information sharing with the others. Sharing information between these heterogeneous devices could enable a whole new universe of innovative and intelligent applications. Information sharing between the devices is a difficult task due to the heterogeneity and interoperability of the devices. The Smart Space vision is to overcome these issues of heterogeneity and interoperability so that the devices can understand each other and utilize each other's services through information sharing. This enables innovative local mashup applications based on data shared between heterogeneous devices. Smart homes are one example of Smart Spaces; they make it possible to bring the health care system to the patient, through intelligent interconnection of resources and their collective behavior, as opposed to bringing the patient into the health system. In addition, the use of mobile handheld devices has risen at a tremendous rate during the last few years, and they have become an essential part of everyday life. Mobile phones offer a wide range of services to their users, including text and multimedia messages, Internet, audio, video, email applications and, most recently, TV services. Interactive TV provides a variety of applications for viewers. The combination of interactive TV and Smart Spaces could yield innovative applications that are personalized, context-aware, ubiquitous and intelligent, by enabling heterogeneous systems to collaborate with each other by sharing information. There are many challenges in designing the frameworks and application development tools for the rapid and easy development of such applications. The research work presented in this thesis addresses these issues. The original publications presented in the second part of this thesis propose architectures and methodologies for interactive and context-aware applications, and tools for the development of these applications. We demonstrated the suitability of our ontology-driven application development tools and rule-based approach for the development of dynamic, context-aware, ubiquitous iTV applications.
Abstract:
This thesis reports investigations on applying the Service Oriented Architecture (SOA) approach to the engineering of multi-platform and multi-device user interfaces. The study has three goals: (1) analyze the present frameworks for developing multi-platform and multi-device applications, (2) extend the principles of SOA to implement a multi-platform and multi-device architectural framework (SOA-MDUI), and (3) apply and validate the proposed framework in the context of a specific application. One of the problems addressed in this ongoing research is the large number of combinations of possible implementations of applications on different types of devices. Usually it is necessary to take into account the operating system (OS), the user interface (UI) including its appearance, the programming language (PL) and the architectural style (AS). Our proposed approach extends the principles of SOA using pattern-oriented design and model-driven engineering approaches. Synthesizing the work done in these domains, this research built and tested an engineering framework linking Model-driven Architecture (MDA) and SOA approaches to the development of UIs. This study advances the general understanding of engineering, deploying and managing multi-platform and multi-device user interfaces as a service.
Abstract:
Changes in vascular endothelial growth factor (VEGF) in pulmonary vessels have been described in congenital diaphragmatic hernia (CDH) and may contribute to the development of pulmonary hypoplasia and hypertension; however, how the expression of VEGF receptors changes during fetal lung development in CDH is not understood. The aim of this study was to compare morphological evolution with expression of VEGF receptors, VEGFR1 (Flt-1) and VEGFR2 (Flk-1), in pseudoglandular, canalicular, and saccular stages of lung development in normal rat fetuses and in fetuses with CDH. Pregnant rats were divided into four groups (n=20 fetuses each) of four different gestational days (GD) 18.5, 19.5, 20.5, 21.5: external control (EC), exposed to olive oil (OO), exposed to 100 mg nitrofen, by gavage, without CDH (N-), and exposed to nitrofen with CDH (CDH) on GD 9.5 (term=22 days). The morphological variables studied were: body weight (BW), total lung weight (TLW), left lung weight, TLW/BW ratio, total lung volume, and left lung volume. The histometric variables studied were: left lung parenchymal area density and left lung parenchymal volume. VEGFR1 and VEGFR2 expression were determined by Western blotting. The data were analyzed using analysis of variance with the Tukey-Kramer post hoc test. CDH frequency was 37% (80/216). All the morphological and histometric variables were reduced in the N- and CDH groups compared with the controls, and reductions were more pronounced in the CDH group (P<0.05) and more evident on GD 20.5 and GD 21.5. Similar results were observed for VEGFR1 and VEGFR2 expression. We conclude that N- and CDH fetuses showed primary pulmonary hypoplasia, with a decrease in VEGFR1 and VEGFR2 expression.
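As a hedged illustration of the statistical comparison described (one-way ANOVA followed by a Tukey post hoc test across the four exposure groups), using hypothetical placeholder values rather than the study's measurements:

```python
# Illustrative one-way ANOVA with a Tukey HSD post hoc test across the four
# groups (EC, OO, N-, CDH). Values are hypothetical placeholders, not study data.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

ec  = np.array([2.10, 2.25, 1.98, 2.15, 2.05])   # e.g. total lung weight (g), hypothetical
oo  = np.array([2.05, 2.20, 2.00, 2.12, 2.08])
n_  = np.array([1.60, 1.55, 1.70, 1.65, 1.58])
cdh = np.array([1.20, 1.15, 1.30, 1.25, 1.18])

print(f_oneway(ec, oo, n_, cdh))  # overall group effect

values = np.concatenate([ec, oo, n_, cdh])
groups = np.array(["EC"] * 5 + ["OO"] * 5 + ["N-"] * 5 + ["CDH"] * 5)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # pairwise comparisons
```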
Abstract:
The study develops an approach for validating software functionality against work system needs in SMEs. The approach is constructed using a SaaS-based software, a work collaboration service (WCS), and SMEs as the elements of study, where the WCS's functionality is qualified against the collaboration needs that exist in operational and project work within SMEs. A constructivist approach and the case study method were selected because the nature of the study requires an in-depth study of the work collaboration service as well as a detailed study of the work systems within different enterprises. Four companies were selected, in which fourteen interviews were conducted to gather the relevant data. The work systems method and framework are used as the central part of the approach to collect, analyze and interpret the enterprises' work system models and the underlying collaboration needs in operational and project work. The functional model of the WCS and its functionality, on the other hand, are determined from functional model analysis, software testing, documentation and meetings with the service vendor. The enterprise work system model and the WCS model are compared to reveal how work progression differs between the two and to make visible the unaddressed stages of work progression. The WCS functionality is compared with the work systems' collaboration needs to ascertain whether the service will satisfy the needs of the project and operational work under study. The unaddressed needs provide opportunities to improve the functionality of the service for better conformity to the needs of the enterprise and the work. The results revealed that the functional models did differ in how operational and project work progressed within the stages. The WCS shared similar stages of work progression, apart from the stages of identification and acceptance, while the progress and completion stages were only partially addressed. The conclusion is that the identified unaddressed needs, such as a single point of reference and SLA and OLA inclusion, should be implemented or improved within the WCS at the appropriate stages of work to achieve better compliance of the service with the needs of the enterprise and the work itself. The developed approach can hence be used to carry out similar analyses of the conformance of pre-built software functionality to work system needs within SMEs.
Abstract:
A transcription factor cascade composed of SIM1, ARNT2, OTP, BRN2 and SIM2 is required for the differentiation of the five cell types that populate the paraventricular nucleus (PVN) of the hypothalamus, a critical regulator of several physiological processes essential for survival. Moreover, Sim1 haploinsufficiency is also a cause of isolated hyperphagia in mice and in humans. We aim to dissect the developmental program of the PVN through an integrative approach, in order to identify new genes with the potential to regulate homeostasis in the adult. First, we used an approach that includes transcriptome analysis of the PVN at different stages of mouse development to identify such genes. We compared the transcriptomes of the anterior hypothalamus of Sim1+/+ and Sim1-/- littermate mouse embryos at E12.5. In this way, we identified 56 genes acting downstream of Sim1, including 5 transcription factors - Irx3, Sax1, Rxrg, Ror and Neurod6. We also proposed a two-layer model of anterior hypothalamus development. According to this model, genes occupying a medial domain in the mantle zone mark cells that will populate the PVN, whereas genes with lateral expression identify cells that will later give rise to the ventrolateral structures of the hypothalamus. We also showed that Sim1, like Otp, is involved in the differentiation, migration and proliferation of the neurons that populate the PVN. In addition, we isolated the PVN and the mediobasal hypothalamus from wild-type mice at E14.5 by laser-capture microdissection in order to compare their transcriptomes. This allowed us to identify 34 transcription factors specific to the PVN and 76 specific to the mediobasal hypothalamus. These genes represent potential regulators of hypothalamic development. Second, we identified 3 sequence blocks within the 5' region of Otp that are conserved in human, mouse and fish. We built a transgene composed of a 7 kb fragment containing these sequence blocks and a reporter gene. Analysis of 4 mouse lines showed that this transgene is expressed exclusively in the developing PVN. We generated a second transgene in which the 7 kb fragment is inserted upstream of the Brn2 or Sim1 cDNA and of Gfp. We obtained four mouse lines in which the expression profile of Brn2 and Gfp reproduces that of Otp. We will study PVN development and food intake in these mice. In parallel, we are crossing these lines with Sim1-deficient mice to determine whether Brn2 expression allows the development of PVN cells in the absence of Sim1. In summary, we have generated the first transgene that is expressed specifically in the PVN. This transgene is a critical tool for dissecting the developmental program of the hypothalamus. Third, we characterized the development of the anterior hypothalamus in the chick embryo, which is an attractive model for loss- and gain-of-function studies during the development of this structure.
It should be emphasized that the two-layer model of anterior hypothalamus development appears to be conserved in the chick embryo, where genes can likewise be classified according to their medio-lateral expression profile and the fate of the regions they define. Finally, we believe that this integrative approach will allow us to identify and characterize regulators of PVN development that may potentially be associated with adult pathologies such as obesity or hypertension.
Evaluation of the clinical and economic impact of developing a treatment for schizophrenia
Abstract:
Background: Pharmacological strategies to treat schizophrenia are receiving growing attention due to the development of new pharmacotherapies that are more effective and better tolerated but more expensive. Schizophrenia is a chronic disease with different specific states defined by their severity. Objectives: This research program aims to: 1) Evaluate the factors associated with the risk of being in a specific state of schizophrenia, in order to build the risk functions for modeling the natural course of schizophrenia; 2) Develop and validate a Markov model with Monte Carlo microsimulations to simulate the natural course of patients newly diagnosed with schizophrenia, as a function of their individual risk-factor profile; 3) Estimate the direct cost of schizophrenia (health-care and non-health-care) from the government perspective and simulate the clinical and economic impact of the development of a treatment in a cohort of patients newly diagnosed with schizophrenia, followed for the first five years post-diagnosis. Methods: For the first objective of this research program, a total of 14,320 patients newly diagnosed with schizophrenia were identified in the RAMQ and Med-Echo databases. Six specific states of schizophrenia were defined: first episode (FE), low dependency state (LDS), high dependency state (HDS), stable state (Stable), well state (Well) and death (Death). To evaluate the factors associated with the risk of being in each specific state of schizophrenia, we built 4 risk functions based on Cox proportional hazards analysis for competing risks. For the second objective, we developed and validated a Markov model with Monte Carlo microsimulations integrating the six specific states of schizophrenia. In the model, each subject had their own transition probabilities between the specific states of schizophrenia. These probabilities were estimated using the cumulative incidence function method. For the third objective, we used the previously developed Markov model. This model includes the direct health-care costs, estimated using the Régie de l'assurance maladie du Québec and Med-Echo databases, and the direct non-health-care costs, estimated from Statistics Canada surveys and publications. Results: A total of 14,320 individuals newly diagnosed with schizophrenia were identified in the study cohort. Mean follow-up was 4.4 (± 2.6) years. Factors associated with the course of schizophrenia included age, sex, treatment for schizophrenia and comorbidities. After five years, our results show that 41% of patients will be considered recovered, 13% will be in a stable state and 3.4% will have died. During the first 5 years after the diagnosis of schizophrenia, the mean direct health-care and non-health-care cost was estimated at CAN$36,701 (95% CI: 36,264-37,138). Health-care costs accounted for 56.2% of the direct cost, social assistance for 34.6% and institutionalization in long-term care facilities for 9.2%.
If a new treatment were available and offered a 20% increase in therapeutic effectiveness, the direct health-care and non-health-care cost could be reduced by up to 14.2%. Conclusion: We identified factors associated with the course of schizophrenia. The Markov model we developed is the first Canadian model integrating transition probabilities adjusted for individual risk-factor profiles, using real-world data. The model shows good internal and external validity. Our results indicate that a new treatment could potentially reduce hospitalizations and the cost associated with long-term care facilities, increase patients' chances of returning to the labour market, and thereby contribute to reducing the cost of social assistance.
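As a hedged illustration of a Markov microsimulation over the six states named above, a minimal Python sketch could look like the following; the transition probabilities are placeholders, not the covariate-adjusted, cumulative-incidence estimates derived from the RAMQ/Med-Echo data:

```python
# Illustrative Markov microsimulation over the six schizophrenia states.
# Transition probabilities are placeholders; in the actual model each subject
# has individual, covariate-adjusted probabilities.
import random

STATES = ["FE", "LDS", "HDS", "Stable", "Well", "Death"]

# Placeholder annual transition probabilities (each row sums to 1)
P = {
    "FE":     {"FE": 0.20, "LDS": 0.25, "HDS": 0.15, "Stable": 0.20, "Well": 0.18, "Death": 0.02},
    "LDS":    {"FE": 0.00, "LDS": 0.40, "HDS": 0.15, "Stable": 0.25, "Well": 0.18, "Death": 0.02},
    "HDS":    {"FE": 0.00, "LDS": 0.20, "HDS": 0.45, "Stable": 0.20, "Well": 0.10, "Death": 0.05},
    "Stable": {"FE": 0.00, "LDS": 0.10, "HDS": 0.05, "Stable": 0.55, "Well": 0.28, "Death": 0.02},
    "Well":   {"FE": 0.00, "LDS": 0.05, "HDS": 0.02, "Stable": 0.15, "Well": 0.77, "Death": 0.01},
    "Death":  {"FE": 0.00, "LDS": 0.00, "HDS": 0.00, "Stable": 0.00, "Well": 0.00, "Death": 1.00},
}

def simulate_subject(cycles, rng):
    """Simulate one subject for a number of annual cycles, starting at first episode."""
    state, path = "FE", ["FE"]
    for _ in range(cycles):
        probs = P[state]
        state = rng.choices(STATES, weights=[probs[s] for s in STATES])[0]
        path.append(state)
    return path

# Simulate a small cohort and report the state distribution after 5 years
rng = random.Random(42)
cohort = [simulate_subject(5, rng) for _ in range(10000)]
final = [path[-1] for path in cohort]
for s in STATES:
    print(s, round(final.count(s) / len(final), 3))
```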
Abstract:
Digital stochastic magnetic-field sensor array. Stefan Rohrer. Within a multi-year research project funded by the German Research Foundation (DFG), digital magnetic-field sensors with a width of down to 1 µm were developed at the Institute of Microelectronics (IPM) of the University of Kassel. This dissertation presents a magnetic-field sensor array that emerged from this research project and was specifically designed to detect digital magnetic fields quickly and on a minimal area with good spatial and temporal resolution. The test chip, still fabricated in a 1.0 µm CMOS process, operates up to a clock frequency of 27 MHz at a sensor pitch of 6.75 µm. It is therefore currently the smallest and fastest digital magnetic-field sensor array in a standard CMOS process. Converted to a 0.09 µm technology, frequencies up to 1 GHz can be reached at a sensor pitch below 1 µm. The dissertation describes the most important results of the project in detail. The sensor is based on a cross-coupled inverter arrangement. The magnetic-field-sensitive element is a double-drain MAGFET based on the Hall effect, which influences the behavior of the latch. The strength and polarity of the magnetic field can be determined from the digital output data. The overall arrangement forms a stochastic magnetic-field sensor. The thesis presents a model for the flipping behavior of the cross-coupled inverters. The noise contributions of the sensor are analyzed and modeled in a system of stochastic differential equations. The solution of the stochastic differential equation shows the evolution of the probability distribution of the output signal over time and which factors influence the error probability of the sensor. It indicates which design and layout parameters of a stochastic sensor lead to an optimal result. The circuits and layout components of a digital stochastic sensor based on these theoretical calculations are presented in the thesis. Because of the process tolerances inherent in the technology, each detector requires its own compensating calibration. Different implementations of this calibration are presented and evaluated. For more accurate modeling, a SPICE model is set up, and from it a stochastic differential equation with SPICE-determined coefficients is derived for the flipping behavior of the sensor. Compared with standard magnetic-field sensors, the stochastic digital readout offers the advantage of flexible measurement. One can choose between fast measurements with reduced accuracy and high local resolution, or high accuracy for the evaluation of slowly varying magnetic fields in the range below 1 mT. The thesis presents the measurement results of the test chip. The measured sensitivity and error probability as well as the optimal operating points and the characteristic curves are shown. The relative sensitivity of the MAGFETs is 0.0075/T. The achievable error probabilities are listed in the thesis. The measured flipping behavior of the stochastic sensors shows good agreement with the theoretical model.
Various measurements of analog and digital magnetic fields confirm the applicability of the sensor for fast magnetic-field measurements up to 27 MHz, even for small magnetic fields below 1 mT. Measurements of the sensor characteristics as a function of temperature show that the sensitivity increases significantly at very low temperatures due to the decrease in noise. A summary and a comprehensive bibliography give an overview of the state of the art.
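As a hedged illustration of the stochastic modeling idea (not the SPICE-derived equations of the thesis), a minimal Euler-Maruyama sketch of a latch whose flipping decision is biased by a magnetic-field term might look like this; all coefficient values are hypothetical:

```python
# Illustrative Euler-Maruyama sketch of a stochastic latch decision:
# a cross-coupled inverter pair modeled as dx = (a*x + k*B) dt + sigma dW,
# where B is the field-induced imbalance injected by the MAGFET.
# All coefficient values are hypothetical, not the SPICE-determined ones.
import numpy as np

def flip_probability(B, a=1e9, k=5e6, sigma=0.02, dt=1e-10, steps=200, trials=5000, rng=None):
    """Estimate P(output = 1) for a given field-induced imbalance B (in tesla)."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.zeros(trials)  # differential latch voltage, starting at the metastable point
    for _ in range(steps):
        noise = rng.normal(0.0, np.sqrt(dt), size=trials)
        x += (a * x + k * B) * dt + sigma * noise
    return np.mean(x > 0)

# Error probability vs. field strength: stronger fields flip more reliably
for B in [0.0, 0.5e-3, 1e-3, 5e-3]:
    print(f"B = {B:.1e} T -> P(flip to 1) = {flip_probability(B):.3f}")
```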
Abstract:
Confronting the double threat of climate change and water crises, poor communities now face ever more severe challenges in ensuring agricultural productivity and food security. Communities therefore have to manage these challenges by adopting a comprehensive approach that not only enhances water resource management but also adapts agricultural activities to climate variability. Implemented by the Global Environment Facility's Small Grants Programme, the Community Water Initiative (CWI) has adopted a distinctive approach to support demand-driven, innovative, low-cost and community-based water resource management for food security. Experience from CWI showed that a comprehensive, locally adapted approach that integrates water resource management, poverty reduction, climate adaptation and community empowerment provides a good model for sustainable development in poor rural areas.