864 results for Practice-based Approach
Abstract:
Internship report submitted to the Faculté des sciences infirmières in partial fulfilment of the requirements for the degree of Master of Science (M.Sc.) in nursing, nursing education option
Abstract:
Short-term load forecasting is one of the key inputs for optimizing the management of a power system. Almost 60-65% of a distribution company's revenue expenditure goes toward power purchase, and the cost of power depends on its source. Hence any optimization strategy involves optimizing the scheduling of power from the various sources. As scheduling involves many technical and commercial considerations and constraints, its efficiency depends on the accuracy of the load forecast. Load forecasting is a much-visited topic in the research world, and a number of papers using different techniques have already been presented. The accuracy of the forecast for merit-order dispatch decisions depends on the extent of the permissible variation in generation limits. For a system with a low load factor, the peak and the off-peak trough are prominent, and the forecast should identify these points accurately rather than merely minimizing the error in the energy content. In this paper an attempt is made to apply an Artificial Neural Network (ANN) with a supervised-learning-based approach to short-term load forecasting for a power system with a comparatively low load factor. Such power systems are common in tropical areas with a concentrated rainy season lasting a considerable part of the year.
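For illustration only, the following minimal sketch shows how a supervised feed-forward ANN could be trained on hourly load history to produce a short-term forecast. It is not the paper's implementation; the synthetic data, the feature layout (previous 24 hourly loads plus hour of day), and the network size are all assumptions.

```python
# Minimal, illustrative sketch of supervised short-term load forecasting with an ANN.
# Data, feature layout, and network size are hypothetical, not taken from the paper.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical hourly load history: 90 days x 24 hours (MW), with noise.
load = 300 + 80 * np.sin(np.linspace(0, 90 * 2 * np.pi, 90 * 24)) + rng.normal(0, 10, 90 * 24)

# Features: previous 24 hourly loads plus the hour of day; target: the next hour's load.
X = np.array([np.append(load[t - 24:t], t % 24) for t in range(24, len(load))])
y = load[24:]

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# Small feed-forward network (one hidden layer) trained by supervised learning.
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X_scaled[:-168], y[:-168])          # hold out the last week for testing

# Forecast the held-out week and report the mean absolute percentage error.
pred = model.predict(X_scaled[-168:])
print("MAPE over test week: %.2f%%" % (100 * np.mean(np.abs(pred - y[-168:]) / y[-168:])))
```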
Abstract:
This paper gives a brief overview of the development of international provisions on IPR related to public health. It discusses the flexibilities available before and after the TRIPS Agreement and the difficulties faced by developing countries in implementing TRIPS obligations while protecting public health. Also discussed are the reasons for the Doha Declaration and the issues relating to the implementation of Para 6 of the Declaration. The paper discusses the inadequacy of the compulsory-licence-based approach to solving public health crises and argues for a more comprehensive approach to finding a long-term solution to public health issues.
Abstract:
In this thesis, non-overlapping domain decomposition methods are, on the one hand, generalized with respect to the problem classes to be solved and, on the other hand, considered in contexts not previously investigated. The focus is on functional-analytic investigations of well-definedness, unique solvability, and convergence. The first part treats linear elliptic Dirichlet boundary value problems, where in addition to problems with a dominant principal part, problems with a singular perturbation of the principal part, such as convection- or reaction-dominated problems, are admitted. The second part deals with (uniformly) monotone coercive quasilinear elliptic Dirichlet boundary value problems. In both cases the Lipschitz domain is decomposed into finitely many Lipschitz subdomains, where in particular cross points and subdomains without exterior boundary are admitted. Transmission problems with freely selectable $L^{\infty}$ parameter functions are then derived, with the conormal derivatives interpreted as functionals on suitable function spaces over the subdomain boundaries ($H_{00}^{1/2}(\Gamma)$). The iterative solution of these transmission problems with an ansatz due to Deng leads to a substructuring method with Robin-type transmission conditions in which, thanks to a suitable update of the Robin data, an evaluation of the conormal derivatives is not necessary (in particular, the well-known Robin-Robin method of Lions is contained as a special case). Convergence with respect to a partitioned $H^1$ norm is shown for both problem classes. No regularity requirements beyond $H^1$ are imposed on the solutions, and the domains need not satisfy any additional smoothness assumptions. The last chapter investigates non-monotone coercive quasilinear problems, where the underlying domain is assumed to be decomposed into only two Lipschitz subdomains. The associated nonlinear transmission problem is transformed by the Kirchhoff transformation into linear subproblems with nonlinear coupling conditions. An optimization-based solution approach, which minimizes a suitable distance between the back-transformed Dirichlet data of the linear subproblems on the subdomain boundaries, leads to an optimal control problem. The resulting regularized unconstrained minimization problems are solved by a gradient method under minimal smoothness requirements on the nonlinearities. Under additional smoothness assumptions on the nonlinearities and further technical assumptions on the solution of the original quasilinear problem, quadratic convergence of Newton's method can also be guaranteed.
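For orientation, the classical Robin-Robin iteration of Lions, which the abstract names as a special case of the substructuring method, can be sketched for the model problem $-\Delta u = f$ on a two-subdomain decomposition $\Omega = \Omega_1 \cup \Omega_2$ with interface $\Gamma$ and relaxation parameter $\lambda > 0$; this is a standard textbook formulation, not the generalized method of the thesis.

```latex
% Classical Robin-Robin iteration (Lions) for -\Delta u = f; shown only as the
% special case referred to above, not the thesis' generalized formulation.
\begin{aligned}
-\Delta u_1^{k+1} &= f \quad \text{in } \Omega_1, &
u_1^{k+1} &= 0 \quad \text{on } \partial\Omega_1 \setminus \Gamma,\\
\partial_{n_1} u_1^{k+1} + \lambda\, u_1^{k+1}
 &= -\partial_{n_2} u_2^{k} + \lambda\, u_2^{k} \quad \text{on } \Gamma, \\[4pt]
-\Delta u_2^{k+1} &= f \quad \text{in } \Omega_2, &
u_2^{k+1} &= 0 \quad \text{on } \partial\Omega_2 \setminus \Gamma,\\
\partial_{n_2} u_2^{k+1} + \lambda\, u_2^{k+1}
 &= -\partial_{n_1} u_1^{k+1} + \lambda\, u_1^{k+1} \quad \text{on } \Gamma.
\end{aligned}
```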
Abstract:
In the field of structural dynamics, computer-aided model validation techniques are now widely used. Experimental modal data are used to correct a numerical model for further analyses. Nevertheless, the validated model represents only the dynamic behavior of the tested structure. In reality, many factors inevitably lead to varying results between modal tests: changing environmental conditions during a test, slightly different test setups, a test on a nominally identical but different structure (e.g., from series production), etc. In order to perform a stochastic simulation, a series of assumptions must be made for the random variables used. Consequently, an inverse method is needed that makes it possible to identify a stochastic model from experimental modal data. This thesis describes the development of a parameter-based approach for identifying stochastic simulation models in the field of structural dynamics. The developed method relies on first-order sensitivities, from which parameter means and covariances of the numerical model can be determined from stochastic experimental modal data.
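A hedged sketch of the first-order relation such an approach rests on (standard linearized propagation between parameter statistics and modal-data statistics; the thesis' exact formulation may differ): with modal outputs $\lambda$, model parameters $\theta$, and sensitivity matrix $S$ evaluated at the nominal parameters $\theta_0$,

```latex
% Standard first-order propagation between parameter and modal-data statistics
% (illustrative; not necessarily the exact formulation used in the thesis).
\lambda \approx \lambda_0 + S\,(\theta - \theta_0), \qquad
S = \left.\frac{\partial \lambda}{\partial \theta}\right|_{\theta_0},
\qquad
\bar{\lambda}_{\mathrm{exp}} \approx \lambda_0 + S\,(\bar{\theta} - \theta_0), \qquad
\Sigma_{\lambda,\mathrm{exp}} \approx S\,\Sigma_{\theta}\,S^{\mathsf T}
```

Inverting these relations (e.g., via a pseudo-inverse of $S$) yields estimates of the parameter mean $\bar{\theta}$ and covariance $\Sigma_{\theta}$ from the experimental modal statistics.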
Abstract:
• Aim: The present study aimed to evaluate the effect of trainees' interpersonal behavior on work involvement (WI) and to compare their social behavior within professional and private relationships as well as between different psychotherapeutic orientations.
• Methods: The interpersonal scales of the Intrex short-form questionnaire and the Work Involvement Scale (WIS) were used to evaluate two samples of German psychotherapy trainees in psychoanalytic, psychodynamic, and cognitive behavioral therapy training. Trainees from Sample 1 (N = 184) were asked to describe their interpersonal behavior in relation to their patients when filling out the Intrex, whereas trainees from Sample 2 (N = 135) were asked to describe the private relationship with a significant other.
• Results: Interpersonal affiliation in professional relationships significantly predicted the level of healing involvement, while stress involvement was predicted by interpersonal affiliation and interdependence in trainees' relationships with their patients. Social behavior within professional relationships showed higher correlations with WI than private interpersonal behavior. Significant differences were found between private and professional relationship settings in trainees' interpersonal behavior, with higher levels of affiliation and interdependence with significant others. Differences between therapeutic orientation and social behavior were found only when comparing trainees' level of interdependence across the two relationship settings.
• Conclusion: Trainees' interpersonal level of affiliation in professional relationships is a predictor of successful psychotherapeutic development. Conversely, controlling behavior in professional settings can be understood as a risk factor working against psychotherapeutic growth. Both results strengthen an evidence-based approach to competence development during psychotherapy training.
Abstract:
As exploration of our solar system and outer space moves into the future, spacecraft are being developed to venture on increasingly challenging missions with bold objectives. The spacecraft tasked with completing these missions are becoming progressively more complex, which increases the potential for mission failure due to hardware malfunctions and unexpected spacecraft behavior. A solution to this problem lies in the development of an advanced fault management system. Fault management enables a spacecraft to respond to failures and take repair actions so that it may continue its mission. The two main approaches developed for spacecraft fault management have been rule-based and model-based systems. Rules map sensor information to system behaviors, thus achieving fast response times and making the actions of the fault management system explicit. These rules are developed by having a human reason through the interactions between spacecraft components, a process limited by the number of interactions a human can reason about correctly. In the model-based approach, the human provides component models, and the fault management system reasons automatically about system-wide interactions and complex fault combinations. This approach improves correctness and makes the underlying system models explicit, whereas they remain implicit in the rule-based approach. We propose a fault detection engine, Compiled Mode Estimation (CME), that unifies the strengths of the rule-based and model-based approaches. CME uses a compiled model to determine spacecraft behavior more accurately. Reasoning related to fault detection is compiled in an off-line process into a set of concurrent, localized diagnostic rules. These are then combined on-line with sensor information to reconstruct the diagnosis of the system. These rules enable a human to inspect the diagnostic consequences of CME. Additionally, CME is capable of reasoning through component interactions automatically while still providing fast and correct responses. The implementation of this engine has been tested against the NEAR spacecraft's advanced rule-based system, detecting failures beyond those caught by the rules. This evolution in fault detection will enable future missions to explore the furthest reaches of the solar system without the burden of human intervention to repair failed components.
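Purely to illustrate the flavor of concurrent, localized diagnostic rules combined with sensor information, here is a toy sketch; the component names, rules, and thresholds are hypothetical and this is not the CME engine or the NEAR rule base.

```python
# Illustrative sketch: localized diagnostic rules combined into a system-wide diagnosis.
# Component names, rules, and thresholds are hypothetical; this is not the CME engine.
from typing import Callable, Dict, List

# Each "compiled" rule looks only at local sensor values and proposes component modes.
Rule = Callable[[Dict[str, float]], Dict[str, str]]

def thruster_rule(sensors: Dict[str, float]) -> Dict[str, str]:
    # Commanded thrust with no measured acceleration suggests a stuck valve.
    if sensors["thrust_cmd"] > 0.0 and sensors["accel"] < 0.01:
        return {"thruster": "valve_stuck_closed"}
    return {"thruster": "nominal"}

def power_bus_rule(sensors: Dict[str, float]) -> Dict[str, str]:
    # Bus voltage below threshold indicates an undervoltage fault.
    if sensors["bus_voltage"] < 24.0:
        return {"power_bus": "undervoltage"}
    return {"power_bus": "nominal"}

RULES: List[Rule] = [thruster_rule, power_bus_rule]

def diagnose(sensors: Dict[str, float]) -> Dict[str, str]:
    """Combine the local rule conclusions into a system-wide mode estimate."""
    diagnosis: Dict[str, str] = {}
    for rule in RULES:
        diagnosis.update(rule(sensors))
    return diagnosis

print(diagnose({"thrust_cmd": 1.0, "accel": 0.0, "bus_voltage": 27.5}))
# -> {'thruster': 'valve_stuck_closed', 'power_bus': 'nominal'}
```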
Abstract:
In this report, a face recognition system capable of detecting and recognizing frontal and rotated faces was developed. Two face recognition methods focusing on pose invariance are presented and evaluated: the whole-face approach and the component-based approach. The main challenge of this project is to develop a system that is able to identify faces under different viewing angles in real time. The development of such a system will enhance the capability and robustness of current face recognition technology. The whole-face approach recognizes faces by classifying a single feature vector consisting of the gray values of the whole face image. The component-based approach first locates the facial components and extracts them; these components are normalized and combined into a single feature vector for classification. The Support Vector Machine (SVM) is used as the classifier for both approaches. Extensive tests with respect to robustness against pose changes are performed on a database that includes faces rotated up to about 40 degrees in depth. The component-based approach clearly outperforms the whole-face approach on all tests. Although this approach is proven to be more reliable, it is still too slow for real-time applications. For that reason, a real-time face recognition system using the whole-face approach is implemented to recognize people in color video sequences.
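As a minimal sketch of the whole-face variant only: the gray values of a face crop form a single feature vector classified by an SVM. The data below are random placeholders rather than a face database, and the kernel and image size are assumptions.

```python
# Minimal sketch of the whole-face approach: one SVM over raw gray-value feature vectors.
# Images here are random placeholders; a real system would use aligned face crops.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

n_people, imgs_per_person, h, w = 5, 20, 40, 40
# Fake gray-level "face images" per person (values in [0, 255]).
images = rng.integers(0, 256, size=(n_people * imgs_per_person, h, w)).astype(float)
labels = np.repeat(np.arange(n_people), imgs_per_person)

# Whole-face feature vector: flattened, normalized gray values.
X = images.reshape(len(images), -1) / 255.0

clf = SVC(kernel="linear", C=1.0)
clf.fit(X[::2], labels[::2])                 # train on every other image
print("accuracy:", clf.score(X[1::2], labels[1::2]))
```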
Abstract:
This paper describes a general, trainable architecture for object detection that has previously been applied to face and people detection, with a new application to car detection in static images. Our technique is a learning-based approach that uses a set of labeled training data from which an implicit model of an object class -- here, cars -- is learned. Instead of pixel representations, which may be noisy and therefore not provide a compact representation for learning, our training images are transformed from pixel space to that of Haar wavelets, which respond to local, oriented, multiscale intensity differences. These feature vectors are then used to train a support vector machine classifier. The detection of cars in images is an important step in applications such as traffic monitoring, driver assistance systems, and surveillance, among others. We show several examples of car detection on out-of-sample images and present an ROC curve that highlights the performance of our system.
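To make the feature step concrete, the sketch below computes a simple Haar-like (two-rectangle) feature via an integral image, responding to a local, oriented intensity difference. The feature layout is generic and illustrative; the paper's actual wavelet dictionary may differ.

```python
# Illustrative computation of a Haar-like (two-rectangle) feature via an integral image.
# The feature layout is generic; the paper's actual wavelet dictionary may differ.
import numpy as np

def integral_image(img: np.ndarray) -> np.ndarray:
    """Cumulative sums so that rectangle sums can be read off in O(1)."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii: np.ndarray, r0: int, c0: int, r1: int, c1: int) -> float:
    """Sum of img[r0:r1, c0:c1] from the integral image (exclusive upper bounds)."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return float(total)

def vertical_edge_feature(img: np.ndarray) -> float:
    """Horizontal intensity difference: left half minus right half of the patch."""
    ii = integral_image(img)
    h, w = img.shape
    return rect_sum(ii, 0, 0, h, w // 2) - rect_sum(ii, 0, w // 2, h, w)

patch = np.ones((16, 16))
patch[:, 8:] = 0.0                      # bright left half, dark right half
print(vertical_edge_feature(patch))     # strong positive response: 128.0
```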
Abstract:
In this paper we present a component-based person detection system that is capable of detecting frontal, rear and near-side views of people, as well as partially occluded persons, in cluttered scenes. The framework described here for people is easily applied to other objects as well. The motivation for developing a component-based approach is twofold: first, to enhance the performance of person detection systems on frontal and rear views of people and, second, to develop a framework that directly addresses the problem of detecting people who are partially occluded or whose body parts blend in with the background. The data classification is handled by several support vector machine classifiers arranged in two layers, an architecture known as Adaptive Combination of Classifiers (ACC). The system performs very well and is capable of detecting people even when not all components of a person are found. Its performance is significantly better than that of a full-body person detector designed along similar lines, which suggests that the improvement is due to the component-based approach and the ACC data classification structure.
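A rough sketch of the two-layer arrangement follows: first-layer SVMs score individual body components and a second-layer SVM combines those scores. The component names, random features, and data split are placeholders, not the paper's detectors or training set.

```python
# Sketch of a two-layer Adaptive Combination of Classifiers (ACC):
# first-layer SVMs score body components, a second-layer SVM combines the scores.
# Features are random placeholders, not real head/arm/leg detectors.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n, dim = 400, 50
components = ["head", "left_arm", "right_arm", "legs"]

y = rng.integers(0, 2, n)                         # 1 = person, 0 = background
# Fake per-component feature vectors, weakly correlated with the label.
feats = {c: rng.normal(0, 1, (n, dim)) + y[:, None] * 0.5 for c in components}

# Layer 1: one SVM per component, each producing a real-valued score.
layer1 = {c: SVC(kernel="linear").fit(feats[c][:300], y[:300]) for c in components}
scores = np.column_stack([layer1[c].decision_function(feats[c]) for c in components])

# Layer 2: combination SVM over the component scores, so a weak or missing
# component score can be outweighed by the others.
layer2 = SVC(kernel="linear").fit(scores[:300], y[:300])
print("combined accuracy:", layer2.score(scores[300:], y[300:]))
```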
Abstract:
We present a type-based approach to statically derive symbolic closed-form formulae that characterize the bounds of heap memory usage of programs written in object-oriented languages. Given a program with size and alias annotations, our inference system computes the amount of memory required by the methods to execute successfully as well as the amount of memory released when methods return. The analysis results obtained are useful for networked devices with limited computational resources as well as for embedded software.
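To convey the flavor of such symbolic bounds, a schematic example (the notation is illustrative only, not the paper's type system): a method that copies a list of length $n$ might be summarized by a required-memory formula linear in $n$ and a release formula for what is freed on return.

```latex
% Schematic memory effect of a hypothetical method copyList(xs) with n = |xs|
% (notation illustrative only, not the paper's type system):
\mathsf{require}\big(\texttt{copyList}(xs)\big) \;=\; n \cdot \mathit{size}(\mathtt{Node}) + c_0,
\qquad
\mathsf{release}\big(\texttt{copyList}(xs)\big) \;=\; 0,
\qquad c_0 \geq 0
```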
Abstract:
Considering the difficulty of insulin dosage selection and the problem of hyper- and hypoglycaemia episodes in type 1 diabetes, dosage-aid systems appear tremendously helpful for these patients. A model-based approach to this problem must unavoidably consider uncertainty sources such as large intra-patient variability and food intake. This work addresses the prediction of glycaemia for a given insulin therapy in the face of parametric and input uncertainty, by means of modal interval analysis. As a result, a band containing all possible glucose excursions suffered by the patient under the given uncertainty is obtained. From it, a safer prediction of possible hyper- and hypoglycaemia episodes can be calculated.
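A toy sketch of the idea of turning parameter intervals into an envelope of glucose excursions is given below. The one-compartment decay model, parameter intervals, and monotonicity-based corner evaluation are illustrative assumptions only; this is not the paper's patient model nor genuine modal interval arithmetic.

```python
# Toy sketch: propagating parametric uncertainty as intervals through a simple
# glucose decay model to obtain an envelope (band) of possible excursions.
# The model and numbers are illustrative; not the paper's modal interval analysis.
import numpy as np

def glucose_band(g0, k_interval, meal_interval, hours=6.0, dt=0.1):
    """Return time grid and [lower, upper] glucose envelope (mg/dL)."""
    t = np.arange(0.0, hours, dt)
    lo = np.full_like(t, np.inf)
    hi = np.full_like(t, -np.inf)
    # The model is monotone in each uncertain parameter, so evaluating the
    # interval endpoints (corners) is enough to bound all trajectories.
    for k in k_interval:
        for meal in meal_interval:
            g = g0 * np.exp(-k * t) + meal * (1 - np.exp(-k * t))
            lo = np.minimum(lo, g)
            hi = np.maximum(hi, g)
    return t, lo, hi

t, lo, hi = glucose_band(g0=180.0, k_interval=(0.2, 0.5), meal_interval=(70.0, 110.0))
print("band at t=2h: [%.1f, %.1f] mg/dL" % (lo[int(2 / 0.1)], hi[int(2 / 0.1)]))
```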
Abstract:
The scientific community has been suffering from peer review for decades. This process (also called refereeing) subjects an author's scientific work or ideas to the scrutiny of one or more experts in the field. Publishers use it to select and screen manuscript submissions, and funding agencies use it to award research funds. The goal is to get authors to meet their discipline's standards and thus achieve scientific objectivity. Publications and awards that haven't undergone peer review are often regarded with suspicion by scholars and professionals in many fields. However, peer review, although universally used, has many drawbacks. We propose replacing peer review with an auction-based approach: the better the submitted paper, the more scientific currency the author is likely to bid to have it published. If the bid correctly reflects the paper's quality, the author is rewarded in this new scientific currency; otherwise, the author loses this currency. We argue that citations are an appropriate currency for all scientists. We believe that citation auctions encourage scientists to better control the quality of their submissions. They also inspire them to prepare more exciting talks for accepted papers and to invite discussion of their results at congresses and conferences and among their colleagues. In the long run, citation auctions could have the power to greatly improve scientific research.
Abstract:
The UK Professional Standards Framework (UK PSF) for teaching and supporting learning, launched in February 2006, is a flexible framework which uses a descriptor-based approach to professional standards. There are three standard descriptors each of which is applicable to a number of staff roles and to different career stages of those engaged in teaching and supporting learning. The standard descriptors are underpinned by areas of professional activity, core knowledge and professional values. The framework provides a reference point for institutions and individuals as well as supporting ongoing development within any one standard descriptor.
Abstract:
The main objective of this work is to carry out a theoretical review of studies that have analyzed the relationship between Emotional Intelligence and the ability to cope with stress-generating situations. The various studies show that high levels of Emotional Intelligence are related to coping strategies based on analysis and conflict resolution, whereas low levels of emotional intelligence are related to coping strategies based on avoidance, superstition, and resistance to change. The evidence from these studies indicates that emotional intelligence is fundamental to emotional self-control and to individuals' capacity to adapt when facing stress-generating situations.