861 results for Pedagogical principles and music
Abstract:
The present thesis investigates work-family conflict and facilitation in a health care context, using the DISC Model (De Jonge and Dormann, 2003, 2006). The general aim is articulated in the two empirical studies reported in this dissertation's chapters. Chapter 1 reports the psychometric properties of the Demand-Induced Strain Compensation Questionnaire. Although the DISC Model has received a fair amount of attention in the literature, both for its theoretical principles and for the instrument developed to operationalize them (the DISQ; De Jonge, Dormann, Van Vegchel, Von Nordheim, Dollard, Cotton and Van den Tooren, 2007), there are no studies devoted solely to a psychometric investigation of the instrument. In addition, no previous study has used the DISC as a model or measurement instrument in an Italian context. The first chapter of the present dissertation was therefore based on a psychometric investigation of the DISQ. Chapter 2 reports a longitudinal study. Its purpose was to examine, using the DISC Model, the relationship between emotional job characteristics, the work-family interface and emotional exhaustion in a health care population. We started by testing the Triple Match Principle of the DISC Model using only the emotional dimension of the stress-strain process (i.e. emotional demands, emotional resources and emotional exhaustion). We then investigated the mediating role played by work-family conflict and work-family facilitation in the relation between emotional job characteristics and emotional exhaustion. Next, we compared the mediation model between workers who face chronic-illness home demands and workers who do not. Finally, a general conclusion integrates and discusses the main findings of the studies reported in this dissertation.
Abstract:
The dissertation examines intellectual production in the education system, taking popular music as the subject of instruction. It is thus situated in the core area of the discipline of music pedagogy: music and school. Moreover, the focus on popular music foregrounds the pupil and his or her everyday knowledge. The question of how popular music is handled is therefore, indirectly, a question of how experiential worlds close to pupils are handled in school. Within this research profile the study acquires its actual relevance: it shows how a modern, self-referential music pedagogy can observe its own significant communications. Drawing on Niklas Luhmann's systems theory, the study lays open the unique contexts of reflection of pedagogy and music pedagogy in the following fields of operation: pedagogical and music-pedagogical specialist literature, curricula, and schoolbooks. Following Luhmann, it is necessary to penetrate interpretively into the uniqueness of systemic reflection in order to uncover inconsistent demands placed on the task of (music) education and its subject matter, and to optimize future actions of the system. The study is organized into three large historical periods, each subdivided into different disciplinary fields of operation. By means of this two-dimensional, historical-interdisciplinary perspective, popular music is shown to be a reference point at which the central debates of pedagogy and music pedagogy condense. Key concepts such as culture, society and aesthetics, but also didactic principles such as pupil and action orientation or holistic (music) pedagogy, document the variety of historically grown inconsistent and consistent demands. The observations on the handling of popular music make clear the tasks that the disciplines, above all music pedagogy, will have to address in the future.
These concern, on the one hand, the disciplines' self-understanding and, on the other, unanswered didactic questions such as the possibilities and limits of the individual popular piece of music in the concrete, situated learning context of music lessons.
Abstract:
Mainstream hardware is becoming parallel, heterogeneous, and distributed on every desk, in every home and in every pocket. As a consequence, in recent years software has been taking an epochal turn toward concurrency, distribution and interaction, pushed by the evolution of hardware architectures and growing network availability. This calls for introducing further abstraction layers on top of those provided by classical mainstream programming paradigms, to tackle more effectively the new complexities that developers face in everyday programming. A convergence is recognizable in the mainstream toward the adoption of the actor paradigm as a means to unite object-oriented programming and concurrency. Nevertheless, we argue that the actor paradigm can only be considered a good starting point for a more comprehensive response to such a fundamental and radical change in software development. Accordingly, the main objective of this thesis is to propose Agent-Oriented Programming (AOP) as a high-level general-purpose programming paradigm, a natural evolution of actors and objects, introducing a further level of human-inspired concepts for programming software systems, meant to simplify the design and programming of concurrent, distributed, reactive/interactive programs. To this end, the dissertation first constructs the required background by studying the state of the art of both actor-oriented and agent-oriented programming, and then focuses on the engineering of integrated programming technologies for developing agent-based systems in their classical application domains: artificial intelligence and distributed artificial intelligence. Then, the perspective shifts from the development of intelligent software systems toward general-purpose software development.
Using the expertise matured during the background phase, we introduce a general-purpose programming language named simpAL, which is rooted in general principles and practices of software development and, at the same time, provides an agent-oriented level of abstraction for the engineering of general-purpose software systems.
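To make the actor paradigm discussed above concrete, here is a minimal, hypothetical Python sketch (it is not simpAL code, and the names are illustrative): each actor encapsulates private state and a mailbox, processes one message at a time on its own thread, and communicates only by asynchronous message passing, so no locks are needed.

```python
import queue
import threading

class CounterActor:
    """Toy actor: private state plus a mailbox, one message at a time."""
    def __init__(self):
        self._mailbox = queue.Queue()
        self._count = 0  # private state, touched only by the actor's own thread
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg):
        self._mailbox.put(msg)  # asynchronous: the sender never waits for processing

    def _run(self):
        while True:
            msg = self._mailbox.get()
            if msg == "inc":
                self._count += 1
            elif isinstance(msg, tuple) and msg[0] == "get":
                msg[1].put(self._count)  # reply on the channel the asker provided

actor = CounterActor()
for _ in range(10):
    actor.send("inc")
reply = queue.Queue()
actor.send(("get", reply))
result = reply.get()  # FIFO mailbox: all "inc" messages are processed first
```

Because the mailbox is a FIFO queue drained by a single thread, the state is never accessed concurrently; this is the property that makes actors attractive as a concurrency abstraction over plain objects.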
Abstract:
This thesis undertakes an investigation, from a historical-educational point of view, of the history of houses of correction. Specifically, it analyses the "case" of the Bologna Discolato, comparing it with similar institutions established in two sample cities, Rome and Milan, for the Italian context, and with the English situation for the international context. The research focuses on the relationship between deviance and confinement, considered from a pedagogical perspective. To this end, it seeks to bring out the educational methods, as well as the principles and aims underlying institutional interventions, not only dwelling on what regulations and statutes prescribed within correctional facilities, but also analysing confinement in its actual daily organization. For the Bolognese situation, an extensive body of documentation held at the Provincial Historical Archive of Bologna was analysed, permitting a quantitative analysis of a total of 1,105 individuals in order to outline the main demographic and social characteristics of the people confined in the Bologna Discolato. The interpretation of the sources also permitted a qualitative investigation, attempting to reconstruct the life stories of individual inmates. The research reveals a complex picture of houses of correction. Over the centuries, and particularly in the different settings examined, they took on different characteristics and peculiarities. Their ineffectiveness was the main reason for their definitive closure, as it became increasingly evident how difficult it was to put into practice what regulations and statutes prescribed in theory.
Abstract:
This thesis aims to give a general view of pavement types around the world, showing the characteristics of each type and its life stages, from construction through maintenance to the recycling phase. Flexible pavement takes up the main part of this work, because it was used in the final part of the thesis to design a rural road project. The project is located in the province of Bologna, Italy ("Comune di Argelato", 26 km north of Bologna), and is 5,677.81 m long. The pavement was designed using the program BISAR 3.0, and a fatigue life study was carried out to estimate the number of loads (in terms of heavy-vehicle axles) needed to cause road failure. An alignment design was produced for the project, and a safety study was carried out to check whether the available sight distance at curves meets the safety norms, by comparing it to the stopping sight distance. Different technical sheets are presented and several cases are discussed to clarify the main design principles and to underline the main hazardous situations to be avoided, especially at intersections. The choice of intersection type depends on several factors, so that the design suits the environmental data. In this part of the road, safety is paramount because of the high accident rate in the zone. For this reason, different safety aspects are discussed, especially at roundabouts, signalized intersections and other common intersection types. The design and safety norms are taken from AASHTO (American Association of State Highway and Transportation Officials), TAC (Transportation Association of Canada) and the Italian norms (Decreto Ministeriale delle Strade).
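The sight-distance check described above can be sketched as follows. This assumes the standard AASHTO Green Book model with its default parameters (2.5 s perception-reaction time, 3.4 m/s² driver deceleration); the function names are illustrative, not taken from the thesis.

```python
def stopping_sight_distance(v_kmh, t_reaction=2.5, decel=3.4):
    """AASHTO stopping sight distance [m].

    SSD = reaction distance + braking distance
        = 0.278 * V * t  +  0.039 * V**2 / a
    V: design speed [km/h], t: perception-reaction time [s],
    a: deceleration [m/s^2].
    """
    reaction = 0.278 * v_kmh * t_reaction
    braking = 0.039 * v_kmh ** 2 / decel
    return reaction + braking

def curve_is_safe(available_sight_m, v_kmh):
    """A curve passes the check when the available sight distance >= SSD."""
    return available_sight_m >= stopping_sight_distance(v_kmh)

ssd = stopping_sight_distance(100)  # roughly 184 m at a 100 km/h design speed
```

At 100 km/h the model gives about 184 m, so a curve offering only 150 m of sight distance would fail the check while one offering 200 m would pass.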
Abstract:
This thesis covers activities carried out at the Laser Center of the Polytechnic University of Madrid and in the laboratories of the University of Bologna in Forlì. It focuses on a surface mechanical treatment for metallic materials called Laser Shock Peening (LSP). This process is a surface enhancement treatment that induces a significant layer of beneficial compressive residual stresses beneath the surface of metal components, in order to mitigate the detrimental effects of crack growth. The innovative aspect of this work is the application of LSP to specimens of extremely low thickness. After a bibliographic study and a comparison with the main treatments used for the same purposes, the work analyzes the physics of laser operation, the laser's interaction with the surface of the material, and the generation of the surface residual stresses that are fundamental to obtaining the benefits of LSP. In particular, this thesis concerns the application of the treatment to Al2024-T351 specimens of low thickness. Among the improvements that can be obtained with this operation, the most important in the aeronautical field is the increased fatigue life of the treated components. As demonstrated in this work, a well-executed LSP treatment can slow the growth of defects in the material that could otherwise lead to sudden failure of the structure. Part of this thesis is devoted to simulating this phenomenon with the program AFGROW, with which different geometric configurations of the treatment were analyzed to determine which was better for large panels of typical aeronautical interest. At the core of the LSP process are the residual stresses induced in the material by the interaction with the laser light; these can be simulated with finite elements, but it is essential to verify and measure them experimentally.
The thesis introduces the main methods for the detection of those stresses, which are either mechanical or diffraction-based. In particular, the principles and the detailed procedure of the hole-drilling measurement are described, together with an introduction to X-ray diffraction; the results obtained with both techniques are then presented. In addition to these two measurement techniques, the neutron diffraction method is also introduced. The last part covers the experimental fatigue-life tests of the specimens, with a detailed description of the apparatus and of the procedure used, from the initial specimen preparation to the fatigue test with the press. The results obtained are then presented and discussed.
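As an illustration of the diffraction route mentioned above, a common evaluation is the sin²ψ method: the lattice strain measured at several specimen tilt angles ψ varies linearly with sin²ψ, and the slope of that line, scaled by E/(1+ν), gives the in-plane residual stress. The sketch below uses invented d-spacing values and rough elastic constants for Al2024; it is not data from the thesis.

```python
import math

def residual_stress_sin2psi(psi_deg, d_spacing, d0, E, nu):
    """In-plane residual stress from the slope of strain vs sin^2(psi).

    psi_deg: tilt angles [deg]; d_spacing: measured lattice spacings;
    d0: stress-free spacing; E, nu: elastic constants (E in MPa -> stress in MPa).
    """
    x = [math.sin(math.radians(p)) ** 2 for p in psi_deg]
    y = [(d - d0) / d0 for d in d_spacing]  # lattice strain at each tilt
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))  # least-squares fit
    return slope * E / (1 + nu)

# Illustrative (assumed) numbers: Al2024 with E ~ 73 GPa, nu ~ 0.33.
psi = [0, 15, 30, 45]
d = [1.22050, 1.22030, 1.21990, 1.21940]  # hypothetical d-spacings [angstrom]
stress = residual_stress_sin2psi(psi, d, d0=1.22050, E=73_000, nu=0.33)
# A negative result indicates compression, which is what LSP aims to induce.
```

With these invented spacings, which shrink as the tilt increases, the fitted slope is negative and the method reports a compressive stress on the order of -100 MPa.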
Abstract:
Cities are key locations where sustainability needs to be addressed at all levels, as land is a finite resource. However, not all urban spaces are exploited at their best, and land developers often regard unused, misused or poorly designed urban portions as impracticable constraints. Further, public authorities fail to meet the challenge of enabling and turning these urban spaces into valuable opportunities where Sustainable Urban Development (SUD) may flourish. Arguing that these spatial elements are at the centre of SUD, the paper elaborates a prototype in the form of a conceptual strategic planning framework, committed to an effective recycling of city spaces using a flexible and multidisciplinary approach. Firstly, the research undertakes a broad review of the sustainability literature, highlighting established principles and guidelines and building a sound theoretical base for the new concept; it then investigates the origins of urban "R-Spaces" and proposes a definition, characterisation and classification for them. Secondly, formal, informal and temporary fitting functions are analysed and gathered into a portfolio meant to enhance adaptability and enlarge the choices for on-site interventions. Thirdly, the study outlines ideal quality requirements for a sustainable planning process. The findings are then condensed in the proposal, which is articulated in the identification of tools, actors, plans, processes and strategies. Afterwards, the prototype is tested on two case studies: the Solar Community (Casalecchio di Reno, Bologna) and the Hyllie Sustainable City Project, the latter developed via an international workshop (ACSI-Camp, Malmö, Sweden). The qualitative results suggest, inter alia, the need to right-size spatial interventions, separate structural and operative actors, involve synergy multipliers and intermediaries (e.g. entrepreneurial hubs, innovation agencies, cluster organisations), maintain stakeholder diversity and create a circular process open to new participants. Finally, the paper speculates on a transfer of the Swedish case study to Italy and indicates desirable future research to support the implementation of the prototype.
Abstract:
Over the last decade, translational science has come into the focus of academic medicine, and significant intellectual and financial efforts have been made to initiate a multitude of bench-to-bedside projects. The quest for suitable biomarkers that will significantly change clinical practice has become one of the biggest challenges in translational medicine. Quantitative measurement of proteins is a critical step in biomarker discovery. Assessing a large number of potential protein biomarkers in a statistically significant number of samples and controls still constitutes a major technical hurdle. Multiplexed analysis offers significant advantages regarding time, reagent cost, sample requirements and the amount of data that can be generated. The two contemporary approaches in multiplexed and quantitative biomarker validation, antibody-based immunoassays and MS-based multiple (or selected) reaction monitoring, are based on different assay principles and instrument requirements. Both approaches have their own advantages and disadvantages and therefore have complementary roles in the multi-staged biomarker verification and validation process. In this review, we discuss quantitative immunoassay and multiple reaction monitoring/selected reaction monitoring assay principles and development. We also discuss choosing an appropriate platform, judging the performance of assays, obtaining reliable, quantitative results for translational research and clinical applications in the biomarker field.
Abstract:
The quality of dental care and modern achievements in dental science depend strongly on understanding the properties of teeth and the basic principles and mechanisms of their interaction with the surrounding media. Erosion is a disorder to which structural features of the tooth, physiological properties of saliva, and extrinsic and intrinsic acidic sources and habits all contribute, and each must be carefully considered. The degree of saturation in the surrounding solution, which is determined by pH and by the calcium and phosphate concentrations, is the driving force for dissolution of dental hard tissue. In relation to caries, with the calcium and phosphate concentrations found in plaque fluid, the 'critical pH' below which enamel dissolves is about 5.5. For erosion, the critical pH is lower in products (e.g. yoghurt) containing more calcium and phosphate than plaque fluid, and higher when the concentrations are lower. Dental erosion starts with an initial softening of the enamel surface, followed by loss of volume with a softened layer persisting at the surface of the remaining tissue. Dentine erosion is not clearly understood, so further in vivo studies, including histopathological aspects, are needed. Clinical reports show that exposure to acids combined with an insufficient salivary flow rate results in enhanced dissolution. The effects of these and other interactions result in a permanent ion/substance exchange and reorganisation within the tooth material or at its interface, thus altering its strength and structure. The rate and severity of erosion are determined by the susceptibility of the dental tissues towards dissolution. Because enamel contains less soluble mineral than dentine, it tends to erode more slowly. The chemical mechanisms of erosion are also summarised in this review. Special attention is given to the microscopic and macroscopic histopathology of erosion.
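The critical-pH reasoning above can be sketched as a simple threshold check. Only the ~5.5 figure for enamel against plaque fluid comes from the text; everything else, including the example pH values and the lowered threshold for a calcium/phosphate-rich product, is an illustrative assumption.

```python
def dissolves_enamel(fluid_ph, critical_ph=5.5):
    """Enamel tends to dissolve when the fluid pH falls below the critical pH.

    The critical pH itself shifts with the fluid's calcium and phosphate
    content: pass a lower value for calcium/phosphate-rich products
    (e.g. yoghurt) and a higher one for products poorer in those ions
    than plaque fluid.
    """
    return fluid_ph < critical_ph

# Illustrative (assumed) pH values:
cola_risk = dissolves_enamel(2.5)                      # very acidic: dissolution expected
saliva_risk = dissolves_enamel(7.0)                    # neutral: no dissolution
yoghurt_risk = dissolves_enamel(4.2, critical_ph=3.8)  # Ca/P-rich: lower threshold protects
```

The yoghurt case shows why the critical pH is not a fixed constant: an acidic but calcium/phosphate-rich product can sit below 5.5 yet remain above its own, lower, critical pH.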
Abstract:
The theory of ecological speciation suggests that assortative mating evolves most easily when mating preferences are directly linked to ecological traits that are subject to divergent selection. Sensory adaptation can play a major role in this process, because selective mating is often mediated by sexual signals: bright colours, complex song, pheromone blends and so on. When divergent sensory adaptation affects the perception of such signals, mating patterns may change as an immediate consequence. Alternatively, mating preferences can diverge as a result of indirect effects: assortative mating may be promoted by selection against intermediate phenotypes that are maladapted to their (sensory) environment. For Lake Victoria cichlids, the visual environment constitutes an important selective force that is heterogeneous across geographical and water depth gradients. We investigate the direct and indirect effects of this heterogeneity on the evolution of female preferences for alternative male nuptial colours (red and blue) in the genus Pundamilia. Here, we review the current evidence for divergent sensory drive in this system, extract general principles, and discuss future perspectives.
Abstract:
The authors examined the effects of age, musical experience, and characteristics of musical stimuli on a melodic short-term memory task in which participants had to recognize whether a tune was an exact transposition of another tune recently presented. Participants were musicians and nonmusicians between ages 18 and 30 or 60 and 80. In 4 experiments, the authors found that age and experience affected different aspects of the task, with experience becoming more influential when interference was provided during the task. Age and experience interacted only weakly, and neither age nor experience influenced the superiority of tonal over atonal materials. Recognition memory for the sequences did not reflect the same pattern of results as the transposition task. The implications of these results for theories of aging, experience, and music cognition are discussed.
Abstract:
In this descriptive study, we examined the influences and experiences motivating students to enter college-level music schools as reported by a population of precollegiate students auditioning (but not yet accepted) to music education degree programs. As a follow-up to a published pilot study, this research was designed to quantify the various experiences respondents had as part of their precollege school and community programs that related to teaching and music. Results indicate a strong connection between respondents’ primary musical background and future teaching interest. The top three influential experiences were related to high school ensemble membership (band, choir, orchestra), and the most influential group of individuals in the decision to become a music educator were high school ensemble directors. Respondents from all four primary background groups (band, choir, orchestra, and general or other) rated private lesson teaching as their second strongest future teaching interest, just behind teaching at the high school level in their primary background. Respondents rated parents as moderately influential on their desire to become a music teacher.
Abstract:
During the past two decades, chiral capillary electrophoresis (CE) emerged as a promising, effective and economic approach for the enantioselective determination of drugs and their metabolites in body fluids, tissues and in vitro preparations. This review discusses the principles and important aspects of CE-based chiral bioassays, provides a survey of the assays developed during the past 10 years and presents an overview of the key achievements encountered in that time period. Applications discussed encompass the pharmacokinetics of drug enantiomers in vivo and in vitro, the elucidation of the stereoselectivity of drug metabolism in vivo and in vitro, and bioanalysis of drug enantiomers of toxicological, forensic and doping interest. Chiral CE was extensively employed for research purposes to investigate the stereoselectivity associated with hydroxylation, dealkylation, carboxylation, sulfoxidation, N-oxidation and ketoreduction of drugs and metabolites. Enantioselective CE played a pivotal role in many biomedical studies, thereby providing new insights into the stereoselective metabolism of drugs in different species which might eventually lead to new strategies for optimization of pharmacotherapy in clinical practice.
Abstract:
Mapping the relevant principles and norms of international law, the paper discusses the scientific evidence and identifies the current legal foundations of climate change mitigation, adaptation and communication in international environmental law, human rights protection and international trade regulation under WTO law. It briefly discusses the evolution and architecture of the relevant multilateral environmental agreements, in particular the UN Framework Convention on Climate Change. It examines the potential role of human rights in identifying pertinent goals and values of mitigation and adaptation, and then turns to principles and rules of international trade regulation and investment protection, which are likely to be of crucial importance should a new multilateral agreement fail to materialize. The economic and legal relevance of rules on tariffs, border tax adjustment, subsidies, services, intellectual property and investment law is discussed in relation to the production, supply and use of energy. Moreover, lessons may be drawn from trade negotiations for the negotiation of future environmental instruments. The paper offers a survey of the main interacting areas of public international law and discusses the intricate interaction of all these components informing climate change mitigation, adaptation and communication in international law, in light of an emerging doctrine of multilayered governance. It seeks to bring greater coherence to what today is highly fragmented and rarely discussed in an overall context. The paper argues that trade regulation will be of critical importance in assessing domestic policies, and that potential trade remedies offer powerful incentives for all nations alike to participate in a multilateral framework defining appropriate goals and principles.
Abstract:
Principles and guidelines are presented to ensure a solid scientific standard of papers dealing with the taxonomy of taxa of Pasteurellaceae Pohl 1981. The classification of the Pasteurellaceae is in principle based on a polyphasic approach. DNA sequencing of certain genes is very important for defining the borders of a taxon. However, the characteristics that are common to all members of the taxon and which might be helpful for separating it from related taxa must also be identified. Descriptions have to be based on as many strains as possible (inclusion of at least five strains is highly desirable), representing different sources with respect to geography and ecology, to allow proper characterization both phenotypically and genotypically and to establish the extent of diversity of the cluster to be named. A genus must be monophyletic based on 16S rRNA gene sequence-based phylogenetic analysis. Only in very rare cases is it acceptable that monophyly cannot be achieved by 16S rRNA gene sequence comparison. Recently, the monophyly of genera has been confirmed by sequence comparison of housekeeping genes. In principle, a new genus should be recognized by a distinct phenotype, and the characters that separate the new genus from its neighbours should be stated clearly. Owing to the overall importance of accurate classification of species, at least two genotypic methods are needed to show coherence and to achieve separation at the species level. The main criterion for the classification of a novel species is that it forms a monophyletic group based on 16S rRNA gene sequence-based phylogenetic analysis. However, some groups might also include closely related species. In these cases, more sensitive tools for genetic recognition of species should be applied, such as DNA-DNA hybridizations. The comparison of housekeeping gene sequences has recently been used for the genotypic definition of species.
To separate species, phenotypic characters allowing their recognition must also be identified, and at least two phenotypic differences from existing species should be identified where possible. We recommend the use of the subspecies category only for subgroups associated with disease or similar biological characteristics. At the subspecies level, the genotypic groups must always be nested within the boundaries of an existing species. Phenotypic cohesion must be documented at the subspecies level, and separation between subspecies and related species, as well as association with a particular disease and host, must be fully documented. An overview of the methods previously used to characterize isolates of the Pasteurellaceae is given. Genotypic and phenotypic methods are treated separately, in relation to tests investigating diversity and cohesion and to the separation of taxa at the level of genus, species and subspecies.