956 results for Program performance


Relevance: 30.00%

Abstract:

This work is the result of a master's research project developed in the graduate program in music at the Music School of UFRN (Federal University of Rio Grande do Norte), under the supervision of Dr. Andre Luiz Muniz de Oliveira. It reflects on conducting gestures and their implications for the objective and subjective elements of performance, in connection with a number of conducting tools conceived in accordance with the historical music tradition. As a tool for gestural analysis we used Harold Farberman's Pattern Cube method, applied to videos of conductors Pierre Boulez and Valery Gergiev, both conducting Igor Stravinsky's Rite of Spring. In the analysis of the videos we observed the technical use of the gestural apparatus rather than the use of musical gestures as grounded in Hatten. The research showed the importance of analytical tools in helping to inform interpretive decisions in conducting.

Relevance: 30.00%

Abstract:

This study examined the intergenerational effects of parental conviction of a substance-related charge on children's academic performance and, conditional on a conviction, whether completion of an adult drug treatment court (DTC) program was associated with improved school performance. State administrative data from North Carolina courts, birth records, and school records were linked for 2005-2012. Math and reading end-of-grade test scores and absenteeism were examined for 5 groups of children, those with parents who: were not convicted on any criminal charge, were convicted on a substance-related charge and not referred by a court to a DTC, were referred to a DTC but did not enroll, enrolled in a DTC but did not complete, and completed a DTC program. Accounting for demographic and socioeconomic factors, the school performance of children whose parents were convicted of a substance-related offense was worse than that of children whose parents were not convicted on any charge. These differences were statistically significant but substantially reduced after controlling for socioeconomic characteristics such as mother's educational attainment. We found no evidence that parent participation in an adult DTC program led to improved school performance of their children. While the children of convicted parents fared worse on average, much, but not all, of this difference was attributable to socioeconomic factors, so parental conviction remained a risk factor for poorer school performance. Even though adult DTCs have been shown to have other benefits, we could detect no intergenerational benefit in the form of improved school performance of participants' children.

Relevance: 30.00%

Abstract:

X-ray computed tomography (CT) is a non-invasive medical imaging technique that generates cross-sectional images by acquiring attenuation-based projection measurements at multiple angles. Since its first introduction in the 1970s, substantial technical improvements have led to the expanding use of CT in clinical examinations. CT has become an indispensable imaging modality for the diagnosis of a wide array of diseases in both pediatric and adult populations [1, 2]. Currently, approximately 272 million CT examinations are performed annually worldwide, with nearly 85 million of these in the United States alone [3]. Although this trend has decelerated in recent years, CT usage is still expected to increase mainly due to advanced technologies such as multi-energy [4], photon counting [5], and cone-beam CT [6].

Despite the significant clinical benefits, concerns have been raised regarding the population-based radiation dose associated with CT examinations [7]. From 1980 to 2006, the effective dose from medical diagnostic procedures rose six-fold, with CT contributing to almost half of the total dose from medical exposure [8]. For each patient, the risk associated with a single CT examination is likely to be minimal. However, the relatively large population-based radiation level has led to enormous efforts among the community to manage and optimize the CT dose.

As promoted by the international campaigns Image Gently and Image Wisely, exposure to CT radiation should be appropriate and safe [9, 10]. It is thus the community's responsibility to optimize the radiation dose used in CT examinations. The key to dose optimization is to determine the minimum radiation dose that achieves the targeted image quality [11]. Based on this principle, dose optimization would benefit significantly from effective metrics for characterizing the radiation dose and image quality of a CT exam. Moreover, if accurate predictions of radiation dose and image quality were possible before the initiation of the exam, it would be feasible to personalize the exam by adjusting the scanning parameters to achieve a desired level of image quality. The purpose of this thesis is to design and validate models that prospectively quantify patient-specific radiation dose and task-based image quality. The dual aim of the study is to implement the theoretical models in clinical practice by developing an organ-based dose monitoring system and an image-based noise-addition software tool for protocol optimization.

More specifically, Chapter 3 aims to develop an organ dose-prediction method for CT examinations of the body under constant tube current conditions. The study effectively modeled anatomical diversity and complexity using a large number of patient models with representative age, size, and gender distributions. The dependence of organ dose coefficients on patient size and scanner model was further evaluated. Distinct from prior work, these studies used the largest number of patient models to date, with a representative range of ages, weight percentiles, and body mass indexes (BMI).

With effective quantification of organ dose under constant tube current conditions, Chapter 4 aims to extend the organ dose prediction system to tube current modulated (TCM) CT examinations. The prediction, applied to chest and abdominopelvic exams, was achieved by combining a convolution-based estimation technique that quantifies the radiation field, a TCM scheme that emulates modulation profiles from major CT vendors, and a library of computational phantoms with representative sizes, ages, and genders. The prospective quantification model was validated by comparing the predicted organ dose with dose estimates from Monte Carlo simulations in which the TCM function was explicitly modeled.
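In outline, a convolution-based organ dose estimate convolves the tube-current profile along the patient axis with a dose-spread kernel (accounting for scatter into neighbouring slices) and averages the resulting dose field over the slices an organ occupies. The sketch below is an illustration of that idea only; the Gaussian kernel shape and the `dose_per_mAs` scaling are assumptions, not the thesis's actual implementation.

```python
import math

def gaussian_kernel(half_width, sigma):
    # Dose-spread kernel: scatter deposits dose in slices adjacent to the beam.
    weights = [math.exp(-0.5 * (i / sigma) ** 2)
               for i in range(-half_width, half_width + 1)]
    total = sum(weights)
    return [w / total for w in weights]  # normalised to unit area

def organ_dose(mA_profile, organ_slices, kernel, dose_per_mAs):
    # Convolve the TCM mA profile with the kernel, then average the
    # resulting dose field over the slices occupied by the organ.
    n, hw = len(mA_profile), len(kernel) // 2
    field = [0.0] * n
    for z in range(n):
        for j, w in enumerate(kernel):
            src = z + j - hw
            if 0 <= src < n:
                field[z] += mA_profile[src] * w
    return dose_per_mAs * sum(field[z] for z in organ_slices) / len(organ_slices)
```

For a constant-current scan this reduces to `dose_per_mAs × mA` for interior organs, while a modulated profile yields organ doses that track the local tube current.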

Chapter 5 aims to implement the organ dose-estimation framework in clinical practice by developing an organ dose-monitoring program based on commercial software (Dose Watch, GE Healthcare, Waukesha, WI). The first phase of the study focused on body CT examinations, in which the patient's major body landmark information was extracted from the patient scout image in order to match clinical patients against a computational phantom in the library. The organ dose coefficients were estimated based on CT protocol and patient size as reported in Chapter 3. The exam CTDIvol, DLP, and TCM profiles were extracted and used to quantify the radiation field using the convolution technique proposed in Chapter 4.

With effective methods to predict and monitor organ dose, Chapter 6 aims to develop and validate improved measurement techniques for image quality assessment. It outlines the method developed to assess and predict quantum noise in clinical body CT images. Compared with previous phantom-based studies, this study accurately assessed the quantum noise in clinical images and further validated the correspondence between phantom-based measurements and the expected clinical image quality as a function of patient size and scanner attributes.

Chapter 7 aims to develop a practical strategy to generate hybrid CT images and assess the impact of dose reduction on diagnostic confidence for the diagnosis of acute pancreatitis. The general strategy is (1) to simulate synthetic CT images at multiple reduced-dose levels from clinical datasets using an image-based noise addition technique; (2) to develop quantitative and observer-based methods to validate the realism of simulated low-dose images; (3) to perform multi-reader observer studies on the low-dose image series to assess the impact of dose reduction on the diagnostic confidence for multiple diagnostic tasks; and (4) to determine the dose operating point for clinical CT examinations based on the minimum diagnostic performance to achieve protocol optimization.
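Step (1), image-based noise addition, is commonly implemented by injecting zero-mean noise whose variance makes up the difference between the full-dose and reduced-dose noise levels: since CT quantum noise variance scales roughly inversely with dose, simulating a dose fraction f requires added noise of sigma_add = sigma_full * sqrt(1/f - 1). A minimal sketch under that assumption (uncorrelated Gaussian noise, no noise-texture modeling):

```python
import math
import random

def simulate_reduced_dose(image, sigma_full, dose_fraction, seed=0):
    # Quantum noise variance scales ~1/dose, so simulating a scan at
    # dose fraction f requires adding sigma_full * sqrt(1/f - 1) of noise.
    sigma_add = sigma_full * math.sqrt(1.0 / dose_fraction - 1.0)
    rng = random.Random(seed)
    return [px + rng.gauss(0.0, sigma_add) for px in image]
```

Practical implementations additionally shape the injected noise with the scanner's noise power spectrum so its texture matches genuine low-dose images; validating that realism is exactly the purpose of steps (2) and (3).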

Chapter 8 concludes the thesis with a summary of accomplished work and a discussion about future research.

Relevance: 30.00%

Abstract:

As an alternative to transverse spiral or hoop steel reinforcement, fiber reinforced polymers (FRPs) were introduced to the construction industry in the 1980s. The concept of the concrete-filled FRP tube (CFFT) has raised great interest among researchers in the last decade. An FRP tube can act as formwork, a protective jacket, and shear and flexural reinforcement for concrete. However, the seismic performance of CFFT bridge substructures has not yet been fully investigated. Experimental work in this study included four two-column bent tests, several component tests, and coupon tests. Four 1/6-scale bridge pier frames, consisting of a control reinforced concrete frame (RCF), a glass FRP-concrete frame (GFF), a carbon FRP-concrete frame (CFF), and a hybrid glass/carbon FRP-concrete frame (HFF), were tested under reverse cyclic lateral loading with constant axial loads. Specimen GFF did not show any sign of cracking at a drift ratio as high as 15% with considerable loading capacity, whereas Specimen CFF showed the lowest ductility with a load capacity similar to that of Specimen GFF. FRP-concrete columns and pier cap beams were then cut from the pier frame specimens and tested again in three-point flexure under monotonic loading with no axial load. The tests indicated that bonding between FRP and concrete and yielding of steel both affect the flexural strength and ductility of the components. The coupon tests were carried out to establish the tensile strength and elastic modulus of each FRP tube and of the FRP mold for the pier cap beam in the two principal directions of loading. A nonlinear analytical model was developed to predict the load-deflection responses of the pier frames. The model was validated against test results. Subsequently, a parametric study was conducted with variables such as frame height-to-span ratio, steel reinforcement ratio, FRP tube thickness, axial force, and compressive strength of concrete.
A typical bridge was also simulated under three different ground acceleration records and damping ratios. Based on the analytical damage index, the RCF bridge was most severely damaged, whereas the GFF bridge suffered only minor repairable damage. Damping ratio was shown to have a pronounced effect on FRP-concrete bridges, just as in conventional bridges. This research was part of a multi-university project funded by the National Science Foundation (NSF) Network for Earthquake Engineering Simulation Research (NEESR) program.
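The abstract does not state which analytical damage index was used; a widely cited choice for reinforced concrete members is the Park-Ang index, which combines a peak-deformation term with a hysteretic-energy term. A sketch of that index as an example (the numbers in the usage are purely illustrative):

```python
def park_ang_damage_index(delta_max, delta_ult, hysteretic_energy,
                          yield_force, beta=0.05):
    # DI = delta_max/delta_ult + beta * E_h / (F_y * delta_ult).
    # Typical reading: DI < 0.2 minor damage, DI >= 1.0 collapse.
    return (delta_max / delta_ult
            + beta * hysteretic_energy / (yield_force * delta_ult))
```

Here `beta` is a calibration parameter for the energy contribution; comparing DI across the RCF, GFF, CFF, and HFF frames under the same records is what supports statements like "most severely damaged" versus "minor repairable damage".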

Relevance: 30.00%

Abstract:

Sprint interval training (SIT) can elicit improvements in aerobic and anaerobic capacity. While variations in SIT protocols have been investigated, the influence of social processes cannot be overlooked. As research supports the use of groups to influence individual cognitions and behaviours, the current project assessed the effectiveness of a group-based intervention with participants conducting SIT. Specifically, 53 amateur athletes (age, 21.9 ± 2.9 years; 53% females) took part in a 4-week training program (3 sessions per week, 30-s "all-out" efforts with 4 min active recovery, repeated 4-6 times per session), and were assigned to "true group", aggregate, or individual conditions. Results indicated no significant differences between groups for the physiological measures. With regard to training improvements from baseline for all participants, regardless of condition, significant main effects for time were identified for maximal oxygen uptake (2.5-2.8 mL·kg−1·min−1, p < 0.001, η2 = 0.03), time-trial performance (14-32 s, p < 0.001, η2 = 0.37), and anaerobic power (1.1-1.7 k·h−1, p < 0.001, η2 = 0.66). With regard to the psychological measures, significant main effects between groups were found for motivation (p = 0.033, η2 = 0.13), task self-efficacy (p = 0.018, η2 = 0.15), and scheduling self-efficacy (p = 0.003, η2 = 0.22). The true group experienced greater improvements in motivation than the individual condition, but the aggregate and individual conditions demonstrated greater increases in task and scheduling self-efficacy. Although the SIT paradigm employed induced training improvements similar to previous work, the group intervention did not further enhance these improvements.
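The η2 values reported above are effect sizes: the proportion of total variance attributable to an effect. For the simple one-way between-groups case, η2 = SS_between / SS_total, which can be computed directly from the raw group scores. This is a generic illustration of the statistic, not a reproduction of the study's mixed-design analysis:

```python
def eta_squared(groups):
    # One-way eta squared: SS_between / SS_total over a list of groups.
    allv = [x for g in groups for x in g]
    grand = sum(allv) / len(allv)
    ss_total = sum((x - grand) ** 2 for x in allv)
    ss_between = sum(len(g) * ((sum(g) / len(g)) - grand) ** 2 for g in groups)
    return ss_between / ss_total
```

By convention, η2 around 0.01 is read as a small effect, 0.06 medium, and 0.14 large, which is why the time-trial (0.37) and anaerobic power (0.66) effects above count as substantial.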

Relevance: 30.00%

Abstract:

The presentation made at the conference addressed the issue of linkages between performance information and innovation within the Canadian federal government. This three-part paper was prepared as background to that presentation.
• Part I provides an overview of three main sources of performance information - results-based systems, program evaluation, and centrally driven review exercises - and reviews the Canadian experience with them.
• Part II identifies and discusses a number of innovation issues that are common to the literature reviewed for this paper.
• Part III examines actual and potential linkages between innovation and performance information.
Part III suggests that innovation in the Canadian federal government tends to cluster into two groups: smaller initiatives driven by staff or middle management, and much larger projects involving major programs, whole departments, or the whole of government. Readily available data on smaller innovation projects is skimpy but suggests that performance information does not play a major role in stimulating these initiatives. In contrast, two of the examples of large-scale innovation show that performance information plays a critical role at all stages. The paper concludes by supporting the contention of others writing on this topic: that more research is needed on innovation, particularly on its link to performance information. In that context, the other conclusions drawn in this paper are tentative, but they suggest that the quality of performance information is as important for innovation as it is for performance management. However, innovation is likely to require its own particular performance information that may not be generated on a routine basis for purposes of performance management, particularly in the early stages of innovation. And while the availability of performance information can be an important success factor in innovation, it does not stand alone.
The commonality of a number of other factors identified in the literature surveyed for this paper strongly suggests that equal if not greater priority needs to be given to attenuating factors that inhibit innovation and to nurturing incentives.

Relevance: 30.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance: 30.00%

Abstract:

Performance-based design (PBD), in a deterministic approach, characterizes performance objectives with respect to desired performance levels. Performance objectives are then associated with the damage state and the established level of seismic risk. Despite this rational approach, its application remains difficult. Reliable tools for capturing the evolution, distribution, and quantification of damage are therefore needed. In addition, all phenomena related to nonlinearity (in materials and deformations) must be taken into account. This research shows how damage mechanics can contribute to solving this problem through an adaptation of the modified compression field theory and other complementary theories. The proposed formulation, adapted for monotonic, cyclic, and pushover loading, accounts for nonlinear shear effects coupled with flexural and axial-load mechanisms. It is applied specifically to the nonlinear analysis of concrete structural elements subjected to non-negligible shear effects. This new approach, implemented in EfiCoS (a finite element program based on damage mechanics), is presented here together with the modeling criteria. The approach was calibrated by comparing its predictions with experimental data for reinforced concrete shear walls as well as for bridge beams and piers where shear effects must be taken into account.
This new, improved version of EfiCoS proved capable of accurately evaluating the parameters associated with global performance, such as displacements, system strength, effects related to the cyclic response, and the quantification, evolution, and distribution of damage. Remarkable results were also obtained for the proper detection of engineering limit states such as cracking, unit strains, cover spalling, core crushing, local yielding of reinforcing bars, and system degradation, among others. As a practical tool for applying PBD, relationships between the predicted damage indices and performance levels were obtained and expressed as charts and tables. The charts were developed as a function of interstorey drift and displacement ductility. A dedicated table was developed to relate engineering limit states, damage, drift, and traditional performance levels. The results showed excellent agreement with the experimental data, making the proposed formulation and the new version of EfiCoS powerful tools for applying the PBD methodology in a deterministic approach.

Relevance: 30.00%

Abstract:

Light shelves are elements placed at windows facing the sun, which receive a large amount of energy that could be used for healthy daylighting. This paper presents an assessment that investigates the optimization of daylight provision by means of a light shelf system. An experimental test was carried out, combining measurements and lighting simulations of a building model, in order to elucidate the characteristics of indoor lighting. A light shelf is an architectural element that allows daylight to penetrate deep into a building; it constitutes an effective solution for unfavourable building orientation and less sunny days. The main objective of this study is to highlight the vital role of light shelves in residential buildings in northern Europe, where the requirement is to improve daylight in the interior functional spaces. Specifically, this paper investigates the effect of light shelves on daylight in the interior functional spaces, the diffusion of natural light in interior space during the low-daylight season, and glare effects. It also presents a procedure for analysing daylight performance using simulation software.

Relevance: 30.00%

Abstract:

The performance of a wood finishing product is influenced by how the surface is prepared. Sanding is widely used to prepare surfaces for finishing; however, this process generates a large quantity of dust. The effects of machining processes on surface properties, varnish performance, and dust emission were therefore studied in order to determine the most suitable surface preparation methods for red oak wood. In the first part, surface properties and varnish performance were evaluated on surfaces prepared with the traditional sanding process and with three alternative planing processes: straight peripheral cutting, helical cutting, and oblique cutting. Surface quality was evaluated in terms of roughness, cell damage, and wettability. Adhesion tests of an interior varnish were performed before and after an accelerated aging treatment. The results showed that sanding induced higher roughness and fibrillation levels than the other processes, as well as high wettability and high varnish adhesion after accelerated aging. Surfaces planed with straight peripheral cutting exhibited some fibrillation and intermediate roughness and wettability; nevertheless, varnish adhesion after aging was lower than with the other processes. Helical cutting produced intermediate roughness. Oblique cutting, for its part, was the process whose adhesion loss after aging was similar to that of sanding; it generated smooth surfaces with intermediate roughness and wettability.
Based on the results obtained, the best machining conditions for each process were: sanding with a P100-grit program at a feed speed of 7 m/min; straight peripheral cutting with a 25° rake angle and a 1.0 mm knife-mark wavelength; helical cutting with a 1.0 mm knife-mark wavelength; and oblique cutting with a 15° oblique angle. In the second part, the effect of different cutting parameters on dust emission and surface roughness was studied for helical cutting. Dust emission decreased as the depth of cut decreased and the mean chip thickness increased. However, surfaces obtained with a higher mean chip thickness showed higher roughness. If a smoother surface is required, an intermediate feed speed should therefore be used to reduce surface roughness without exposing workers to high levels of wood dust. In addition, the dust emission for each particle-size fraction can be estimated with the models developed.

Relevance: 30.00%

Abstract:

Processors with large numbers of cores are becoming commonplace. In order to utilise the available resources in such systems, the programming paradigm has to move towards increased parallelism. However, increased parallelism does not necessarily lead to better performance. Parallel programming models have to provide not only flexible ways of defining parallel tasks, but also efficient methods to manage the created tasks. Moreover, in a general-purpose system, applications residing in the system compete for the shared resources. Thread and task scheduling in such a multiprogrammed multithreaded environment is a significant challenge. In this thesis, we introduce a new task-based parallel reduction model, called the Glasgow Parallel Reduction Machine (GPRM). Our main objective is to provide high performance while maintaining ease of programming. GPRM supports native parallelism; it provides a modular way of expressing parallel tasks and the communication patterns between them. Compiling a GPRM program results in an Intermediate Representation (IR) containing useful information about tasks, their dependencies, as well as the initial mapping information. This compile-time information helps reduce the overhead of runtime task scheduling and is key to high performance. Generally speaking, the granularity and the number of tasks are major factors in achieving high performance. These factors are even more important in the case of GPRM, as it is highly dependent on tasks, rather than threads. We use three basic benchmarks to provide a detailed comparison of GPRM with Intel OpenMP, Cilk Plus, and Threading Building Blocks (TBB) on the Intel Xeon Phi, and with GNU OpenMP on the Tilera TILEPro64. GPRM shows superior performance in almost all cases, only by controlling the number of tasks. GPRM also provides a low-overhead mechanism, called “Global Sharing”, which improves performance in multiprogramming situations. 
We use OpenMP, the most popular model for shared-memory parallel programming, as the main competitor to GPRM for solving three well-known problems on both platforms: LU factorisation of sparse matrices, image convolution, and linked list processing. We focus on proposing solutions that best fit the GPRM model of execution. GPRM outperforms OpenMP in all cases on the TILEPro64. On the Xeon Phi, our solution for LU factorisation results in a notable performance improvement for sparse matrices with large numbers of small blocks. We investigate the overhead of GPRM's task creation and distribution for very short computations using the image convolution benchmark. We show that this overhead can be mitigated by combining smaller tasks into larger ones. As a result, GPRM can outperform OpenMP for convolving large 2D matrices on the Xeon Phi. Finally, we demonstrate that our parallel worksharing construct provides an efficient solution for linked list processing and performs better than OpenMP implementations on the Xeon Phi. The results are very promising, as they verify that our parallel programming framework for manycore processors is flexible and scalable, and can provide high performance without sacrificing productivity.
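The granularity point, that combining smaller tasks into larger ones amortises task creation and scheduling overhead, is independent of GPRM itself. The sketch below illustrates controlling the number of tasks in a reduction; it is a generic Python illustration and does not represent GPRM's task model or its "Global Sharing" mechanism:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, num_tasks):
    # Coarser tasks: split the data into num_tasks chunks, reduce each
    # chunk in its own task, then combine the partial results.
    chunk = max(1, (len(data) + num_tasks - 1) // num_tasks)
    chunks = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=num_tasks) as pool:
        return sum(pool.map(sum, chunks))
```

Tuning `num_tasks` mirrors the trade-off described above: too many tiny tasks and the per-task overhead dominates; too few and load balance suffers.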

Relevance: 30.00%

Abstract:

A key driver of Australian sweetpotato productivity improvements and consumer demand has been industry adoption of disease-free planting material systems. On a farm isolated from the main Australian sweetpotato areas, virus-free germplasm is annually multiplied, with the resulting 'pathogen-tested' (PT) sweetpotato roots shipped to commercial Australian sweetpotato growers. They in turn plant their PT roots into specially designated plant beds, commencing in late winter. From these beds, they cut sprouts as the basis for their commercial fields. Along with other intensive agronomic practices, this system enables Australian producers to achieve the world's highest commercial yields (per hectare) of premium sweetpotatoes. Their industry organisation, ASPG (Australian Sweetpotato Growers Inc.), has identified productivity of mother plant beds as a key driver of crop performance. Growers and scientists are currently collaborating to investigate issues such as catastrophic plant bed losses; optimisation of irrigation and nutrient addition; rapidity and uniformity of initial plant bed harvests; optimal plant bed harvest techniques; virus re-infection of plant beds; and practical longevity of plant beds. A survey of 50 sweetpotato growers in Queensland and New South Wales identified substantial diversity in current plant bed systems, apparently influenced by growing district, scale of operation, time of planting, and machinery/labour availability. Growers identified key areas for plant bed research as: optimising the size and grading specifications of PT roots supplied for the plant beds; change in sprout density, vigour, and performance through sequential cuttings of the plant bed; the optimal height above ground level to cut sprouts to maximise commercial crop and plant bed performance; and the use of structures and soil amendments in plant bed systems.
Our ongoing multi-disciplinary research program integrates detailed agronomic experiments, grower adaptive learning sites, product quality and consumer research, to enhance industry capacity for inspired innovation and commercial, sustainable practice change.

Relevance: 30.00%

Abstract:

A fundamental step in understanding the effects of irradiation on metallic uranium and uranium dioxide ceramic fuels, or any material, must start with the nature of radiation damage at the atomic level. Atomic displacement damage results in a multitude of defects that influence fuel performance. Nuclear reactions are coupled, in that changing one variable will alter others through feedback. In the field of fuel performance modeling, these difficulties are addressed through the use of empirical models rather than models based on first principles. Empirical models can be used as predictive codes through the careful manipulation of input variables, but only in the limited circumstances that are closely tied to the data used to create the model. While empirical models are efficient and give acceptable results, those results are only applicable within the range of the existing data. This narrow window prevents modeling changes in operating conditions, as the new operating conditions would not be within the calibration data set and would invalidate the model. This work is part of a larger effort to correct this modeling deficiency. Uranium dioxide and metallic uranium fuels are analyzed with a kinetic Monte Carlo (kMC) code as part of an overall effort to create a stochastic and predictive fuel code. The kMC investigations include sensitivity analysis of point defect concentrations, thermal gradients implemented through a temperature-variation mesh grid, and migration energy values. In this work, fission damage is primarily represented through defects on the oxygen anion sublattice. Results were also compared between the various models. Past studies of kMC point defect migration have not adequately addressed non-standard migration events such as clustering and dissociation of vacancies.
As such, the General Utility Lattice Program (GULP) code was utilized to generate new migration energies so that additional non-standard migration events can be included in the kMC code in the future for more comprehensive studies. Defect energies were calculated to generate barrier heights for single vacancy migration, for clustering and dissociation of two vacancies, and for vacancy migration under the influence of both an additional oxygen vacancy and an additional uranium vacancy.
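The kMC machinery referred to above follows a standard pattern: each possible defect hop is assigned an Arrhenius rate from its migration barrier (the barriers being exactly what the GULP calculations supply), and the residence-time (BKL) algorithm selects events in proportion to their rates while advancing the clock stochastically. A generic sketch, with an assumed attempt frequency of 10^13 Hz:

```python
import math
import random

KB = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_rate(barrier_ev, temp_k, attempt_freq=1e13):
    # Thermally activated hop rate from transition state theory.
    return attempt_freq * math.exp(-barrier_ev / (KB * temp_k))

def kmc_step(rates, rng):
    # Residence-time (BKL) step: choose an event with probability
    # proportional to its rate, then advance time by an exponentially
    # distributed increment with mean 1 / total_rate.
    total = sum(rates)
    target = rng.random() * total
    acc = 0.0
    for i, rate in enumerate(rates):
        acc += rate
        if target < acc:
            break
    dt = -math.log(rng.random()) / total
    return i, dt
```

Adding the clustering and dissociation events discussed above simply means extending the rate list with barriers for those transitions; the selection loop itself is unchanged.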

Relevance: 30.00%

Abstract:

The purpose of this dissertation is to produce a new Harmonie arrangement of Mozart’s Die Zauberflöte suitable for modern performance, bringing Joseph Heidenreich’s 1782 arrangement—one of the great treasures of the wind repertoire—to life for future performers and audiences. I took advantage of the capabilities of modern wind instruments and performance techniques, and employed other instruments normally found in the modern wind ensemble to create a work in the tradition of Heidenreich’s that restored as much of Mozart’s original thinking as possible. I expanded the Harmonie band to include flute and string bass. Other instruments provide special effects, a traditional role for wind instruments in the Classical opera orchestra. This arrangement is conceived to be performed with the original vocal soloists, making it a viable option for concert performance or for smaller staged productions. It is also intended to allow the wind players to be onstage with the singers, becoming part of the dramatic action while simultaneously serving as the “opera orchestra.” This allows creative staging possibilities, and offers the wind players an opportunity to explore new aspects of performing. My arrangement also restores Mozart’s music to its original keys and retains much of his original wind scoring. This arrangement expands the possibilities for collaboration between opera studios, voice departments or community opera companies and wind ensembles. A suite for winds without voices (currently in production) will allow conductors to program this major work from the Classical era without dedicating a concert program to the complete opera. Excerpted arias and duets from this arrangement provide vocalists the option of using chamber wind accompaniment on recitals. The door is now open to arrangements of other operas by composers such as Mozart, Rossini and Weber, adding new repertoire for chamber winds and bringing great music to life in a new way.

Relevance: 30.00%

Abstract:

Objective: To estimate the prevalence of, and factors associated with, the performance of mammography and pap smear testing in women from the city of Maringá, Paraná. Methods: Population-based cross-sectional study conducted with 345 women aged over 20 years between March 2011 and April 2012. An interview was carried out using a questionnaire proposed by the Ministry of Health, which addressed sociodemographic characteristics, risk factors for chronic noncommunicable diseases, and issues related to mammographic and pap screening. Data were analyzed using bivariate analysis, crude analysis with odds ratios (OR), and the chi-squared test in the Epi Info 3.5.1 program; multivariate analysis using logistic regression was performed in the Statistica 7.1 software, with a significance level of 5% and a 95% confidence interval. Results: The mean age of the women was 52.19 (±5.27) years. The majority (56.5%) had 0 to 8 years of education. Additionally, 84.6% (n=266) of the women had undergone a pap smear and 74.3% (n=169) had undergone mammography. Lower uptake of the pap smear test was associated with women with 9-11 years of education (p=0.01), and lower uptake of mammography was associated with women without private health insurance (p<0.01). Conclusion: The coverage of mammography and pap smear testing was satisfactory among the women from Maringá, Paraná. Uptake of mammography was lower among women with low education levels and among those who depended on the public health system.
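For reference, the crude odds ratio and Pearson chi-squared statistic named in the methods are simple functions of a 2×2 contingency table. The counts in the example below are invented for illustration and are not the study's data:

```python
def odds_ratio(a, b, c, d):
    # 2x2 table [[a, b], [c, d]]: exposed/unexposed rows, outcome columns.
    return (a * d) / (b * c)

def chi_squared_2x2(a, b, c, d):
    # Pearson chi-squared statistic, no continuity correction.
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
```

An OR above 1 indicates higher odds of the outcome in the exposed row; the chi-squared statistic is compared against the critical value with 1 degree of freedom (3.84 at the study's 5% significance level).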