907 results for Conservative
Abstract:
The sonata began to lose its position of predominance among compositions in the middle of the 19th century. Having been the platform for harmonic and thematic development of music since the late Baroque period, the sonata entered a process of reevaluation and experimentation with form. As a result, fewer sonatas were being composed, and some composers dropped the genre completely. This dissertation looks at the different approaches taken by the German, French, and Russian schools of composition and compares the solo and chamber music applications of the sonata form. In the German tradition, Franz Liszt's Sonata in B minor sets the standard for the revolutionary approach to form, while the Berg Sonata is a very conservative application of form to an innovative use of extended chromaticism. Both composers chose to write one-movement, through-composed pieces, with Liszt working with a very expansive use of form and Berg being extremely compact and efficient. Among the Russian composers, Prokofieff's third sonata is also a one-movement sonata, but he falls between Liszt and Berg in terms of the length of the piece and the use of innovative musical language. Scriabin uses a two-movement approach but keeps the element of a through-composed piece, with the same important material spanning both movements. Stravinsky is the most conservative of these, with a three-movement sonata that uses a mix of chromaticism and Baroque and Classical style influences. The French almost stopped composing true sonatas except for chamber music, where Franck and Fauré write late Romantic sonatas, while Debussy is very innovative within a three-movement sonata. Estampes, by Debussy, is taken up almost as an afterthought to illustrate the direction Debussy takes in his solo piano music. While Estampes is by definition a set of character pieces, it functions like a sonata with three movements.
Abstract:
This paper focuses on the nature of jamming, as seen in two-dimensional frictional granular systems consisting of photoelastic particles. The photoelastic technique is unique at this time in its capability to provide detailed particle-scale information on forces and kinematic quantities such as particle displacements and rotations. These experiments first explore isotropic stress states near point J through measurements of the mean contact number per particle, Z, and the pressure, P, as functions of the packing fraction, φ. In this case, the experiments show some but not all aspects of jamming, as expected on the basis of simulations and models that typically assume conservative, hence frictionless, forces between particles. Specifically, there is a rapid growth in Z at a reasonable φ, which we identify as φc. It is possible to fit Z and P to power-law expressions in φ − φc above φc, and to obtain exponents that are in agreement with simulations and models. However, the experiments differ from theory on several points, as typified by the rounding that is observed in Z and P near φc. The application of shear to these same 2D granular systems leads to phenomena that are qualitatively different from the standard picture of jamming. In particular, there is a range of packing fractions below φc where the application of shear strain at constant φ leads to jammed stress-anisotropic states, i.e. they have a non-zero shear stress, τ. The application of shear strain to an initially isotropically compressed (hence jammed) state does not lead to an unjammed state per se. Rather, shear strain at constant φ first leads to an increase of both τ and P. Additional strain leads to a succession of jammed states interspersed with relatively localized failures of the force network leading to other stress-anisotropic states that are jammed at typically somewhat lower stress. The locus of jammed states requires a state space that involves not only φ and τ, but also P. P, τ, and Z are all hysteretic functions of shear strain for fixed φ. However, we find that both P and τ are roughly linear functions of Z for strains large enough to jam the system. This implies that these shear-jammed states satisfy a Coulomb-like relation, τ = μP. © 2010 The Royal Society of Chemistry.
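As a rough illustration of the power-law fitting step described above (not the authors' analysis), the following sketch fits Z − Zc and P to power laws in φ − φc on synthetic data; the critical values and exponents used to generate the data are hypothetical placeholders.

```python
# Sketch only: fit Z - Zc and P to power laws in (phi - phi_c) above phi_c.
# phi_c, Z_c and the generating exponents below are hypothetical placeholders.
import numpy as np
from scipy.optimize import curve_fit

phi_c, Z_c = 0.84, 3.0                      # assumed critical packing fraction and contact number
phi = np.linspace(phi_c + 1e-3, 0.88, 40)   # packing fractions above phi_c

rng = np.random.default_rng(0)              # synthetic "measurements" with small noise
Z = Z_c + 1.2 * (phi - phi_c) ** 0.5 + rng.normal(0, 0.01, phi.size)
P = 8.0 * (phi - phi_c) ** 1.1 + rng.normal(0, 0.002, phi.size)

def power_law(x, amplitude, exponent):
    return amplitude * x ** exponent

(_, z_exp), _ = curve_fit(power_law, phi - phi_c, Z - Z_c, p0=(1.0, 0.5))
(_, p_exp), _ = curve_fit(power_law, phi - phi_c, P, p0=(1.0, 1.0))
print(f"Z exponent ~ {z_exp:.2f}, P exponent ~ {p_exp:.2f}")
```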
Abstract:
Like human immunodeficiency virus type 1 (HIV-1), simian immunodeficiency virus of chimpanzees (SIVcpz) can cause CD4+ T cell loss and premature death. Here, we used molecular surveillance tools and mathematical modeling to estimate the impact of SIVcpz infection on chimpanzee population dynamics. Habituated (Mitumba and Kasekela) and non-habituated (Kalande) chimpanzees were studied in Gombe National Park, Tanzania. Ape population sizes were determined from demographic records (Mitumba and Kasekela) or individual sightings and genotyping (Kalande), while SIVcpz prevalence rates were monitored using non-invasive methods. Between 2002 and 2009, the Mitumba and Kasekela communities experienced mean annual growth rates of 1.9% and 2.4%, respectively, while Kalande chimpanzees suffered a significant decline, with a mean growth rate of -6.5% to -7.4%, depending on population estimates. A rapid decline in Kalande was first noted in the 1990s and originally attributed to poaching and reduced food sources. However, between 2002 and 2009, we found a mean SIVcpz prevalence in Kalande of 46.1%, which was almost four times higher than the prevalence in Mitumba (12.7%) and Kasekela (12.1%). To explore whether SIVcpz contributed to the Kalande decline, we used empirically determined SIVcpz transmission probabilities as well as chimpanzee mortality, mating and migration data to model the effect of viral pathogenicity on chimpanzee population growth. Deterministic calculations indicated that a prevalence of greater than 3.4% would result in negative growth and eventual population extinction, even using conservative mortality estimates. However, stochastic models revealed that in representative populations, SIVcpz, and not its host species, frequently went extinct. High SIVcpz transmission probability and excess mortality reduced population persistence, while intercommunity migration often rescued infected communities, even when immigrating females had a chance of being SIVcpz-infected. Together, these results suggest that the decline of the Kalande community was caused, at least in part, by high levels of SIVcpz infection. However, population extinction is not an inevitable consequence of SIVcpz infection, but depends on additional variables, such as migration, that promote survival. These findings are consistent with the uneven distribution of SIVcpz throughout central Africa and explain how chimpanzees in Gombe and elsewhere can be at equipoise with this pathogen.
Abstract:
Regions of the hamster alpha 1-adrenergic receptor (alpha 1 AR) that are important in GTP-binding protein (G protein)-mediated activation of phospholipase C were determined by studying the biological functions of mutant receptors constructed by recombinant DNA techniques. A chimeric receptor consisting of the beta 2-adrenergic receptor (beta 2AR) into which the putative third cytoplasmic loop of the alpha 1AR had been placed activated phosphatidylinositol metabolism as effectively as the native alpha 1AR, as did a truncated alpha 1AR lacking the last 47 residues in its cytoplasmic tail. Substitutions of beta 2AR amino acid sequence in the intermediate portions of the third cytoplasmic loop of the alpha 1AR or at the N-terminal portion of the cytoplasmic tail caused marked decreases in receptor coupling to phospholipase C. Conservative substitutions of two residues in the C terminus of the third cytoplasmic loop (Ala293→Leu, Lys290→His) increased the potency of agonists for stimulating phosphatidylinositol metabolism by up to 2 orders of magnitude. These data indicate (i) that the regions of the alpha 1AR that determine coupling to phosphatidylinositol metabolism are similar to those previously shown to be involved in coupling of beta 2AR to adenylate cyclase stimulation and (ii) that point mutations of a G-protein-coupled receptor can cause remarkable increases in sensitivity of biological response.
Abstract:
Geospatial modeling is one of the most powerful tools available to conservation biologists for estimating the current species ranges of Earth's biodiversity. Now, with the advantage of predictive climate models, these methods can be deployed to understand future impacts on threatened biota. Here, we employ predictive modeling under a conservative estimate of future climate change to examine impacts on the future abundance and geographic distributions of Malagasy lemurs. Using distribution data from the primary literature, we employed ensemble species distribution models and geospatial analyses to predict future changes in species distributions. Current species distribution models (SDMs) were created within the BIOMOD2 framework, which capitalizes on ten widely used modeling techniques. Future and current SDMs were then subtracted from each other, and areas of contraction, expansion, and stability were calculated. Model overprediction is a common issue associated with Malagasy taxa. Accordingly, we introduce novel methods for incorporating biological data on dispersal potential to better inform the selection of pseudo-absence points. This study predicts that 60% of the 57 species examined will experience considerable range reductions in the next seventy years due entirely to future climate change. Of these species, range sizes are predicted to decrease by an average of 59.6%. Nine lemur species (16%) are predicted to expand their ranges, and the distributions of 13 species (22.8%) are predicted to be stable through time. Species ranges will experience severe shifts, typically contractions, and for the majority of lemur species, geographic distributions will be considerably altered. We identify three areas in dire need of protection, concluding that strategically managed forest corridors must be a key component of lemur and other biodiversity conservation strategies. This recommendation is all the more urgent given that the results presented here do not take into account patterns of ongoing habitat destruction relating to human activities.
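The range-change bookkeeping described above (subtracting current from future SDMs and tallying contraction, expansion, and stability) can be sketched as follows; the binary presence grids and cell area are hypothetical placeholders, not the study's rasters.

```python
# Sketch: classify each grid cell as stable, contraction, or expansion from
# binary current/future presence maps, then report areas and range-size change.
import numpy as np

rng = np.random.default_rng(1)
current = rng.random((100, 100)) < 0.30     # current SDM, thresholded to presence/absence
future = rng.random((100, 100)) < 0.20      # future SDM under a climate scenario
cell_area_km2 = 1.0                         # assumed cell size

stable = current & future                   # predicted present now and in the future
contraction = current & ~future             # lost under the future projection
expansion = ~current & future               # newly gained under the future projection

for name, mask in (("stable", stable), ("contraction", contraction), ("expansion", expansion)):
    print(name, mask.sum() * cell_area_km2, "km^2")

change = 100.0 * (future.sum() - current.sum()) / current.sum()
print(f"range size change: {change:.1f}%")  # per-species summary statistic
```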
Abstract:
The main conclusion of this dissertation is that global H2 production within young ocean crust (<10 Mya) is higher than currently recognized, in part because current estimates of H2 production accompanying the serpentinization of peridotite may be too low (Chapter 2) and in part because a number of abiogenic H2-producing processes have heretofore gone unquantified (Chapter 3). The importance of free H2 to a range of geochemical processes makes the quantitative understanding of H2 production advanced in this dissertation pertinent to an array of open research questions across the geosciences (e.g. the origin and evolution of life and the oxidation of the Earth’s atmosphere and oceans).
The first component of this dissertation (Chapter 2) examines H2 produced within young ocean crust [e.g. near the mid-ocean ridge (MOR)] by serpentinization. In the presence of water, olivine-rich rocks (peridotites) undergo serpentinization (hydration) at temperatures of up to ~500°C, but only produce H2 at temperatures up to ~350°C. A simple analytical model is presented that mechanistically ties the process to seafloor spreading and explicitly accounts for the importance of temperature in H2 formation. The model suggests that H2 production increases with the rate of seafloor spreading and the net thickness of serpentinized peridotite (S-P) in a column of lithosphere. The model is applied globally to the MOR using conservative estimates for the net thickness of lithospheric S-P, our least certain model input. Despite the large uncertainties surrounding the amount of serpentinized peridotite within oceanic crust, conservative model parameters suggest a magnitude of H2 production (~10^12 moles H2/y) that is larger than the most widely cited previous estimates (~10^11 moles H2/y, although previous estimates range from 10^10 to 10^12 moles H2/y). Certain model relationships are also consistent with what has been established through field studies, for example that the highest H2 fluxes (moles H2/km^2 seafloor) are produced near slower-spreading ridges (<20 mm/y). Other modeled relationships are new and represent testable predictions. Principal among these is that about half of the H2 produced globally is produced off-axis beneath faster-spreading seafloor (>20 mm/y), a region where only one measurement of H2 has been made thus far and which is ripe for future investigation.
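The scaling the model describes, H2 production increasing with spreading rate and with the net thickness of serpentinized peridotite, can be illustrated with a back-of-envelope calculation; this is not the dissertation's analytical model, and every parameter value below is an assumed, order-of-magnitude placeholder.

```python
# Back-of-envelope sketch of H2 production from serpentinization, scaled by
# spreading rate and S-P thickness. All values are assumed placeholders.
ridge_length_m = 6.0e7            # global mid-ocean ridge length (~60,000 km)
spreading_rate_m_per_yr = 0.05    # representative full spreading rate (~50 mm/y)
sp_thickness_m = 500.0            # assumed net serpentinized-peridotite thickness in the column
rock_density_kg_m3 = 3.0e3
h2_yield_mol_per_kg = 0.3         # assumed H2 yield per kg of serpentinized peridotite

new_seafloor_m2_per_yr = ridge_length_m * spreading_rate_m_per_yr
sp_mass_kg_per_yr = new_seafloor_m2_per_yr * sp_thickness_m * rock_density_kg_m3
h2_mol_per_yr = sp_mass_kg_per_yr * h2_yield_mol_per_kg
print(f"H2 production ~ {h2_mol_per_yr:.1e} mol/y")   # ~1e12 with these inputs
```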
In the second part of this dissertation (Chapter 3), I construct the first budget for free H2 in young ocean crust that quantifies and compares all currently recognized H2 sources and H2 sinks. First global estimates are proposed for budget components where previous estimates could not be located, provided that the literature on that specific component was not too sparse to do so. Results suggest that the nine known H2 sources, listed in order of quantitative importance, are: crystallization (6x10^12 moles H2/y, or 61% of total H2 production), serpentinization (2x10^12 moles H2/y, or 21%), magmatic degassing (7x10^11 moles H2/y, or 7%), lava-seawater interaction (5x10^11 moles H2/y, or 5%), low-temperature alteration of basalt (5x10^11 moles H2/y, or 5%), high-temperature alteration of basalt (3x10^10 moles H2/y, or <1%), catalysis (3x10^8 moles H2/y, or <<1%), radiolysis (2x10^8 moles H2/y, or <<1%), and pyrite formation (3x10^6 moles H2/y, or <<1%). Next, we consider two well-known H2 sinks, H2 lost to the ocean and H2 occluded within rock minerals; our analysis suggests that both are of similar size (6x10^11 moles H2/y each). Budgeting results suggest a large difference between H2 sources (total production = 1x10^13 moles H2/y) and H2 sinks (total losses = 1x10^11 moles H2/y). Assuming this large difference represents H2 consumed by microbes (total consumption = 9x10^11 moles H2/y), we explore rates of primary production by the chemosynthetic, sub-seafloor biosphere. Although the numbers presented require further examination and future modifications, the analysis suggests that the sub-seafloor H2 budget is similar to the sub-seafloor CH4 budget in the sense that globally significant quantities of both of these reduced gases are produced beneath the seafloor but never escape the seafloor due to microbial consumption.
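The listed source terms can be tabulated directly; the short sketch below reproduces the quoted total of about 1x10^13 moles H2/y and the rounded percentage shares.

```python
# Tabulate the H2 source terms quoted above and their shares of total production.
sources_mol_per_yr = {
    "crystallization": 6e12,
    "serpentinization": 2e12,
    "magmatic degassing": 7e11,
    "lava-seawater interaction": 5e11,
    "low-T alteration of basalt": 5e11,
    "high-T alteration of basalt": 3e10,
    "catalysis": 3e8,
    "radiolysis": 2e8,
    "pyrite formation": 3e6,
}
total = sum(sources_mol_per_yr.values())           # ~1e13 mol H2/y after rounding
print(f"total production ~ {total:.1e} mol H2/y")
for name, rate in sources_mol_per_yr.items():
    print(f"{name:30s} {rate:7.0e}  {100 * rate / total:5.1f}%")
```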
The third and final component of this dissertation (Chapter 4) explores the self-organization of barchan sand dune fields. In nature, barchan dunes typically exist as members of larger dune fields that display striking, enigmatic structures that cannot be readily explained by examining the dynamics at the scale of single dunes, or by appealing to patterns in external forcing. To explore the possibility that observed structures emerge spontaneously as a collective result of many dunes interacting with each other, we built a numerical model that treats barchans as discrete entities that interact with one another according to simplified rules derived from theoretical and numerical work, and from field observations: Dunes exchange sand through the fluxes that leak from the downwind side of each dune and are captured on their upstream sides; when dunes become sufficiently large, small dunes are born on their downwind sides (“calving”); and when dunes collide directly enough, they merge. Results show that these relatively simple interactions provide potential explanations for a range of field-scale phenomena, including isolated patches of dunes and heterogeneous arrangements of similarly sized dunes in denser fields. The results also suggest that (1) dune field characteristics depend on the sand flux fed into the upwind boundary, although (2) moving downwind, the system approaches a common attracting state in which the memory of the upwind conditions vanishes. This work supports the hypothesis that calving exerts a first-order control on field-scale phenomena; it prevents individual dunes from growing without bound, as single-dune analyses suggest, and allows the formation of roughly realistic, persistent dune field patterns.
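The interaction rules listed above lend themselves to a compact discrete-dune simulation. The sketch below is a heavily simplified one-dimensional illustration of those rules (flux exchange, calving, merging), not the dissertation's model; all rates and thresholds are invented placeholders.

```python
# Simplified 1-D sketch of discrete-dune interaction rules: sand-flux exchange,
# calving of small dunes off large ones, and merging after collisions.
# All coefficients and thresholds are invented placeholders.
import random

class Dune:
    def __init__(self, x, volume):
        self.x = x            # downwind position (m)
        self.volume = volume  # sand volume (m^3)

def step(dunes, influx=1.0, capture=0.8, calve_volume=500.0):
    dunes.sort(key=lambda d: d.x)       # sweep from upwind to downwind
    q = influx                          # free sand flux entering the upwind boundary
    born = []
    for d in dunes:
        leak = 0.05 * d.volume ** (1 / 3)           # flux leaking from the downwind side
        d.volume = max(d.volume + capture * q - leak, 1.0)
        q = (1.0 - capture) * q + leak              # uncaptured flux continues downwind
        d.x += 10.0 / d.volume ** (1 / 3)           # small dunes migrate faster
        if d.volume > calve_volume:                 # "calving": shed a small dune downwind
            d.volume -= 50.0
            born.append(Dune(d.x + 2.0, 50.0))
    dunes += born
    dunes.sort(key=lambda d: d.x)
    merged = []                                     # merge dunes that have collided
    for d in dunes:
        if merged and d.x - merged[-1].x < 1.0:
            merged[-1].volume += d.volume
        else:
            merged.append(d)
    return merged

random.seed(0)
field = [Dune(x=50.0 * i, volume=random.uniform(50.0, 400.0)) for i in range(20)]
for _ in range(1000):
    field = step(field)
print(len(field), round(sum(d.volume for d in field)))
```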
Abstract:
During the 19th century, Frédéric Chopin (1810-1849), Franz Liszt (1811-1886), and Johannes Brahms (1833-1897) were among the most recognized composers of character pieces. Their compositions have been considered a significant milestone in piano literature. Chopin did not give descriptive titles to his character pieces; he grouped them into genres such as Mazurkas and Polonaises. His Mazurkas and Polonaises are influenced by Polish dance music and inspired by the Polish national idiom. Liszt was influenced in many ways by Chopin, and adopted Chopin’s lyricism, melodic style, and tempo rubato. However, Liszt frequently drew on non-musical subjects (e.g., art, literature) for inspiration. “Harmonies poétiques et religieuses” and “Années de pèlerinage” are especially representative of character pieces in which poetic and pictorial imagination are reflected. Brahms was a conservative traditionalist, synthesizing Romantic expression and Classical tradition remarkably well. Like Chopin, Brahms avoided using programmatic titles for his works. The titles of Brahms’s short character pieces are often taken from traditional lyrical or dramatic genres such as ballade, rhapsody, and scherzo. Because of his conservatism, Brahms was considered the main rival of Liszt in the Romantic period. Brahms’s character pieces in his third period (e.g., the Scherzo Op. 4, the Ballades of Op. 10, and the Rhapsodies of Op. 79) are concise and focused. The form of Brahms’s character pieces is mostly simple ternary (ABA), and his style is introspective and lyrical. This recording project gave me a better understanding of the styles of Chopin, Brahms, and Liszt through their character pieces. This recording dissertation consists of two CDs recorded in the Dekelboum Concert Hall at the University of Maryland, College Park; the recordings are housed within the University of Maryland Library System.
Abstract:
Adolescence is often viewed as a time of irrational, risky decision-making, despite adolescents' competence in other cognitive domains. In this study, we examined the strategies used by adolescents (N=30) and young adults (N=47) to resolve complex, multi-outcome economic gambles. Compared to adults, adolescents were more likely to make conservative, loss-minimizing choices consistent with economic models. Eye-tracking data showed that, prior to decisions, adolescents acquired more information in a more thorough manner; that is, they engaged in a more analytic processing strategy indicative of trade-offs between decision variables. In contrast, young adults' decisions were more consistent with heuristics that simplified the decision problem at the expense of analytic precision. Collectively, these results demonstrate a counter-intuitive developmental transition in economic decision-making: adolescents' decisions are more consistent with rational-choice models, while young adults more readily engage task-appropriate heuristics.
Abstract:
Companies face new product design challenges, related to customer needs, product usage, and environments, when they expand their product offerings to new markets. Some of the main challenges are the lack of quantifiable information, product experience, and field data. Designing reliable products under such challenges requires flexible reliability assessment processes that can capture the variables and parameters affecting overall product reliability and allow different design scenarios to be assessed. These challenges also suggest that a mechanistic (Physics of Failure, PoF) reliability approach would be a suitable framework for reliability assessment. Mechanistic reliability recognizes the primary factors affecting design reliability. This research views the designed entity as a “system of components required to deliver specific operations” and addresses the above challenges by, first, developing a design synthesis that allows descriptive relationships between operations and system components to be realized; second, developing mathematical damage models for components that evaluate component Time to Failure (TTF) distributions given 1) the descriptive design model, 2) customer usage knowledge, and 3) design material properties; and last, developing a procedure that integrates the components' damage models to assess the mechanical system's reliability over time. Analytical and numerical simulation models were developed to capture the relationships between operations and components, the mathematical damage models, and the assessment of system reliability. The process was able to affect the design form during the conceptual design phase by providing stress goals to meet component reliability targets. The process was also able to numerically assess the reliability of a system based on components' mechanistic TTF distributions, and to affect the design of the component during the design embodiment phase. The process was used to assess the reliability of an internal combustion engine manifold during the design phase; results were compared to reliability field data and found to produce conservative reliability results. The research focused on mechanical systems affected by independent mechanical failure mechanisms that are influenced by the design process. The influences of assembly and manufacturing stresses and defects are not a focus of this research.
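The final integration step, turning component TTF distributions into a system reliability curve, might look like the following sketch, assuming independent failure mechanisms acting in series and Weibull TTF distributions; the component names and parameters are hypothetical, not values from the research.

```python
# Combine component Weibull TTF distributions into a system reliability curve,
# assuming independent failure mechanisms acting in series (the system fails when
# any component fails). All names and parameters are hypothetical placeholders.
import math

components = {                       # (beta, eta): Weibull shape and scale (hours)
    "manifold_fatigue": (2.5, 9.0e3),
    "gasket_creep": (1.8, 2.0e4),
    "bolt_fatigue": (3.0, 1.5e4),
}

def weibull_reliability(t, beta, eta):
    """R(t) = exp(-(t/eta)^beta): probability the component survives past time t."""
    return math.exp(-((t / eta) ** beta))

def system_reliability(t):
    """Series assumption: the system survives only if every component survives."""
    r = 1.0
    for beta, eta in components.values():
        r *= weibull_reliability(t, beta, eta)
    return r

for t in (1.0e3, 5.0e3, 1.0e4):
    print(f"R_system({t:.0f} h) = {system_reliability(t):.3f}")
```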
Abstract:
A singular perturbation method is applied to a non-conservative system of two weakly coupled, strongly nonlinear, non-identical oscillators. For certain parameters, localized solutions exist for which the amplitude of one oscillator is an order of magnitude smaller than that of the other. It is shown that these solutions are described by coupled equations for the phase difference and the scaled amplitudes. Three types of localized solutions are obtained as solutions to these equations, corresponding to phase locking, phase drift, and phase entrainment. Quantitative results for the phases and amplitudes of the oscillators and the stability of these phenomena are expressed in terms of the parameters of the model.
Abstract:
The requirement for a very accurate dependence analysis to underpin software tools that aid the generation of efficient parallel implementations of scalar code is argued. The current status of dependence analysis is shown to be inadequate for the generation of efficient parallel code, causing too many conservative assumptions to be made. This paper summarises the limitations of conventional dependence analysis techniques and then describes a series of extensions that enable the production of a much more accurate dependence graph. The extensions include analysis of symbolic variables; the development of a symbolic inequality disproof algorithm and its exploitation in a symbolic Banerjee inequality test; the use of inference engine proofs; the exploitation of exact dependence and dependence pre-domination attributes; interprocedural array analysis; conditional variable definition tracing; and integer array tracing and division calculations. Analysis case studies on typical numerical codes are shown to reduce the total dependencies estimated by conventional analysis by up to 50%. The techniques described in this paper have been embedded within a suite of tools, CAPTools, which combines analysis with user knowledge to produce efficient parallel implementations of numerical mesh-based codes.
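For readers unfamiliar with the underlying machinery, the sketch below shows two classical single-subscript dependence tests (the GCD test and a Banerjee-style bounds test) of the kind that the symbolic extensions described here refine; it is an illustration, not CAPTools code.

```python
# Single-subscript dependence tests for accesses A[a*i + b] and A[c*i + d],
# with loop index i in [low, high]. Illustrative only.
from math import gcd

def gcd_test(a, b, c, d):
    """Dependence requires a*i1 - c*i2 = d - b to have an integer solution."""
    return (d - b) % gcd(a, c) == 0

def banerjee_test(a, b, c, d, low, high):
    """Dependence requires d - b to lie between the extreme values of a*i1 - c*i2."""
    lo = min(a * low, a * high) - max(c * low, c * high)
    hi = max(a * low, a * high) - min(c * low, c * high)
    return lo <= d - b <= hi

# Example: A[2*i] written, A[2*i + 1] read, i in [0, 100]
a, b, c, d, low, high = 2, 0, 2, 1, 0, 100
print(gcd_test(a, b, c, d))                   # False: even vs. odd indices never overlap
print(banerjee_test(a, b, c, d, low, high))   # True: the bounds alone cannot disprove it
```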
Abstract:
Data from three forest sites in Sumatra (Batang Ule, Pasirmayang and Tebopandak) have been analysed and compared for the effects of sample-area cut-off and tree-diameter cut-off. An 'extended inverted exponential model' is shown to be well suited to fitting tree-species-area curves. The model yields species carrying capacities of 680 species for Batang Ule, 380 for Pasirmayang, and 35 for Tebopandak (tree diameter >10 cm). It would seem that, in terms of species carrying capacity, Tebopandak and Pasirmayang are rather similar, and both less diverse than the hilly Batang Ule site. In terms of conservation policy, this would mean that rather more emphasis should be put on conserving hilly sites on a granite substratum. For Pasirmayang with tree diameter >3 cm, the asymptotic species number estimate is 567, considerably higher than the estimate of 387 species for trees with diameter >10 cm. It is clear that the diameter cut-off has a major impact on the estimate of the species carrying capacity. A conservative estimate of the total number of tree species in the Pasirmayang region is 632 species. In sampling exercises, the diameter cut-off should not be chosen lightly, and it may be worth adopting field sampling procedures that involve some subsampling of the primary sample area, where the diameter cut-off is set much lower than in the primary plots.
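Asymptote estimation from a species-area curve can be sketched as follows; the simple inverted-exponential form used here is a stand-in for the paper's 'extended inverted exponential model', and the data points are invented for illustration.

```python
# Sketch: fit an asymptotic species-area curve and read off the asymptote as a
# species "carrying capacity". The model form and data are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def species_area(A, S_max, k):
    # simple inverted-exponential form: S(A) = S_max * (1 - exp(-k*A))
    return S_max * (1.0 - np.exp(-k * A))

area_ha = np.array([1, 2, 5, 10, 20, 40, 80], dtype=float)           # sampled areas (ha)
species = np.array([95, 150, 230, 300, 345, 370, 378], dtype=float)  # species counts

(S_max, k), _ = curve_fit(species_area, area_ha, species, p0=(400.0, 0.1))
print(f"estimated carrying capacity ~ {S_max:.0f} species (k = {k:.3f})")
```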
Abstract:
The key problems in discussing stochastic monotonicity and duality for continuous-time Markov chains are to give the criteria for existence and uniqueness and to construct the associated monotone processes in terms of their infinitesimal q-matrices. In their recent paper, Chen and Zhang [6] discussed these problems under the condition that the given q-matrix Q is conservative. The aim of this paper is to generalize their results to a more general case, i.e., the given q-matrix Q is not necessarily conservative. New problems arise in removing the conservative assumption. The existence and uniqueness criteria for this general case are given in this paper. Another important problem, the construction of all stochastically monotone Q-processes, is also considered.
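For reference, the standard q-matrix conventions behind the conservative/non-conservative distinction are summarised below (a notational sketch, not text from the paper).

```latex
% A q-matrix Q = (q_{ij}) on a countable state space E satisfies
\[
  0 \le q_{ij} < \infty \quad (i \ne j), \qquad
  \sum_{j \ne i} q_{ij} \;\le\; -q_{ii} \;=:\; q_i \;\le\; \infty .
\]
% Q is called conservative when equality holds in every row:
\[
  \sum_{j \ne i} q_{ij} = q_i \quad \text{for all } i \in E ;
\]
% otherwise the row deficit d_i = q_i - \sum_{j \ne i} q_{ij} > 0 may be read as
% an instantaneous killing (escape) rate from state i.
```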
Abstract:
The key problems in discussing duality and monotonicity for continuous-time Markov chains are to find conditions for existence and uniqueness and then to construct corresponding processes in terms of infinitesimal characteristics, i.e., q-matrices. Such problems are solved in this paper under the assumption that the given q-matrix is conservative. Some general properties of stochastically monotone Q-processes (Q is not necessarily conservative) are also discussed.
Computational modeling techniques for reliability of electronic components on printed circuit boards
Abstract:
This paper describes modeling technology and its use in providing data governing the assembly and subsequent reliability of electronic chip components on printed circuit boards (PCBs). Products such as mobile phones, camcorders, and intelligent displays are changing at a tremendous rate, with newer technologies being applied to satisfy the demands for smaller products with increased functionality. At ever-decreasing dimensions and increasing numbers of input/output connections, the design of these components, in terms of dimensions and materials used, is playing a key role in determining the reliability of the final assembly. Multiphysics modeling techniques are being adopted to predict a range of interacting physics-based phenomena associated with the manufacturing process, for example heat transfer, solidification, Marangoni fluid flow, void movement, and thermal stress. The modeling techniques used are based on finite volume methods that are conservative and take advantage of being able to represent the physical domain using an unstructured mesh. These techniques are also used to provide data on thermally induced fatigue, which is then mapped into product lifetime predictions.
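The conservative finite-volume idea mentioned above can be illustrated with a one-dimensional transient heat-conduction sketch, in which each cell is updated from the fluxes across its faces so that whatever leaves one cell enters its neighbour; the geometry, material properties, and boundary temperatures are hypothetical placeholders, not values from the paper.

```python
# 1-D explicit finite-volume heat conduction: cell updates are built from face
# fluxes, so energy lost by one cell is gained by its neighbour (conservative).
# All geometry, material and boundary values are hypothetical placeholders.
import numpy as np

n, L = 50, 0.01                      # number of cells, domain length (m)
dx = L / n
k, rho, cp = 50.0, 7000.0, 500.0     # conductivity, density, specific heat
alpha = k / (rho * cp)
dt = 0.4 * dx * dx / alpha           # stable explicit time step

T = np.full(n, 25.0)                 # initial temperature field (deg C)
T_left, T_right = 220.0, 25.0        # fixed boundary temperatures (e.g. reflow heating)

for _ in range(2000):
    # temperatures on either side of the n+1 faces (boundary faces use a
    # full-cell spacing here for simplicity)
    T_west = np.concatenate(([T_left], T))
    T_east = np.concatenate((T, [T_right]))
    flux = -k * (T_east - T_west) / dx               # heat flux across each face (W/m^2)
    T = T + dt / (rho * cp * dx) * (flux[:-1] - flux[1:])

print(np.round(T[:5], 1))            # temperatures of the first few cells
```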