Abstract:
Recent observations of particle size distributions and particle concentrations near a busy road cannot be explained by the conventional mechanisms for particle evolution of combustion aerosols. Specifically, these mechanisms appear inadequate to explain the experimental observations of particle transformation and the evolution of the total number concentration. This led to the development of a new mechanism for the evolution of combustion aerosol nano-particles, based on their thermal fragmentation. A complex and comprehensive pattern of evolution of combustion aerosols, involving particle fragmentation, was then proposed and justified. In that model it was suggested that thermal fragmentation occurs in aggregates of primary particles, each of which contains a solid graphite/carbon core surrounded by volatile molecules bonded to the core by strong covalent bonds. Because of these strong covalent bonds between the core and the volatile (frill) molecules, such primary composite particles can be regarded as solid, despite the presence of a significant (possibly dominant) volatile component. Fragmentation occurs when the weak van der Waals forces between such primary particles are overcome by their thermal (Brownian) motion. In this work, this concept of thermal fragmentation is advanced to determine whether fragmentation is likely in liquid composite nano-particles. It has been demonstrated that, at least at some stages of evolution, combustion aerosols contain a large number of composite liquid particles, presumably containing several components such as water, oil, volatile compounds, and minerals. It is possible that such composite liquid particles may also experience thermal fragmentation and thus contribute to, for example, the evolution of the total number concentration as a function of distance from the source.
Therefore, the aim of this project is to examine theoretically the possibility of thermal fragmentation of composite liquid nano-particles consisting of immiscible liquid components. The specific focus is on ternary systems comprising two immiscible liquid droplets surrounded by another medium (e.g., air). The analysis shows that three different structures are possible: complete encapsulation of one liquid by the other, partial encapsulation of the two liquids in a composite particle, and the two droplets separated from each other. The probability of thermal fragmentation of two coagulated liquid droplets is examined for different volumes of the immiscible fluids in a composite liquid particle and for different surface and interfacial tensions, through the determination of the Gibbs free energy difference between the coagulated and fragmented states and comparison of this energy difference with the typical thermal energy kT. The analysis reveals that fragmentation is much more likely for a partially encapsulated particle than for a completely encapsulated particle. In particular, thermal fragmentation was found to be much more likely when the volumes of the two liquid droplets that constitute the composite particle are very different. Conversely, when the two liquid droplets are of similar volumes, the probability of thermal fragmentation is small. It is also demonstrated that the Gibbs free energy difference between the coagulated and fragmented states is not the only important factor determining the probability of thermal fragmentation of composite liquid particles. The second essential factor is the actual structure of the composite particle. It is shown that the probability of thermal fragmentation also depends strongly on the distance that each of the liquid droplets must travel to reach the fragmented state.
In particular, if this distance is larger than the mean free path for the considered droplets in air, the probability of thermal fragmentation should be negligible. It follows from this that fragmentation of a composite particle in the state of complete encapsulation is highly unlikely, because of the larger distance that the two droplets must travel in order to separate. The analysis of composite liquid particles with the interfacial parameters expected in combustion aerosols demonstrates that thermal fragmentation of these particles may occur, and this mechanism may play a role in the evolution of combustion aerosols. Conditions for thermal fragmentation to play a significant role (for aerosol particles other than those from motor vehicle exhaust) are determined and examined theoretically. Conditions for spontaneous transformation between the states of composite particles with complete and partial encapsulation are also examined, demonstrating the possibility of such transformation in combustion aerosols. Indeed, it was shown that for some typical components found in aerosols, this transformation could take place on time scales of less than 20 s. The analysis showed that factors influencing surface and interfacial tension play an important role in this transformation process. It is suggested that such transformation may, for example, result in delayed evaporation of composite particles with a significant water component, leading to observable effects in the evolution of combustion aerosols (including possible local humidity maxima near a source, such as a busy road). The obtained results will be important for further development and understanding of aerosol physics and technologies, including combustion aerosols and their evolution near a source.
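The comparison of the Gibbs free energy difference with the thermal energy kT described above can be sketched numerically. The following Python sketch is illustrative only: the droplet radii and the surface/interfacial tensions are assumed, roughly water-like and oil-like values, not parameters taken from this work, and only the completely encapsulated configuration is considered.

```python
import math

# Illustrative sketch (assumed parameters): compare the Gibbs free-energy
# difference between the coagulated (completely encapsulated) and fragmented
# states of a two-liquid composite nano-droplet with the thermal energy kT.

K_B = 1.380649e-23  # Boltzmann constant, J/K

def sphere_radius(volume):
    """Radius of a sphere with the given volume."""
    return (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)

def sphere_area(radius):
    return 4.0 * math.pi * radius ** 2

def delta_g_complete_encapsulation(v1, v2, gamma1, gamma2, gamma12):
    """G(coagulated, liquid 1 fully inside liquid 2) - G(fragmented).

    Fragmented state: two separate spheres with surface energies gamma_i * A_i.
    Coagulated state: an inner sphere of liquid 1 (interfacial tension gamma12)
    inside an outer sphere of total volume (surface tension gamma2).
    """
    r1, r2 = sphere_radius(v1), sphere_radius(v2)
    r_outer = sphere_radius(v1 + v2)
    g_frag = gamma1 * sphere_area(r1) + gamma2 * sphere_area(r2)
    g_coag = gamma12 * sphere_area(r1) + gamma2 * sphere_area(r_outer)
    return g_coag - g_frag

# Assumed values: a 10 nm water-like droplet (gamma ~ 0.072 N/m) inside a
# 30 nm oil-like droplet (gamma ~ 0.030 N/m), interfacial tension ~ 0.050 N/m.
v_water = (4.0 / 3.0) * math.pi * (10e-9) ** 3
v_oil = (4.0 / 3.0) * math.pi * (30e-9) ** 3
dG = delta_g_complete_encapsulation(v_water, v_oil, 0.072, 0.030, 0.050)
kT = K_B * 300.0  # thermal energy at room temperature

print(f"Delta G = {dG:.3e} J, kT = {kT:.3e} J, |Delta G|/kT = {abs(dG) / kT:.0f}")
```

For these assumed tensions the coagulated state lies far below the fragmented state on the kT scale, consistent with the conclusion that fragmentation of completely encapsulated particles is unlikely; the same comparison can be repeated for partially encapsulated geometries.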
Abstract:
Background: In order to design appropriate environments for performance and learning of movement skills, physical educators need a sound theoretical model of the learner and of the processes of learning. In physical education, this type of modelling informs the organization of learning environments and the effective and efficient use of practice time. An emerging theoretical framework in motor learning, relevant to physical education, advocates a constraints-led perspective for the acquisition of movement skills and game play knowledge. This framework shows how physical educators could use task, performer and environmental constraints to channel the acquisition of movement skills and decision-making behaviours in learners. From this viewpoint, learners generate specific movement solutions to satisfy the unique combination of constraints imposed on them, a process which can be harnessed during physical education lessons. Purpose: In this paper, the aim is to provide an overview of the motor learning approach emanating from the constraints-led perspective, and to examine how it can provide a platform for a new pedagogical framework in physical education: nonlinear pedagogy. We aim to demonstrate that it is only through theoretically valid and objective empirical work of an applied nature that a conceptually sound nonlinear pedagogy model can continue to evolve and support research in physical education. We present some important implications for designing practices in games lessons, showing how a constraints-led perspective on motor learning could assist physical educators in understanding how to structure learning experiences for learners at different stages, with a specific focus on the design of games teaching programmes in physical education, using exemplars from Rugby Union and Cricket.
Findings: Research evidence from recent studies examining movement models demonstrates that physical education teachers need a strong understanding of sport performance, so that task constraints can be manipulated in ways that maintain information-movement couplings in a learning environment representative of real performance situations. Physical educators should also understand that movement variability may not necessarily be detrimental to learning and could be an important phenomenon prior to the acquisition of a stable and functional movement pattern. We highlight how the nonlinear pedagogical approach is student-centred and empowers individuals to become active learners via a more hands-off approach to learning. Summary: A constraints-based perspective has the potential to provide physical educators with a framework for understanding how performer, task and environmental constraints shape each individual's physical education. Understanding the underlying neurobiological processes present in a constraints-led perspective on skill acquisition and game play can raise physical educators' awareness that teaching is a dynamic 'art' interwoven with the 'science' of motor learning theories.
Abstract:
Research examining post-trauma pathology indicates that negative outcomes can differ as a function of the type of trauma experienced. Such research has yet to be published for positive post-trauma changes. Ninety-four survivors of trauma, forming three groups, completed the Posttraumatic Growth Inventory (PTGI) and the Impact of Event Scale-Revised (IES-R). The groups comprised survivors of (i) sexual abuse, (ii) motor vehicle accidents, and (iii) bereavement. Results indicated differences in growth between the groups, with the bereaved reporting higher levels of growth than the other survivors, while sexual abuse survivors demonstrated higher levels of PTSD symptoms than the other groups. However, this did not preclude sexual abuse survivors from also reporting moderate levels of growth. Results are discussed in relation to fostering growth through clinical practice.
Abstract:
Several brain imaging studies have assumed that response conflict is present in Stroop tasks. However, this has not been demonstrated directly. We examined the time course of stimulus and response conflict resolution in a numerical Stroop task by combining single-trial electromyography (EMG) and event-related brain potentials (ERPs). EMG enabled the direct tracking of response conflict, and the peak latency of the P300 ERP wave was used to index stimulus conflict. In correctly answered trials of the incongruent condition, EMG detected robust incorrect response-hand activation, which appeared consistently in single trials. In 50–80% of the trials correct and incorrect response-hand activation coincided temporally, while in 20–50% of the trials incorrect hand activation preceded correct hand activation. The EMG data provide robust direct evidence for response conflict. However, congruency effects also appeared in the peak latency of the P300 wave, which suggests that stimulus conflict also played a role in the Stroop paradigm. The findings are explained by the continuous flow model of information processing: partially processed task-irrelevant stimulus information can result in stimulus conflict and can prepare incorrect response activity. A robust congruency effect appeared in the amplitude of incongruent vs. congruent ERPs between 330 and 400 ms; this effect may be related to the activity of the anterior cingulate cortex.
Abstract:
Solid-phase organic chemistry has rapidly expanded in the last decade and, as a consequence, so has the need for the development of supports that can withstand the extreme conditions required to facilitate some reactions. The authors here prepared a thermally stable, grafted fluoropolymer support (see Figure for an example) in three solvents, and found that the penetration of the graft was greatest in dichloromethane.
Abstract:
International market access for fresh commodities is regulated by internationally accepted phytosanitary guidelines, the objectives of which are to reduce the biosecurity risk of plant pest and disease movement. Papua New Guinea (PNG) has identified banana as a potential export crop and, to help meet international market access requirements, this thesis provides information for the development of a pest risk analysis (PRA) for PNG banana fruit. The PRA is a three-step process which first identifies the pests associated with a particular commodity or pathway, then assesses the risk associated with those pests, and finally identifies risk management options for those pests if required. As the first step of the PRA process, I collated a definitive list of the organisms associated with the banana plant in PNG, using formal literature, structured interviews with local experts, grey literature and unpublished file material held in PNG field research stations. I identified 112 organisms (invertebrates, vertebrates, pathogens and weeds) associated with banana in PNG, but only 14 of these were reported as commonly requiring management. For these 14, I present detailed information summaries on their known biology and pest impact. A major finding of the review was that, of the 14 identified key pests, research information exists for 13. The single exception, for which information was found to be lacking, was Bactrocera musae (Tryon), the banana fly. The lack of information for this widely reported 'major pest of PNG bananas' would hinder the development of a PNG banana fruit PRA. For this reason, the remainder of the thesis focused on this organism, particularly with respect to generating the information required by the PRA process. Utilising an existing, but previously unanalysed, fruit fly trapping database for PNG, I carried out a Geographic Information System analysis of the distribution and abundance of banana fly in four major regions of PNG.
This information is required for a PRA to determine whether banana fruit grown in different parts of the country are at different risk from the fly. Results showed that the fly was widespread in all cropping regions and that temperature and rainfall were not significantly correlated with banana fly abundance. Abundance of the fly was significantly correlated (albeit weakly) with host availability. The same analysis was done with four other PNG pest fruit flies, and their responses to the environmental factors differed from those of banana fly and from each other. This implies that subsequent PRA analyses for other PNG fresh commodities will need to investigate the risk of each of these flies independently. To quantify the damage to banana fruit caused by banana fly in PNG, local surveys and one national survey of banana fruit infestation were carried out. Contrary to expectations, infestation was found to be very low, particularly in the widely grown commercial cultivar Cavendish. Infestation of Cavendish fingers was only 0.41% in a structured, national survey of over 2 700 banana fingers. Follow-up laboratory studies showed that fingers of Cavendish, and of another commercial variety, Lady-finger, are very poor hosts for B. musae, with very low host selection rates by female flies and very poor immature survival. An analysis of a recent (within the last decade) incursion of B. musae into the Gazelle Peninsula of East New Britain Province, PNG, provided the final set of B. musae data. Surveys of the fly on the peninsula showed that establishment and spread of the fly in the novel environment were very rapid, and thus the fly should be regarded as being of high biosecurity concern, at least in tropical areas. Supporting the earlier impact studies, however, banana fly has not become a significant banana fruit problem on the Gazelle, despite bananas being the primary starch staple of the region. The results of the research chapters are combined in the final Discussion in the form of a B. musae-focused PRA for PNG banana fruit. Putting the thesis in a broader context, the Discussion also deals with the apparent discrepancy between the high local abundance of banana fly and the very low infestation rates. This discussion focuses on host utilisation patterns of specialist herbivores and suggests that local pest abundance, as determined by trapping or monitoring, need not be a good surrogate for crop damage, despite this linkage being implicit in a number of international phytosanitary protocols.
Abstract:
A point interpolation method with a locally smoothed strain field (PIM-LS2) is developed for mechanics problems using a triangular background mesh. In PIM-LS2, the strain within each sub-cell of a nodal domain is assumed to be the average strain over the adjacent sub-cells of the neighboring elements sharing the same field node. We prove theoretically that the energy norm of the smoothed strain field in PIM-LS2 is equivalent to that of the compatible strain field, and then prove that the solution of PIM-LS2 converges to the exact solution of the original strong form. Furthermore, the softening effects of PIM-LS2 on the system, and the effects of the number of sub-cells participating in the smoothing operation on the convergence of PIM-LS2, are investigated. Intensive numerical studies verify the convergence, softening effects and bound properties of PIM-LS2, and show that very 'tight' lower and upper bound solutions can be obtained using PIM-LS2.
Abstract:
The high moisture content of mill mud (typically 75–80% for Australian factories) results in high transportation costs for the redistribution of mud onto cane farms. The high transportation cost relative to the nutrient value of the mill mud results in many milling companies subsidising the cost of this recycling to ensure a wide distribution across the cane supply area. An average mill would generate about 100 000 t of mud (at 75% moisture) in a crushing season. The development of mud processing facilities that can produce a low-moisture mud that can be effectively incorporated into cane land with existing or modified spreading equipment will improve the cost efficiency of mud redistribution to farms, provide an economical fertiliser alternative to more farms in the supply area, and reduce the potential for adverse environmental impacts from farms. A research investigation assessing solid bowl decanter centrifuges for producing low-moisture mud with low residual pol was undertaken, and the results were compared with the performance of existing rotary vacuum filters in factory trials. The decanters were operated on filter mud feed in parallel with the rotary vacuum filters to allow comparisons of performance. Samples of feed, mud product and filtrate were analysed to provide performance indicators. The decanter centrifuge could produce mud cakes with very low moisture and residual pol levels. Spreading trials in cane fields indicated that the dry cake could be spread easily by standard mud trucks and by trucks designed specifically to spread fertiliser.
Abstract:
Statistical modeling of traffic crashes has been of interest to researchers for decades. Over the most recent decade, many crash models have accounted for extra-variation in crash counts—variation over and above that accounted for by the Poisson density. This extra-variation, or dispersion, is theorized to capture unaccounted-for variation in crashes across sites. The majority of studies have assumed fixed dispersion parameters in over-dispersed crash models—tantamount to assuming that unaccounted-for variation is proportional to the expected crash count. Miaou and Lord [Miaou, S.P., Lord, D., 2003. Modeling traffic crash-flow relationships for intersections: dispersion parameter, functional form, and Bayes versus empirical Bayes methods. Transport. Res. Rec. 1840, 31–40] challenged the fixed dispersion parameter assumption and examined various dispersion parameter relationships when modeling urban signalized intersection accidents in Toronto. They suggested that further work is needed to determine the appropriateness of the findings for rural as well as other intersection types, to corroborate their findings, and to explore alternative dispersion functions. This study builds upon the work of Miaou and Lord, with exploration of additional dispersion functions and the use of an independent data set, and presents an opportunity to corroborate their findings. Data from Georgia are used in this study. A Bayesian modeling approach with non-informative priors is adopted, using sampling-based estimation via Markov chain Monte Carlo (MCMC) and the Gibbs sampler. A total of eight model specifications were developed; four of them employed traffic flows as explanatory factors in the mean structure, while the remainder also included geometric factors in addition to major and minor road traffic flows. The models were compared and contrasted using the significance of coefficients, standard deviance, chi-square goodness-of-fit, and deviance information criterion (DIC) statistics.
The findings indicate that the modeling of the dispersion parameter, which essentially explains the extra-variance structure, depends greatly on how the mean structure is modeled. In the presence of a well-defined mean function, the extra-variance structure generally becomes insignificant, i.e. the variance structure is a simple function of the mean. It appears that extra-variation is a function of covariates when the mean structure (expected crash count) is poorly specified and suffers from omitted variables. In contrast, when sufficient explanatory variables are used to model the mean (expected crash count), extra-Poisson variation is not significantly related to these variables. If these results are generalizable, they suggest that model specification may be improved by testing extra-variation functions for significance. They also suggest that known influences on expected crash counts are likely to be different from the factors that might help to explain unaccounted-for variation in crashes across sites.
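The mechanism described above, where extra-Poisson variation reflects omitted covariates in the mean function, can be illustrated with a small simulation. The covariate, coefficients, sample sizes and seed below are invented for illustration; they are not from the study.

```python
import numpy as np

# Illustrative sketch (assumed parameters): crash counts that are conditionally
# Poisson given a site covariate look over-dispersed when that covariate is
# omitted from the mean structure, but not when the mean is well specified.
rng = np.random.default_rng(42)
n = 20000

# A site covariate (e.g., a flow measure) that truly drives expected crashes.
x = rng.uniform(0.0, 2.0, n)
mu = np.exp(0.5 + 1.0 * x)   # true mean structure
y = rng.poisson(mu)          # counts are conditionally Poisson given x

# Marginally (covariate omitted) the counts appear over-dispersed:
print("marginal mean/variance:", y.mean(), y.var())

# Within a narrow band of x (mean well specified), variance ~ mean:
band = (x > 0.95) & (x < 1.05)
print("conditional mean/variance:", y[band].mean(), y[band].var())
```

The marginal variance substantially exceeds the marginal mean, while conditioning on the covariate restores the Poisson mean-variance equality, mirroring the finding that extra-variation becomes insignificant once the mean function is well defined.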
Abstract:
There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions that come with each, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states—perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of "excess" zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data.
A simulation experiment is then conducted to demonstrate how crash data give rise to the "excess" zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed—and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales, not from an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (for observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
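A minimal version of the simulation experiment described above can be sketched as follows. All parameters (number of sites, trial counts, the heterogeneity distribution, the seed) are assumed for illustration; the point is only that Poisson trials at low exposure yield more zeros than a single fitted Poisson predicts, with no "perfectly safe" state in the data-generating process.

```python
import numpy as np

# Illustrative sketch (assumed parameters): crash counts generated as Poisson
# trials (Bernoulli trials with unequal probabilities) at low exposure produce
# "excess" zeros relative to a Poisson fitted to the overall mean, even though
# every site has a strictly positive crash probability (no dual state).
rng = np.random.default_rng(7)
n_sites, n_trials = 5000, 200   # short observation window -> low exposure

# Site-specific crash probability per trial: heterogeneous, all strictly > 0.
p = np.minimum(rng.gamma(shape=0.5, scale=0.02, size=n_sites), 1.0)
counts = rng.binomial(n_trials, p)   # Poisson-trials counts per site

observed_zero_frac = np.mean(counts == 0)
poisson_zero_frac = np.exp(-counts.mean())  # zeros predicted by a fitted Poisson

print(f"observed zeros: {observed_zero_frac:.3f}, "
      f"Poisson-predicted zeros: {poisson_zero_frac:.3f}")
```

The observed zero fraction substantially exceeds the Poisson prediction purely because of low exposure and heterogeneity across sites, which is the circumstance the study identifies as a source of apparent excess zeros.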
Abstract:
Background, aim, and scope Urban motor vehicle fleets are a major source of particulate matter pollution, especially of ultrafine particles (diameters < 0.1 µm), and exposure to particulate matter has known serious health effects. A considerable body of literature is available on vehicle particle emission factors derived using a wide range of different measurement methods for different particle sizes, conducted in different parts of the world. Therefore, the choice of the most suitable particle emission factors to use in transport modelling and health impact assessments presents a very difficult task. The aim of this study was to derive a comprehensive set of tailpipe particle emission factors for different vehicle and road type combinations, covering the full size range of particles emitted, which are suitable for modelling urban fleet emissions. Materials and methods A large body of data available in the international literature on particle emission factors for motor vehicles, derived from measurement studies, was compiled and subjected to advanced statistical analysis to determine the most suitable emission factors to use in modelling urban fleet emissions. Results This analysis resulted in the development of five statistical models which explained 86%, 93%, 87%, 65% and 47% of the variation in published emission factors for particle number, particle volume, PM1, PM2.5 and PM10, respectively. A sixth model, for total particle mass, was proposed, but no significant explanatory variables were identified in the analysis. From the outputs of these statistical models, the most suitable particle emission factors were selected.
This selection was based on examination of the statistical robustness of the model outputs, including consideration of conservative average particle emission factors with the lowest standard errors, narrowest 95% confidence intervals and largest sample sizes, together with the explanatory model variables, which were Vehicle Type (all particle metrics), Instrumentation (particle number and PM2.5), Road Type (PM10), and Size Range Measured and Speed Limit on the Road (particle volume). Discussion A multiplicity of factors needs to be considered in determining emission factors that are suitable for modelling motor vehicle emissions, and this study derived a set of average emission factors suitable for quantifying motor vehicle tailpipe particle emissions in developed countries. Conclusions The comprehensive set of tailpipe particle emission factors presented in this study for different vehicle and road type combinations enables the full size range of particles generated by fleets to be quantified, including ultrafine particles (measured in terms of particle number). These emission factors have particular application for regions which may lack the funding to undertake measurements, or which have insufficient measurement data from which to derive emission factors for their region. Recommendations and perspectives In urban areas, motor vehicles continue to be a major source of particulate matter pollution and of ultrafine particles. To manage this major pollution source, it is critical that methods are available to quantify the full size range of particles emitted, for use in traffic modelling and health impact assessments.