984 results for effort model
Abstract:
The purpose of this study was to investigate the effect a human link through the One World Youth Project has on a global education program, whether a human connection through the program enhances a student's ability to develop a critical consciousness of global issues, and the effectiveness of the constructivist-based Driver Model of Curriculum Development, which served as the curriculum model in this study. An action research cycle was chosen as this study's research methodology and incorporated 5 qualitative data collection instruments: a) interviews and questionnaires, b) artifacts, c) a teacher journal, d) critical friend's observation forms, and e) my critical friend's post-observation interviews. The data were collected from 4 student participants and my critical friend during all stages of the action research cycle. The results of this study provide educators with data on the impact of human connections in a global education program, the effects these connections have on students, and the effectiveness of the Driver Model of Curriculum Development. This study also provides practical activities and strategies that could be used by educators to develop their own global education programs. The United Nations drafted the Millennium Development Goals in an effort to improve the lives of billions of people across the globe. The eight goals were developed with the support of all member nations, since all human beings are global citizens who have a responsibility to make the world a better place. Students need to develop a critical consciousness of global issues so that they can work with others to eliminate them. Students who are taught to restate the opinions of others will not be prepared to inherit a world full of challenges that will require new, innovative ideas to foster positive change.
Abstract:
The Easy-Play Model is a useful framework for facilitating sport among a diverse group of participants of different ages and ability levels. The model's focus on de-emphasizing competitiveness in an effort to establish an optimally competitive environment has facilitated positive play experiences. This study investigated the experiences of players who have been part of a weekly soccer program implementing the Easy-Play Model. In-depth interviews with 8 participants provided insight concerning the benefits and weaknesses of the approach and the notable experiences of the players. Results provided data confirming the model's effectiveness in facilitating positive social interactions, safe play experiences in which injury is generally a negligible concern, and productive opportunities to be physically active through sport. This study of the Easy-Play Model sets the foundation for future research, which should further add to our understanding of productive ways to engage people in physical activity through sport.
Abstract:
Ethnographic methods were used to study a weekly after-school physical activity program over an eight-month period. Based on Hellison's Teaching Personal and Social Responsibility (TPSR) model, the program sought to foster positive life skills amongst youth. The study investigated how the program influenced the youths' life skills education experience. Several themes were identified from the data, revolving around culture, life skills, pedagogy, and lessons learned. The data suggest that the positive environment developed within the program positively influenced the youths' life skills education experience. The topic of ethnicity as it relates to the experience of marginalized youth in physical activity settings is also discussed. This study supports the TPSR literature and suggests that efforts to establish caring relationships and empower youth contribute to the establishment of a positive atmosphere where life skills education can occur. Beyond this, practical tools were developed through this study to help others deliver life skills education.
Abstract:
The academic study of place has been generally defined by two distinct and highly refined discourses within outdoor recreation research: place attachment and sense of place. Place attachment generally describes the intensity of the place relationship, whereas sense of place approaches place from a more holistic and intimate orientation. This study bridges these two methodologically and theoretically separate areas of place research by re-conceptualizing the way in which place relationships are viewed within outdoor recreation research. The Psychological Continuum Model is used to extend the language of place attachment to incorporate more of the philosophy of sense of place while retaining the empirical strength and utility of place attachment. This extension results in the term place allegiance being coined to depict the strong and profound relationships outdoor recreationists build with their places of outdoor recreation. Using a concurrent mixed methods research design, this study explored place allegiance via an online survey (n = 437) and thirteen in-depth qualitative interviews with outdoor recreationists. Results indicate that place allegiance can be measured through a multi-dimensional model incorporating behaviours, importance, resistance, knowledge and symbolic value. In addition, place allegiance was found to be related to influence on an individual's life course and to his/her willingness to exhibit preservation and protection tendencies. Place allegiance plays an important role in acknowledging the importance of authentic place relationships in an effort to confront placelessness. Wilderness recreation is an important avenue for outdoor recreationists to build strong place relationships.
Abstract:
It is generally accepted that the ocular vascular beds can autoregulate their blood supply in order to counterbalance variations in ocular perfusion pressure (OPP). Several studies have attempted to assess this mechanism by measuring the effects on ocular blood flow (OBF) of a variation in OPP, induced either by exercise or by an increase in intraocular pressure (IOP) through scleral suction. However, the methods used to date to measure OBF have numerous disadvantages and limitations, which makes their clinical use difficult. Recent developments in the non-invasive investigation of ocular blood parameters offer a model capable of measuring, in real time, the oxygen concentration, another important parameter of retinal metabolism. In the present study, this new model is used to measure the effects of dynamic physical exercise on the oxygen concentration in the capillaries of the optic nerve head (COTNO) of young, healthy subjects. Six young non-smoking men participated in the study. The dynamic exercise consisted of a 15-minute stationary-bicycle session raising the pulse to 160 beats per minute. COTNO was measured before and immediately after the exercise session. Arterial blood pressure (BP) and IOP were measured at specific time points, while pulse and blood oxygen saturation (SpO2) at the finger were monitored throughout the experiment. Exercise produced a reduction in IOP in all subjects and a reduction in COTNO in all subjects but one, while SpO2 remained constant in all subjects. A quadratic correlation between the variations in IOP and COTNO was observed. These results suggest a direct correlation between the variations in COTNO and those in OPP and BP. The findings of the present study suggest that the variations in COTNO in a healthy subject following dynamic physical exercise could reflect his capacity to compensate for such an effort. Moreover, the blood metabolic changes induced by dynamic physical exercise could represent a common cause of the variations in IOP and COTNO.
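As a toy illustration of the kind of quadratic relationship reported above, the sketch below fits a second-degree polynomial to hypothetical exercise-induced changes in IOP and COTNO. The numbers are invented for illustration and are not the study's measurements.

```python
# Hypothetical illustration only: invented IOP/COTNO changes, not the
# study's data, used to show how a quadratic correlation between the
# two sets of variations can be fitted and scored.
import numpy as np

d_iop   = np.array([-1.5, -2.0, -2.8, -3.5, -4.1, -5.0])   # IOP change, mmHg
d_cotno = np.array([-0.4, -0.9, -1.8, -2.6, -3.9, -5.6])   # COTNO change, a.u.

coeffs = np.polyfit(d_iop, d_cotno, deg=2)   # quadratic fit
pred = np.polyval(coeffs, d_iop)
ss_res = np.sum((d_cotno - pred) ** 2)
ss_tot = np.sum((d_cotno - d_cotno.mean()) ** 2)
print("coefficients:", coeffs, " R^2 =", 1.0 - ss_res / ss_tot)
```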
Abstract:
This thesis presents a methodology for linking Total Productive Maintenance (TPM) and Quality Function Deployment (QFD). The synergic power of TPM and QFD led to the formation of a new maintenance model named Maintenance Quality Function Deployment (MQFD). This model proved powerful enough to overcome the drawbacks of TPM by taking care of customer voices. Those voices of customers are used to develop the house of quality. The outputs of the house of quality, which are in the form of technical languages, are submitted to top management for making strategic decisions. The technical languages, which are concerned with enhancing maintenance quality, are strategically directed by top management towards the adoption of the eight TPM pillars. The TPM characteristics developed through the development of the eight pillars are fed into the production system, where their implementation is focused on increasing the values of the maintenance quality parameters, namely overall equipment efficiency (OEE), mean time between failures (MTBF), mean time to repair (MTTR), performance quality, availability and mean down time (MDT). The outputs from the production system are expected to be reflected in the form of business values, namely improved maintenance quality, increased profit, upgraded core competence, and enhanced goodwill. A unique feature of the MQFD model is that it is not necessary to change or dismantle the existing process of developing the house of quality and TPM projects, which may already be in practice in the company concerned. Thus, the MQFD model enables the tactical marriage between QFD and TPM. First, the literature was reviewed. The results of this review indicated that no activities had so far been reported on integrating QFD in TPM and vice versa. During the second phase, a survey was conducted in six companies in which TPM had been implemented. The objective of this survey was to locate any traces of QFD implementation in the TPM programmes being implemented in these companies. The survey results indicated that no effort to integrate QFD into TPM had been made in these companies. After completing these two phases, the MQFD model was designed. The details of this work are presented in this thesis. Following this, exploratory studies on implementing the MQFD model in real-world environments were conducted. In addition, an empirical study was carried out to examine the receptivity of the MQFD model among practitioners and across multifarious organizational cultures. Finally, a sensitivity analysis was conducted to find the hierarchy of the various factors influencing MQFD in a company. Throughout the research, the theory and practice of MQFD were juxtaposed by presenting and publishing papers among scholarly communities and by conducting case studies in real-world scenarios.
Abstract:
In the present paper we concentrate on solving sequences of nonsymmetric linear systems with block structure arising from compressible flow problems. We attempt to improve the solution process by sharing part of the computational effort throughout the sequence. This is achieved by applying a cheap updating technique for preconditioners, which we adapted for use in our applications. Tested on three benchmark compressible flow problems, the strategy speeds up the entire computation, with the acceleration being particularly pronounced in phases of unsteady behavior.
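The abstract does not spell out the updating technique itself; the sketch below is a rough Python/SciPy approximation of the underlying idea under assumed names (`solve_sequence` and its arguments are illustrative): amortize preconditioner cost over a sequence of related systems by freezing one ILU factorization and refreshing it only when the Krylov solver starts to struggle.

```python
# Minimal sketch (not the paper's specific update formula): share
# factorization effort across a sequence of sparse systems A_k x = b_k.
import scipy.sparse.linalg as spla

def solve_sequence(matrices, rhs_list, max_it=200, refresh_iters=50):
    ilu, solutions = None, []
    for A, b in zip(matrices, rhs_list):
        if ilu is None:
            ilu = spla.spilu(A.tocsc())        # expensive setup, reused below
        M = spla.LinearOperator(A.shape, matvec=ilu.solve)
        iters = 0
        def count(resid):
            nonlocal iters
            iters += 1
        x, info = spla.gmres(A, b, M=M, maxiter=max_it, callback=count)
        if info != 0 or iters > refresh_iters:  # preconditioner has gone stale
            ilu = spla.spilu(A.tocsc())
            M = spla.LinearOperator(A.shape, matvec=ilu.solve)
            x, info = spla.gmres(A, b, M=M, maxiter=max_it)
        solutions.append(x)
    return solutions
```

The freeze-and-refresh policy is a crude stand-in for the paper's cheap preconditioner updates, but it shows where the shared effort enters: the ILU setup cost is paid once per "phase" rather than once per system.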
Abstract:
To study the behaviour of beam-to-column composite connections, more sophisticated finite element models are required, since the component model has some severe limitations. In this research, a generic finite element model for composite beam-to-column joints with welded connections is developed using current state-of-the-art local modelling. By applying a mechanically consistent scaling method, it can provide the constitutive relationship for a plane rectangular macro element with beam-type boundaries. This macro element, which preserves local behaviour and allows for the transfer of five independent states between local and global models, can then be implemented in high-accuracy frame analysis with the possibility of limit state checks. So that the macro element for the scaling method can be used in a practical manner, a generic geometry program, proposed as a new idea in this study, is also developed for this finite element model. With generic programming, a set of global geometric variables can be input to generate a specific instance of the connection without much effort. The finite element model generated by this generic programming is validated against test results from the University of Kaiserslautern. Finally, two illustrative examples of applying this macro element approach are presented. The first example demonstrates how to obtain the constitutive relationships of the macro element. Under certain assumptions for a typical composite frame, the constitutive relationships can be represented by bilinear laws for the macro bending and shear states, which are then coupled by a two-dimensional surface law with yield and failure surfaces. In the second example, a scaling concept that combines sophisticated local models with frame analysis using the macro element approach is presented as a practical application of this numerical model.
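As a small illustration of the bilinear laws mentioned for the macro bending and shear states, the following sketch evaluates a symmetric bilinear constitutive law with elastic stiffness k1 up to a yield deformation and hardening stiffness k2 beyond it. The parameters are illustrative, not values from the Kaiserslautern tests.

```python
# Generic symmetric bilinear law: elastic branch up to yield deformation
# x_y, hardening branch beyond it (illustrative parameters only).
import numpy as np

def bilinear(x, k1, k2, x_y):
    """Generalised stress for deformation x under a bilinear law."""
    x = np.asarray(x, dtype=float)
    s = np.sign(x)
    ax = np.abs(x)
    return s * np.where(ax <= x_y, k1 * ax, k1 * x_y + k2 * (ax - x_y))
```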
Abstract:
Despite the many models developed for phosphorus concentration prediction at differing spatial and temporal scales, there has been little effort to quantify uncertainty in their predictions. Quantification of model prediction uncertainty is desirable for informed decision-making in river-systems management. An uncertainty analysis of the process-based integrated catchment model of phosphorus (INCA-P), within the generalised likelihood uncertainty estimation (GLUE) framework, is presented. The framework is applied to the Lugg catchment (1,077 km²), a River Wye tributary on the England–Wales border. Daily discharge and monthly phosphorus (total reactive and total) observations, for a limited number of reaches, are used to assess the uncertainty and sensitivity of 44 model parameters identified as being most important for discharge and phosphorus predictions. This study demonstrates that parameter homogeneity assumptions (spatial heterogeneity is treated via land-use-type fractional areas) can achieve higher model fits than a previous expertly calibrated parameter set. The model is capable of reproducing the hydrology, but a threshold Nash-Sutcliffe coefficient of determination (E or R²) of 0.3 is not achieved when simulating observed total phosphorus (TP) data in the upland reaches or total reactive phosphorus (TRP) in any reach. Despite this, the model reproduces the general dynamics of TP and TRP in the point-source-dominated lower reaches. This paper discusses why this application of INCA-P fails to find any parameter sets that simultaneously describe all observed data acceptably. The discussion focuses on the uncertainty of readily available input data, and on whether such process-based models should be used when there is insufficient data to support their many parameters.
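A minimal sketch of the GLUE screening step described above, assuming a hypothetical `run_model` stand-in for an INCA-P simulation: Monte Carlo parameter sets are retained as behavioural when their Nash-Sutcliffe efficiency exceeds the 0.3 threshold used in the study, and the retained runs give prediction bounds.

```python
# GLUE-style behavioural screening (sketch, not the INCA-P code).
import numpy as np

def nash_sutcliffe(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def glue_screen(run_model, param_sets, obs, threshold=0.3):
    behavioural, weights = [], []
    for theta in param_sets:
        sim = run_model(theta)            # one forward run per sampled set
        e = nash_sutcliffe(obs, sim)
        if e > threshold:                 # keep only behavioural runs
            behavioural.append(sim)
            weights.append(e)
    sims = np.asarray(behavioural)
    w = np.asarray(weights) / np.sum(weights)  # NSE as informal likelihood
    # simple (unweighted) 5-95% prediction bounds at each time step
    return w, np.percentile(sims, 5, axis=0), np.percentile(sims, 95, axis=0)
```

Using the efficiency itself as an informal likelihood is one common GLUE choice; the study's conclusion that no parameter set crosses the threshold for TP in upland reaches corresponds to `behavioural` staying empty for those reaches.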
Abstract:
The problem of modeling solar energetic particle (SEP) events is important to both space weather research and forecasting, and yet it has seen relatively little progress. Most important SEP events are associated with coronal mass ejections (CMEs) that drive coronal and interplanetary shocks. These shocks can continuously produce accelerated particles from the ambient medium to well beyond 1 AU. This paper describes an effort to model real SEP events using a Center for Integrated Space Weather Modeling (CISM) MHD solar wind simulation, including a cone model of CMEs to initiate the related shocks. In addition to providing observation-inspired shock geometry and characteristics, this MHD simulation describes the time-dependent field line connections between the observer and the shock source. As a first approximation, we assume a shock-jump-parameterized source strength and spectrum, and that scatter-free transport occurs outside of the shock source, thus emphasizing the role the shock evolution plays in determining the modeled SEP event profile. Three halo CME events, on May 12, 1997, November 4, 1997 and December 13, 2006, are used to test the modeling approach. While challenges arise in the identification and characterization of the shocks in the MHD model results, this approach illustrates the importance to SEP event modeling of globally simulating the underlying heliospheric event. The results also suggest the potential utility of such a model for forecasting and for the interpretation of separated multipoint measurements such as those expected from the STEREO mission.
Abstract:
One of the primary goals of the Center for Integrated Space Weather Modeling (CISM) effort is to assess and improve prediction of the solar wind conditions in near-Earth space, arising from both quasi-steady and transient structures. We compare 8 years of L1 in situ observations to predictions of the solar wind speed made by the Wang-Sheeley-Arge (WSA) empirical model. The mean-square error (MSE) between the observations and the model predictions is used to reach a number of useful conclusions: there is no systematic lag in the WSA predictions, the MSE is found to be highest at solar minimum and lowest during the rise to solar maximum, and the optimal lead time for 1 AU solar wind speed predictions is found to be 3 days. However, MSE is shown to frequently be an inadequate “figure of merit” for assessing solar wind speed predictions. A complementary, event-based analysis technique is therefore developed, in which high-speed enhancements (HSEs) are systematically selected and associated from the observed and model time series. The WSA model is validated using comparisons of the numbers of hit, missed, and false HSEs, along with the timing and speed magnitude errors between the forecasted and observed events. Morphological differences between the different HSE populations are investigated to aid interpretation of the results and improvements to the model. Finally, by defining discrete events in the time series, model predictions from above and below the ecliptic plane can be used to estimate an uncertainty in the predicted HSE arrival times.
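The lag and MSE diagnostics described above can be illustrated with a short sketch (assumed uniformly sampled arrays, not the CISM pipeline): slide the predicted speed series against the observations and compute the MSE at each candidate lead time; a systematic prediction lag would show up as a minimum away from zero.

```python
# MSE as a function of time lag between observed and predicted solar
# wind speed series (both assumed sampled on the same uniform grid).
import numpy as np

def mse_by_lag(observed, predicted, max_lag_days=5, samples_per_day=24):
    results = {}
    for lag_days in range(-max_lag_days, max_lag_days + 1):
        shift = lag_days * samples_per_day
        if shift >= 0:
            o, p = observed[shift:], predicted[:len(predicted) - shift]
        else:
            o, p = observed[:shift], predicted[-shift:]
        n = min(len(o), len(p))
        results[lag_days] = np.mean((o[:n] - p[:n]) ** 2)
    return results  # a systematic lag appears as an off-zero minimum
```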
Abstract:
Intercontinental Transport of Ozone and Precursors (ITOP) (part of the International Consortium for Atmospheric Research on Transport and Transformation (ICARTT)) was an intense research effort to measure the long-range transport of pollution across the North Atlantic and its impact on O3 production. During the aircraft campaign, plumes were encountered containing large concentrations of CO plus other tracers and aerosols from forest fires in Alaska and Canada. A chemical transport model, p-TOMCAT, and new biomass burning emissions inventories are used to study the long-range transport of the emissions and their impact on the tropospheric O3 budget. The fire plume structure is modeled well over long distances until it encounters convection over Europe. The CO values within the simulated plumes closely match aircraft measurements near North America and over the Atlantic, and agree well with MOPITT CO data. O3 and NOx values were initially too high in the model plumes. However, by including additional vertical mixing of O3 above the fires, and by using a lower NO2/CO emission ratio (0.008) for boreal fires, O3 concentrations are brought closer to aircraft measurements, with NO2 closer to SCIAMACHY data. Too little PAN is produced within the simulated plumes, and the simplicity of our VOC scheme may be another reason for the O3 and NOx model-data discrepancies. In the p-TOMCAT simulations, the fire emissions lead to increased tropospheric O3 over North America, the North Atlantic and western Europe through photochemical production and transport. The increased O3 over the Northern Hemisphere in the simulations reaches a peak in July 2004 in the range 2.0 to 6.2 Tg over a baseline of about 150 Tg.
Abstract:
The control of fishing mortality via fishing effort remains fundamental to most fisheries management strategies, even at the local community or co-management level. Decisions to support such strategies require knowledge of the underlying response of the catch to changes in effort. Under adaptive management strategies, even imprecise knowledge of the response is likely to help accelerate the adaptive learning process. Data and institutional capacity requirements to employ multi-species biomass dynamics and age-structured models invariably render their use impractical, particularly in less developed regions of the world. Surplus production models fitted to catch and effort data aggregated across all species offer viable alternatives. The current paper seeks models of this type that best describe the multi-species catch–effort responses in floodplain-rivers, lakes and reservoirs, and reef-based fisheries, based upon among-fishery comparisons and building on earlier work. Three alternative surplus production models were fitted to estimates of catch per unit area (CPUA) and fisher density for 258 fisheries in Africa, Asia and South America. In all cases examined, the best or equal-best fitting model was the Fox type, explaining up to 90% of the variation in CPUA. For lake and reservoir fisheries in Africa and Asia, the Schaefer and an asymptotic model fitted equally well. The Fox model estimates of fisher density (fishers km−2) at maximum yield (iMY) for floodplain-rivers, African lakes and reservoirs, and reef-based fisheries are 13.7 (95% CI [11.8, 16.4]), 27.8 (95% CI [17.5, 66.7]) and 643 (95% CI [459, 1075]), respectively, and compare well with earlier estimates. Corresponding estimates of maximum yield are also given. The significantly higher value of iMY for reef-based fisheries compared to the estimates for rivers and lakes reflects the use of a different measure of fisher density, based upon human population size estimates. The models predict that maximum yield is achieved at a higher fishing intensity in Asian lakes than in African ones, which may reflect the common practice in Asia of stocking lakes to augment natural recruitment. Because of the equilibrium assumptions underlying the models, all the estimates of maximum yield and corresponding levels of effort should be treated with caution.
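A hedged sketch of a Fox-type surplus production fit of the kind used above, with hypothetical observations rather than the paper's data: under the Fox model CPUA = a * i * exp(-b * i), where i is fisher density, yield peaks at i_MY = 1/b with maximum CPUA a/(b*e).

```python
# Fox-type surplus production fit (illustrative, not the paper's
# estimation procedure); the catch-effort observations are invented.
import numpy as np
from scipy.optimize import curve_fit

def fox(i, a, b):
    return a * i * np.exp(-b * i)

# hypothetical observations: fisher density (fishers/km^2) and CPUA
i_obs = np.array([2.0, 5.0, 9.0, 14.0, 20.0, 30.0])
cpua  = np.array([1.8, 4.1, 6.0, 6.9, 6.3, 4.5])

(a, b), _ = curve_fit(fox, i_obs, cpua, p0=(1.0, 0.05))
i_my = 1.0 / b                  # fisher density at maximum yield
max_cpua = a / (b * np.e)       # yield per unit area at that density
print(f"i_MY = {i_my:.1f} fishers/km^2, max CPUA = {max_cpua:.2f}")
```

The closed-form optimum follows from dY/di = a * exp(-b*i) * (1 - b*i) = 0, which is the property that makes the Fox estimates of iMY quoted above straightforward to extract once a and b are fitted.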
Abstract:
We have developed a model that allows players in the building and construction sector, and energy policy makers working on energy strategies, to gauge the interest of investors in the Kingdom of Bahrain in conducting Building Integrated Photovoltaic (BIPV) or Building Integrated Wind Turbine (BIWT) projects, i.e. partially sustainable or green buildings. The model allows the calculation of the Sustainable Building Index (SBI), which ranges from 0.1 (lowest) to 1.0 (highest); the higher the figure, the greater the chance of launching BIPV or BIWT projects. This model was tested in Bahrain, and the calculated SBI was found to be 0.47. This means that an extensive effort must be made through policies on renewable energy, renewable energy education, incentives for BIPV and BIWT projects, environmental awareness, and the promotion of clean and sustainable energy for building and construction projects. Our model can be used internationally to create a "Global SBI" database. The Sustainable Building and Construction Initiative (SBCI) of the United Nations could take on the task of establishing such a database using this model.
Abstract:
An automatic nonlinear predictive model-construction algorithm is introduced, based on forward regression and the predicted-residual-sums-of-squares (PRESS) statistic. The proposed algorithm is based on the fundamental concept of evaluating a model's generalisation capability through cross-validation. This is achieved by using the PRESS statistic as a cost function to optimise the model structure. In particular, the proposed algorithm is developed with the aim of computational efficiency, such that the computational effort, which would usually be extensive for the PRESS statistic, is reduced or minimised. The computation of PRESS is simplified by avoiding a matrix inversion through the orthogonalisation procedure inherent in forward regression, and is further reduced significantly by the introduction of a forward-recursive formula. Based on the properties of the PRESS statistic, the proposed algorithm achieves a fully automated procedure without resorting to a separate validation data set for iterative model evaluation. Numerical examples are used to demonstrate the efficacy of the algorithm.
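A minimal sketch of why PRESS can be computed without refitting, assuming an ordinary least-squares setting. The paper's forward-recursive orthogonal formulation goes further; this shows only the standard leave-one-out identity e_i / (1 - h_ii), which already avoids n separate refits.

```python
# PRESS for a linear-in-the-parameters model via the exact leave-one-out
# residual identity, so no model is ever refitted per left-out point.
import numpy as np

def press_statistic(Phi, y):
    """Phi: n-by-m regressor matrix, y: n-vector of targets."""
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    resid = y - Phi @ theta
    # leverages h_ii = diag(Phi (Phi^T Phi)^-1 Phi^T), via a thin QR
    Q, _ = np.linalg.qr(Phi)
    h = np.sum(Q ** 2, axis=1)
    loo = resid / (1.0 - h)        # exact leave-one-out residuals
    return np.sum(loo ** 2)
```

In a forward-regression loop, candidate terms would be added only while this PRESS value keeps decreasing, giving the automated structure selection the abstract describes.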