964 results for Load model
Abstract:
Traditional psychometric theory and practice classify people according to broad ability dimensions but do not examine how these mental processes occur. Hunt and Lansman (1975) proposed a 'distributed memory' model of cognitive processes with emphasis on how to describe individual differences, based on the assumption that each individual possesses the same components. It is in the quality of these components that individual differences arise. Carroll (1974) expands Hunt's model to include a production system (after Newell and Simon, 1973) and a response system. He developed a framework of factor analytic (FA) factors for the purpose of describing how individual differences may arise from them. This scheme is to be used in the analysis of psychometric tests. Recent advances in the field of information processing are examined and include: 1) Hunt's development of differences between subjects designated as high or low verbal, 2) Miller's pursuit of the magic number seven, plus or minus two, 3) Ferguson's examination of transfer and abilities and, 4) Brown's discoveries concerning strategy teaching and retardates. In order to examine possible sources of individual differences arising from cognitive tasks, traditional psychometric tests were searched for a suitable perceptual task which could be varied slightly and administered to gauge learning effects produced by controlling independent variables. It also had to be suitable for analysis using Carroll's framework. The Coding Task (a symbol substitution test) found in the Performance Scale of the WISC was chosen. Two experiments were devised to test the following hypotheses. 1) High verbals should be able to complete significantly more items on the Symbol Substitution Task than low verbals (Hunt, Lansman, 1975). 2) Having previous practice on a task, where strategies involved in the task may be identified, increases the amount of output on a similar task (Carroll, 1974).
3) There should be a substantial decrease in the amount of output as the load on STM is increased (Miller, 1956). 4) Repeated measures should produce an increase in output over trials and, where individual differences in previously acquired abilities are involved, these should differentiate individuals over trials (Ferguson, 1956). 5) Teaching slow learners a rehearsal strategy would improve their learning such that their learning would resemble that of normals on the same task (Brown, 1974). In the first experiment 60 subjects were divided into high and low verbal, further divided randomly into a practice group and a nonpractice group. Five subjects in each group were assigned randomly to work on a five, seven and nine digit code throughout the experiment. The practice group was given three trials of two minutes each on the practice code (designed to eliminate transfer effects due to symbol similarity) and then three trials of two minutes each on the actual SST task. The nonpractice group was given three trials of two minutes each on the same actual SST task. Results were analyzed using a four-way analysis of variance. In the second experiment 18 slow learners were divided randomly into two groups: one group receiving a planned strategy practice, the other receiving random practice. Both groups worked on the actual code to be used later in the actual task. Within each group subjects were randomly assigned to work on a five, seven or nine digit code throughout. Both practice and actual tests consisted of three trials of two minutes each. Results were analyzed using a three-way analysis of variance. It was found in the first experiment that 1) high or low verbal ability by itself did not produce significantly different results; however, when in interaction with the other independent variables, a difference in performance was noted. 2) The previous practice variable was significant over all segments of the experiment.
Those who received previous practice were able to score significantly higher than those without it. 3) Increasing the size of the load on STM severely restricts performance. 4) The effect of repeated trials proved to be beneficial. Generally, gains were made on each successive trial within each group. 5) In the second experiment, slow learners who were allowed to practice randomly performed better on the actual task than subjects who were taught the code by means of a planned strategy. Upon analysis using the Carroll scheme, individual differences were noted in the ability to develop strategies of storing, searching and retrieving items from STM, and in adopting necessary rehearsals for retention in STM. While these strategies may benefit some, it was found that for others they may be harmful. Temporal aspects and perceptual speed were also found to be sources of variance within individuals. Generally it was found that the largest single factor influencing learning on this task was the repeated measures. What enables gains to be made varies with individuals. There are environmental factors, specific abilities, strategy development, previous learning, amount of load on STM, perceptual and temporal parameters which influence learning, and these have serious implications for educational programs.
Abstract:
Models developed to identify the rates and origins of nutrient export from land to stream require an accurate assessment of the nutrient load present in the water body in order to calibrate model parameters and structure. These data are rarely available at a representative scale and in an appropriate chemical form except in research catchments. Observational errors associated with nutrient load estimates based on these data lead to a high degree of uncertainty in modelling and nutrient budgeting studies. Here, daily paired instantaneous P and flow data for 17 UK research catchments covering a total of 39 water years (WY) have been used to explore the nature and extent of the observational error associated with nutrient flux estimates based on partial fractions and infrequent sampling. The daily records were artificially decimated to create 7 stratified sampling records, 7 weekly records, and 30 monthly records from each WY and catchment. These were used to evaluate the impact of sampling frequency on load estimate uncertainty. The analysis underlines the high uncertainty of load estimates based on monthly data and individual P fractions rather than total P. Catchments with a high baseflow index and/or low population density were found to return a lower RMSE on load estimates when sampled infrequently than those with a low baseflow index and high population density. Catchment size was not shown to be important, though a limitation of this study is that daily records may fail to capture the full range of P export behaviour in smaller catchments with flashy hydrographs, leading to an underestimate of uncertainty in load estimates for such catchments. Further analysis of sub-daily records is needed to investigate this fully.
Here, recommendations are given on load estimation methodologies for different catchment types sampled at different frequencies, and the ways in which this analysis can be used to identify observational error and uncertainty for model calibration and nutrient budgeting studies. (c) 2006 Elsevier B.V. All rights reserved.
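The decimation-and-re-estimation procedure described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the study's actual code: the function names, the flow-weighted ratio estimator, and the simplified 30-day months are all assumptions.

```python
import random

def true_load(conc, flow):
    # "True" annual load from complete daily paired data
    # (concentration x discharge, summed over the water year).
    return sum(c * q for c, q in zip(conc, flow))

def estimated_load(conc, flow, sample_idx):
    # Flow-weighted ratio estimator from a sparse subsample:
    # mean sampled flux / mean sampled flow, scaled by the full flow record.
    n = len(sample_idx)
    sampled_flux = sum(conc[i] * flow[i] for i in sample_idx) / n
    sampled_flow = sum(flow[i] for i in sample_idx) / n
    return sampled_flux / sampled_flow * sum(flow)

def rmse_of_monthly_sampling(conc, flow, n_realisations=30, seed=1):
    # Artificially decimate the daily record many times (one random
    # sampling day per ~30-day month) and measure the spread of the
    # resulting load estimates around the "true" load.
    random.seed(seed)
    truth = true_load(conc, flow)
    errs = []
    for _ in range(n_realisations):
        idx = [random.randrange(m * 30, (m + 1) * 30) for m in range(12)]
        errs.append(estimated_load(conc, flow, idx) - truth)
    return (sum(e * e for e in errs) / len(errs)) ** 0.5
```

With synthetic inputs, the RMSE collapses to zero when concentration is constant (any subsample recovers the true load) and grows as concentration varies independently of flow, which is the mechanism behind the catchment-type differences the study reports.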
Abstract:
A size-structured plant population model is developed to study the evolution of pathogen-induced leaf shedding under various environmental conditions. The evolutionarily stable strategy (ESS) of the leaf shedding rate is determined for two scenarios: i) a constant leaf shedding strategy and ii) an infection load driven leaf shedding strategy. The model predicts that ESS leaf shedding rates increase with nutrient availability. No effect of plant density on the ESS leaf shedding rate is found even though disease severity increases with plant density. When auto-infection, that is, increased infection due to spores produced on the plant itself, plays a key role in further disease increase on the plant, shedding leaves removes disease that would otherwise contribute to disease increase on the plant itself. Consequently, leaf shedding responses to infections may evolve. When external infection, that is, infection due to immigrant spores, is the key determinant, shedding a leaf does not reduce the force of infection on the leaf shedding plant. In this case leaf shedding will not evolve. Under a low external disease pressure, adopting an infection driven leaf shedding strategy is more efficient than adopting a constant leaf shedding strategy, since a plant adopting an infection driven leaf shedding strategy does not shed any leaves in the absence of infection, even when leaf shedding rates are high. A plant adopting a constant leaf shedding rate sheds the same amount of leaves regardless of the presence of infection. Based on the results we develop two hypotheses that can be tested if the appropriate plant material is available.
Abstract:
The IPLV overall coefficient, presented by the Air-Conditioning and Refrigeration Institute (ARI) of America, reflects the running status of the air-conditioning system's refrigerating host (chiller) only; no comparable overall coefficient has been developed to reflect the whole air-conditioning system under part load. In this study, the running time proportions of air-conditioning systems under part load were obtained through analysis of energy consumption data from the practical operation of public buildings in Chongqing, based on the statistical month-by-month distribution of that energy consumption. Compared with the IPLV weighting numbers, the part load operation coefficient derived in this research reflects not only the status of the refrigerating host but also the energy efficiency of the whole air-conditioning system. Because the coefficient results from the processing and analysis of practical running data, it reflects the actual running status of the area and building type concerned. The method also differs from the model analysis that yields the IPLV weighting numbers, in that it produces both the four standard load-proportion coefficients and the part load operation coefficient of the air-conditioning system under any load rate as required.
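For reference, the IPLV weighting that this abstract contrasts against combines chiller part-load efficiencies at four load points with fixed weights (the 0.01/0.42/0.45/0.12 weights below are the commonly cited ARI values, quoted from memory). The second function is a hypothetical sketch, with invented names and band structure, of the kind of measurement-weighted whole-system coefficient the study advocates:

```python
def iplv(cop_100, cop_75, cop_50, cop_25):
    # ARI-style weighting: part-load efficiencies at 100%/75%/50%/25% load,
    # weighted by assumed fractions of operating time at each load point.
    return 0.01 * cop_100 + 0.42 * cop_75 + 0.45 * cop_50 + 0.12 * cop_25

def system_part_load_coefficient(energy_by_band, cop_by_band):
    # Hypothetical alternative: weight each load band by the share of
    # measured operating energy it accounts for, so the coefficient
    # reflects the whole system's monitored behaviour rather than a
    # fixed chiller-only weighting. Band keys are illustrative.
    total = sum(energy_by_band.values())
    return sum((energy_by_band[band] / total) * cop_by_band[band]
               for band in energy_by_band)
```

Because the weights in `iplv` sum to 1.0, a chiller with identical efficiency at all four load points gets an IPLV equal to that efficiency; the measurement-weighted version behaves the same way, but its weights come from monitored data instead of assumed ones.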
Abstract:
A variety of physical and behavioural factors determine the energy demand load profile. Attaining the optimum mix of measures and renewable energy system deployment requires a simple method suitable for use at the early design stage. A simple method of formulating load profile (SMLP) for UK domestic buildings is presented in this paper. Domestic space heating load profiles for different types of houses have been produced using a thermal dynamic model developed with the thermal resistance network method. The daily breakdown of the energy demand load profile of appliances, domestic hot water and space heating can be predicted using this method. The method can produce daily load profiles from the individual house up to the urban community scale, and is suitable for use at the renewable energy system strategic design stage.
Abstract:
The development of an Artificial Neural Network model of UK domestic appliance energy consumption is presented. The model uses diary-style appliance use data and a survey questionnaire collected from 51 households during the summer of 2010. It also incorporates measured energy data and is sensitive to socioeconomic, physical dwelling and temperature variables. A prototype model is constructed in MATLAB using a two-layer feed-forward network with backpropagation training and has a 12:10:24 architecture. Model outputs include appliance load profiles which can be applied to the fields of energy planning (micro renewables and smart grids), building simulation tools and energy policy.
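The 12:10:24 architecture above maps 12 input variables through 10 hidden units to 24 outputs, plausibly one per hour of the daily load profile. The sketch below is a minimal stand-in for such a network (in Python/NumPy rather than the paper's MATLAB; the feature meanings, activation choice and learning rate are assumptions, not details from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 12:10:24 feed-forward net: 12 household/temperature inputs,
# 10 hidden units, 24 outputs (a predicted load value per hour of the day).
W1 = rng.normal(0.0, 0.1, (12, 10))
W2 = rng.normal(0.0, 0.1, (10, 24))

def forward(x):
    h = np.tanh(x @ W1)      # hidden layer activation
    return h, h @ W2         # linear output: 24-hour load profile

def train_step(x, y, lr=0.05):
    # One backpropagation step on squared error for a single example.
    global W1, W2
    h, out = forward(x)
    err = out - y                          # output-layer error
    grad_h = (err @ W2.T) * (1 - h ** 2)   # backprop through tanh
    W2 -= lr * np.outer(h, err)
    W1 -= lr * np.outer(x, grad_h)

x = rng.random(12)   # e.g. occupancy, dwelling and temperature features
y = rng.random(24)   # target 24-hour appliance load profile
before = float(np.sum((forward(x)[1] - y) ** 2))
for _ in range(200):
    train_step(x, y)
after = float(np.sum((forward(x)[1] - y) ** 2))
```

After a few hundred steps the squared error on the training example drops, which is all this toy demonstrates; the real model would be trained on the full diary and survey dataset with proper validation.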
Abstract:
A manageable, relatively inexpensive model was constructed to predict the loss of nitrogen and phosphorus from a complex catchment to its drainage system. The model used an export coefficient approach, calculating the total nitrogen (N) and total phosphorus (P) load delivered annually to a water body as the sum of the individual loads exported from each nutrient source in its catchment. The export coefficient modelling approach permits scaling up from plot-scale experiments to the catchment scale, allowing application of findings from field experimental studies at a suitable scale for catchment management. The catchment of the River Windrush, a tributary of the River Thames, UK, was selected as the initial study site. The Windrush model predicted nitrogen and phosphorus loading within 2% of observed total nitrogen load and 0.5% of observed total phosphorus load in 1989. The export coefficient modelling approach was then validated by application in a second research basin, the catchment of Slapton Ley, south Devon, which has markedly different catchment hydrology and land use. The Slapton model was calibrated within 2% of observed total nitrogen load and 2.5% of observed total phosphorus load in 1986. Both models proved sensitive to the impact of temporal changes in land use and management on water quality in both catchments, and were therefore used to evaluate the potential impact of proposed pollution control strategies on the nutrient loading delivered to the River Windrush and Slapton Ley.
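The export coefficient calculation itself is simple: the annual load is the sum over sources of an export coefficient times the extent of that source. The sketch below illustrates the arithmetic only; the source categories and coefficient values are placeholders, not values from the Windrush or Slapton studies:

```python
def annual_nutrient_load(sources):
    # Export coefficient model: L = sum_i E_i * A_i, where E_i is the
    # export coefficient of source i (e.g. kg P/ha/yr or kg P/person/yr)
    # and A_i its extent (area in ha, or population count).
    return sum(coeff * extent for coeff, extent in sources.values())

# Illustrative catchment: (export coefficient, extent) per source.
catchment = {
    "arable land": (0.30, 12000.0),   # kg P/ha/yr, ha
    "pasture":     (0.10, 8000.0),
    "woodland":    (0.02, 3000.0),
    "population":  (0.40, 15000.0),   # kg P/person/yr, people
}
load = annual_nutrient_load(catchment)  # total annual P load, kg/yr
```

Scaling up from plot to catchment then amounts to choosing coefficients from field experiments and multiplying by mapped land-use areas, which is what makes the approach inexpensive to apply.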
Abstract:
This paper presents an in-depth critical discussion and derivation of a detailed small-signal analysis of the Phase-Shifted Full-Bridge (PSFB) converter. Circuit parasitics, resonant inductance and transformer turns ratio have all been taken into account in the evaluation of this topology’s open-loop control-to-output, line-to-output and load-to-output transfer functions. Accordingly, the significant impact of losses and resonant inductance on the converter’s transfer functions is highlighted. The enhanced dynamic model proposed in this paper enables the correct design of the converter compensator, including the effect of parasitics on the dynamic behavior of the PSFB converter. Detailed experimental results for a real-life 36V-to-14V/10A PSFB industrial application show excellent agreement with the predictions from the model proposed herein.
Abstract:
There is large diversity in simulated aerosol forcing among models that participated in the fifth Coupled Model Intercomparison Project (CMIP5), particularly related to aerosol interactions with clouds. Here we use the reported model data and fitted aerosol-cloud relations to separate the main sources of inter-model diversity in the magnitude of the cloud albedo effect. There is large diversity in the global load and spatial distribution of sulfate aerosol, as well as in global-mean cloud-top effective radius. The use of different parameterizations of aerosol-cloud interactions makes the largest contribution to diversity in modeled radiative forcing (up to -39%, +48% about the mean estimate). Uncertainty in pre-industrial sulfate load also makes a substantial contribution (-15%, +61% about the mean estimate), with smaller contributions from inter-model differences in the historical change in sulfate load and in mean cloud fraction.
Abstract:
More and more households are purchasing electric vehicles (EVs), and this will continue as we move towards a low carbon future. There are various projections as to the rate of EV uptake, but all predict an increase over the next ten years. Charging these EVs will produce one of the biggest loads on the low voltage network. To manage the network, we must not only take into account the number of EVs taken up, but where on the network they are charging, and at what time. To simulate the impact on the network from high, medium and low EV uptake (as outlined by the UK government), we present an agent-based model. We initialise the model to assign an EV to a household based on either random distribution or social influences - that is, a neighbour of an EV owner is more likely to also purchase an EV. Additionally, we examine the effect of peak behaviour on the network when charging is at day-time, night-time, or a mix of both. The model is implemented on a neighbourhood in south-east England using smart meter data (half hourly electricity readings) and real life charging patterns from an EV trial. Our results indicate that social influence can increase the peak demand on a local level (street or feeder), meaning that medium EV uptake can create higher peak demand than currently expected.
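The social-influence initialisation described above can be sketched as a toy agent-based assignment: households adopt EVs one at a time, and a household with an EV-owning neighbour is more likely to be the next adopter. Everything here (function names, the adjacency structure, the influence multiplier) is an illustrative assumption, not the paper's model:

```python
import random

def assign_evs(n_households, n_evs, neighbours, social=True, boost=3.0, seed=0):
    # Assign n_evs EVs across n_households. With social influence on,
    # a household whose neighbour already owns an EV is `boost` times
    # more likely to be picked as the next adopter; with it off, the
    # assignment is a uniform random distribution.
    random.seed(seed)
    owners = set()
    while len(owners) < n_evs:
        weights = []
        for h in range(n_households):
            if h in owners:
                weights.append(0.0)   # already owns an EV
            elif social and any(nb in owners for nb in neighbours.get(h, [])):
                weights.append(boost)
            else:
                weights.append(1.0)
        owners.add(random.choices(range(n_households), weights=weights)[0])
    return owners

# Households along one street: each adjacent to its immediate neighbours.
street = {h: [h - 1, h + 1] for h in range(1, 19)}
street[0], street[19] = [1], [18]
clustered = assign_evs(20, 6, street, social=True)
```

Because socially influenced adoption clusters EVs on the same street or feeder, their charging peaks coincide locally, which is the mechanism behind the paper's finding that medium uptake can exceed currently expected peak demand.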
Abstract:
Climate controls fire regimes through its influence on the amount and types of fuel present and their dryness. CO2 concentration constrains primary production by limiting photosynthetic activity in plants. However, although fuel accumulation depends on biomass production, and hence on CO2 concentration, the quantitative relationship between atmospheric CO2 concentration and biomass burning is not well understood. Here a fire-enabled dynamic global vegetation model (the Land surface Processes and eXchanges model, LPX) is used to attribute glacial–interglacial changes in biomass burning to an increase in CO2, which would be expected to increase primary production and therefore fuel loads even in the absence of climate change, vs. climate change effects. Four general circulation models provided last glacial maximum (LGM) climate anomalies – that is, differences from the pre-industrial (PI) control climate – from the Palaeoclimate Modelling Intercomparison Project Phase 2, allowing the construction of four scenarios for LGM climate. Modelled carbon fluxes from biomass burning were corrected for the model's observed prediction biases in contemporary regional average values for biomes. With LGM climate and low CO2 (185 ppm) effects included, the modelled global flux at the LGM was in the range of 1.0–1.4 Pg C year-1, about a third less than that modelled for PI time. LGM climate with pre-industrial CO2 (280 ppm) yielded unrealistic results, with global biomass burning fluxes similar to or even greater than in the pre-industrial climate. It is inferred that a substantial part of the increase in biomass burning after the LGM must be attributed to the effect of increasing CO2 concentration on primary production and fuel load. Today, by analogy, both rising CO2 and global warming must be considered as risk factors for increasing biomass burning. Both effects need to be included in models to project future fire risks.
Abstract:
This paper reviews the literature concerning the practice of using Online Analytical Processing (OLAP) systems to recall information stored by Online Transactional Processing (OLTP) systems. Such a review provides a basis for discussion on the need for the information recalled through OLAP systems to maintain the contexts of transactions with the data captured by the respective OLTP system. The paper observes an industry trend in which OLTP systems process information into data that are then stored in OLTP databases without the business rules that were used to process them. This necessitates a practice whereby sets of business rules are used to extract, cleanse, transform and load data from disparate OLTP systems into OLAP databases to support the requirements for complex reporting and analytics. These sets of business rules are usually not the same as the business rules used to capture data in particular OLTP systems. The paper argues that differences between the business rules used to interpret these same data sets risk gaps in semantics between information captured by OLTP systems and information recalled through OLAP systems. Literature concerning the modelling of business transaction information as facts with context, as part of the modelling of information systems, was reviewed to identify design trends that are contributing to the design quality of OLTP and OLAP systems. The paper then argues that the quality of OLTP and OLAP systems design has a critical dependency on the capture of facts with associated context, the encoding of facts with contexts into data with business rules, the storage and sourcing of data with business rules, the decoding of data with business rules back into facts with context, and the recall of facts with associated contexts.
The paper proposes UBIRQ, a design model to aid the co-design of data and business rules storage for OLTP and OLAP purposes. The proposed design model provides the opportunity for the implementation and use of multi-purpose databases and business rules stores for OLTP and OLAP systems. Such implementations would enable OLTP systems to record and store data together with the executions of business rules, allowing both OLTP and OLAP systems to query data alongside the business rules used to capture them, thereby ensuring that information recalled via OLAP systems preserves the contexts of transactions as per the data captured by the respective OLTP system.
Abstract:
This study compared splinted and non-splinted implant-supported prosthesis with and without a distal proximal contact using a digital image correlation method. An epoxy resin model was made with acrylic resin replicas of a mandibular first premolar and second molar and with threaded implants replacing the second premolar and first molar. Splinted and non-splinted metal-ceramic screw-retained crowns were fabricated and loaded with and without the presence of the second molar. A single-camera measuring system was used to record the in-plane deformation on the model surface at a frequency of 1.0 Hz under a load from 0 to 250 N. The images were then analyzed with specialist software to determine the direct (horizontal) and shear strains along the model. Not splinting the crowns resulted in higher stress transfer to the supporting implants when the second molar replica was absent. The presence of a second molar and an effective interproximal contact contributed to lower stress transfer to the supporting structures even for non-splinted restorations. Shear strains were higher in the region between the molars when the second molar was absent, regardless of splinting. The opposite was found for the region between the implants, which had higher shear strain values when the second molar was present. When an effective distal contact is absent, non-splinted implant-supported restorations introduce higher direct strains to the supporting structures under loading. Shear strains appear to be dependent also on the region within the model, with different regions showing different trends in strain changes in the absence of an effective distal contact. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
In the first part, an AC distribution network that feeds traction substations is described and characterised, together with its possible influences on the DC traction load flow. Those influences are investigated and mathematically modelled. To corroborate the mathematical model, an example is presented and its results are compared against real measurements.