971 results for Principal-agent
Abstract:
This study considered the relationship between professional learning, teacher agency and school improvement. Specifically, it explored the principal's role in supporting teacher agency in teachers' professional learning. It found that, with appropriate pressure and support from principals, school improvement for the betterment of student learning is attainable through teacher professional learning that is based 'within' a school. In particular, it found that schools need to give greater attention to the allocation of time for teacher professional learning, specifically time before, during and after professional learning activities. Allocating time efficiently and effectively heightens teacher agency in their learning.
Abstract:
Group interaction within crowds is a common phenomenon and has a strong influence on pedestrian behaviour. This paper investigates the impact of passenger group dynamics using an agent-based simulation method for the outbound passenger process at airports. Unlike most passenger-flow models, which treat passengers as individual agents, the proposed model additionally incorporates their group dynamics. The simulation compares passenger behaviour at airport processes and discretionary services under different group formations. Results from experiments (both qualitative and quantitative) show that incorporating group attributes, in particular the interactions with fellow travellers and wavers, can have a significant influence on passengers' activity preferences as well as on the performance and utilisation of services in airport terminals. The model also provides a convenient way to investigate the effectiveness of airport space design and service allocations, which can contribute to positive passenger experiences. The model was created using AnyLogic software and its parameters were initialised using recent research data published in the literature.
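The burstiness that group arrivals create at a service point can be sketched with a toy single-server queue. This is an illustrative sketch only, not the paper's AnyLogic model; the arrival rate and service time below are assumed values:

```python
import random

def mean_wait(n_passengers, group_size, service_time=2.0, seed=0):
    """Toy single-server check-in queue. Passengers travelling in a group
    arrive at the same instant, so larger groups create burstier arrivals
    while the long-run arrival rate stays the same."""
    rng = random.Random(seed)
    free_at, waits = 0.0, []
    t = 0.0
    arrivals = []
    while len(arrivals) < n_passengers:
        # groups arrive proportionally less often, keeping the mean rate fixed
        t += rng.expovariate(1.0 / 3.0) * group_size
        arrivals.extend([t] * min(group_size, n_passengers - len(arrivals)))
    for a in arrivals:
        start = max(a, free_at)   # wait if the desk is busy
        waits.append(start - a)
        free_at = start + service_time
    return sum(waits) / len(waits)
```

Comparing `mean_wait(200, 1)` with `mean_wait(200, 4)` shows how group arrivals alone lengthen queues: later members of a group always wait behind their companions at the desk.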
Abstract:
PURPOSE: This paper describes dynamic agent composition, used to support the development of flexible and extensible large-scale agent-based models (ABMs). This approach was motivated by a need to extend and modify, with ease, an ABM with an underlying networked structure as more information becomes available. Flexibility was also sought so that simulations can be set up with ease, without the need to program. METHODS: The dynamic agent composition approach consists in having agents, whose implementation has been broken into atomic units, come together at runtime to form the complex system representation on which simulations are run. These components capture information at a fine level of detail and provide a vast range of combinations and options for a modeller to create ABMs. RESULTS: A description of dynamic agent composition is given in this paper, as well as details about its implementation within MODAM (MODular Agent-based Model), a software framework which is applied to the planning of the electricity distribution network. Illustrations of the implementation of dynamic agent composition are given for that domain throughout the paper. It is, however, expected that this approach will be beneficial to other problem domains, especially those with a networked structure, such as water or gas networks. CONCLUSIONS: Dynamic agent composition has many advantages over the way agent-based models are traditionally built, for users, for developers, and for agent-based modelling as a scientific approach. Developers can extend the model without the need to access or modify previously written code; they can develop groups of entities independently and add them to those already defined to extend the model. Users can mix and match already-implemented components to form large-scale ABMs, allowing them to quickly set up simulations and easily compare scenarios without the need to program.
Dynamic agent composition provides a natural simulation space over which ABMs of networked structures are represented, facilitating their implementation; it also facilitates verification and validation of models, since alternative simulations can be set up quickly.
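The composition idea can be illustrated with a minimal sketch: atomic behaviour components are registered once and assembled into agents at runtime, so a new component extends the model without touching existing code. The registry, component names, and `step` protocol below are hypothetical, not MODAM's actual API:

```python
COMPONENTS = {}

def component(name):
    """Register an atomic behaviour unit under a name."""
    def register(cls):
        COMPONENTS[name] = cls
        return cls
    return register

@component("consumer")
class Consumer:
    def step(self, agent):
        agent.state["load"] = agent.state.get("load", 0) + 1

@component("solar")
class Solar:
    def step(self, agent):
        agent.state["load"] = agent.state.get("load", 0) - 2

class Agent:
    """An agent is just the runtime composition of named components."""
    def __init__(self, *names):
        self.parts = [COMPONENTS[n]() for n in names]
        self.state = {}

    def step(self):
        for part in self.parts:
            part.step(self)

# a household with rooftop solar, assembled without writing a new agent class
house = Agent("consumer", "solar")
house.step()
```

Adding, say, a battery component later means registering one new class; previously written agents and scenarios are untouched, which is the extensibility property the abstract describes.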
Abstract:
This study focuses on understanding why the range of experience with respect to HIV infection is so diverse, especially as regards the latency period. The challenge is to determine what assumptions can be made about the nature of antigenic invasion and diversity that can be modelled, tested and argued plausibly. To investigate this, an agent-based approach is used to extract high-level behaviour, which cannot be described analytically, from the set of interaction rules at the cellular level. A prototype model encompasses local variation in baseline properties contributing to the individual disease experience and is included in a network which mimics the chain of lymphatic nodes. Dealing with massively multi-agent systems requires major computational effort. However, parallelisation methods are a natural consequence and advantage of the multi-agent approach. These are implemented using the MPI library.
Abstract:
Understanding the dynamics of disease spread is of crucial importance in contexts ranging from estimating the load on medical services to risk assessment and intervention policies against large-scale epidemic outbreaks. However, most of the information becomes available only after the spread itself, and preemptive assessment is far from trivial. Here, we investigate the use of agent-based simulations to model such outbreaks in a stylised urban environment. For most diseases, infection of a new individual may occur through casual contact in crowds as well as through repeated interactions with social partners such as work colleagues or family members. Our model therefore accounts for these two phenomena. Presented in this paper is the initial framework for such a model, detailing the implementation of geographical features and the generation of social structures. Preliminary results are a promising step towards large-scale simulations and the evaluation of potential intervention policies.
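A minimal sketch of the two infection channels (repeated household contacts plus casual crowd contacts) might look as follows. The population size, probabilities and durations are invented for illustration and are not the paper's calibration:

```python
import random

def epidemic_size(n=300, days=60, p_home=0.08, p_crowd=0.015,
                  household=4, infectious_days=7, seed=1):
    """SIR-style agent model: status 0 = susceptible, >0 = days of
    infectiousness remaining, -1 = recovered. Returns final epidemic size."""
    rng = random.Random(seed)
    status = [0] * n
    status[0] = infectious_days            # patient zero
    for _ in range(days):
        infectious = [i for i in range(n) if status[i] > 0]
        newly = []
        for i in infectious:
            base = (i // household) * household
            for j in range(base, min(base + household, n)):
                # repeated contacts with the same household members
                if status[j] == 0 and rng.random() < p_home:
                    newly.append(j)
            for _ in range(5):             # casual contacts in crowds
                j = rng.randrange(n)
                if status[j] == 0 and rng.random() < p_crowd:
                    newly.append(j)
        for i in infectious:
            status[i] -= 1
            if status[i] == 0:
                status[i] = -1             # recovered
        for j in newly:
            if status[j] == 0:
                status[j] = infectious_days
    return sum(1 for s in status if s != 0)
```

Varying `p_home` against `p_crowd` separates the contribution of the two contact types, which is the kind of preemptive what-if question such models are built to answer.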
Abstract:
Background: Recent advances in immunology have highlighted the importance of local properties in the overall progression of HIV infection. In particular, the gastrointestinal (GI) tract is seen as a key area during early infection, and the massive cell depletion associated with it may influence subsequent disease progression. This motivated the development of a large-scale agent-based model. Results: Lymph nodes are explicitly implemented, and considerations on parallel computing permit large simulations and the inclusion of local features. The results obtained show that inclusion of the GI tract in the model leads to an accelerated disease progression, during both the early stages and the long-term evolution, compared to a theoretical, uniform model. Conclusions: These results confirm the potential of treatment policies currently under investigation, which focus on this region. They also highlight the potential of this modelling framework, incorporating both agent-based and network-based components, in the context of complex systems where scaling up alone does not result in models providing additional insights.
Abstract:
The three phases of the macroscopic evolution of the HIV infection are well known, but it is still difficult to understand how the cellular-level interactions come together to create this characteristic pattern and, in particular, why there are such differences in individual responses. An 'agent-based' approach is chosen as a means of inferring high-level behaviour from a small set of interaction rules at the cellular level. Here the emphasis is on cell mobility and viral mutations.
Abstract:
Understanding the dynamics of disease spread is essential in contexts such as estimating the load on medical services, as well as risk assessment and intervention policies against large-scale epidemic outbreaks. However, most of the information becomes available only after the outbreak itself, and preemptive assessment is far from trivial. Here, we report on an agent-based model developed to investigate such epidemic events in a stylised urban environment. For most diseases, infection of a new individual may occur through casual contact in crowds as well as through repeated interactions with social partners such as work colleagues or family members. Our model therefore accounts for these two phenomena. Given the scale of the system, efficient parallel computing is required. In this presentation, we focus on aspects of parallelisation for large network generation and massively multi-agent simulations.
Abstract:
Communication and information diffusion are typically difficult in situations where centralised structures may become unavailable. In this context, decentralised communication based on epidemic broadcast becomes essential. It can be seen as opportunity-based flooding for message broadcasting within a swarm of autonomous agents, where each entity tries to share the information it possesses with its neighbours. As an example application of such a system, we present simulation results where agents have to coordinate to map an unknown area.
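Epidemic (gossip) broadcast is easy to sketch: each round, every informed agent pushes the message to one random neighbour. The ring topology and push-only variant below are illustrative choices, not the specific protocol evaluated in the work:

```python
import random

def gossip_rounds(adjacency, source=0, seed=0, max_rounds=10_000):
    """Push-style epidemic broadcast; returns the number of rounds until
    every agent in `adjacency` holds the message (or max_rounds)."""
    rng = random.Random(seed)
    informed = {source}
    rounds = 0
    while len(informed) < len(adjacency) and rounds < max_rounds:
        for agent in list(informed):
            # opportunity-based sharing with one random neighbour
            informed.add(rng.choice(adjacency[agent]))
        rounds += 1
    return rounds

# a swarm of 16 agents, each able to talk only to its two ring neighbours
ring = [[(i - 1) % 16, (i + 1) % 16] for i in range(16)]
rounds_needed = gossip_rounds(ring)
```

No central coordinator is involved: coverage emerges from purely local exchanges, which is why the scheme survives the loss of centralised structures.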
Abstract:
People’s beliefs about where society has come from and where it is going have personal and political consequences. Here, we conduct a detailed investigation of these beliefs by re-analyzing Kashima et al.’s (Study 2, n = 320) data from China, Australia, and Japan. Kashima et al. identified a “folk theory of social change” (FTSC) belief that people in society become more competent over time, but less warm and moral. Using three-mode principal components analysis, an under-utilized analytical method in psychology, we identified two additional narratives: Utopianism/Dystopianism (people becoming generally better or worse over time) and Expansion/Contraction (an increase/decrease in both positive and negative aspects of character over time). Countries differed in their endorsement of these three narratives of societal change. Chinese participants endorsed the FTSC and Utopian narratives more than those in other countries, Japanese participants held Dystopian and Contraction beliefs more than those in other countries, and Australians’ narratives of societal change fell between those of the Chinese and Japanese samples. Those who believed in greater economic/technological development held stronger FTSC and Expansion/Contraction narratives, but not Utopianism/Dystopianism. By identifying multiple cultural narratives about societal change, this research provides insights into how people across cultures perceive their social world and their visions of the future.
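Three-mode principal components analysis is closely related to the Tucker decomposition, whose classical starting point is the higher-order SVD. A minimal numpy sketch follows; the data shapes (respondents by traits by time points) and ranks are invented for illustration and do not reproduce the study's analysis:

```python
import numpy as np

def hosvd(T, ranks):
    """Truncated higher-order SVD: one orthonormal factor matrix per mode
    plus a small core tensor -- the machinery behind three-mode (Tucker) PCA."""
    factors = []
    for mode in range(T.ndim):
        # unfold the tensor along this mode and take its leading left
        # singular vectors as the mode's component matrix
        unfolded = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolded, full_matrices=False)
        factors.append(U[:, :ranks[mode]])
    core = T
    for mode, U in enumerate(factors):
        # project each mode onto its components to obtain the core tensor
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

# invented example: 20 respondents x 6 character traits x 3 time points
rng = np.random.default_rng(0)
ratings = rng.standard_normal((20, 6, 3))
core, factors = hosvd(ratings, (2, 2, 2))
```

The core tensor links components across the three modes, which is what lets such an analysis recover distinct narratives (trait patterns over time) across respondents.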
Abstract:
Passenger flow simulations are an important tool for designing and managing airports. This thesis examines different boarding strategies for the Boeing 777 and Airbus A380 aircraft in order to investigate their current performance and to determine minimum boarding times. The optimal existing strategies are identified, and new, more efficient strategies are proposed. The methods presented reduce aircraft boarding times, which play an important role in reducing the overall aircraft Turn Time for an airline.
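Why boarding order matters can be seen in a one-dimensional aisle model: passengers walk toward their row and block the aisle while stowing luggage. The cabin size and stowing time below are assumed, and the model ignores seat-shuffle delays, so it is only a sketch of the kind of simulation such a study runs:

```python
def boarding_time(order, rows=30, stow=4):
    """Discrete aisle model. `order` lists each passenger's target row in
    boarding sequence; returns the number of ticks until all are seated."""
    aisle = {}                        # aisle cell -> [target_row, stow_left]
    queue = list(order)
    seated, t = 0, 0
    while seated < len(order):
        t += 1
        for cell in sorted(aisle, reverse=True):   # front of aisle first
            row, left = aisle[cell]
            if cell == row:                        # stowing at own row
                left -= 1
                if left == 0:
                    del aisle[cell]                # sits down, frees the aisle
                    seated += 1
                else:
                    aisle[cell][1] = left
            elif cell + 1 not in aisle:            # walk on if not blocked
                aisle[cell + 1] = aisle.pop(cell)
        if queue and 1 not in aisle:               # next passenger enters
            aisle[1] = [queue.pop(0), stow]
    return t

back_to_front = boarding_time(list(range(30, 0, -1)))
front_to_back = boarding_time(list(range(1, 31)))
```

In this idealised model a strict back-to-front order lets all passengers stow in parallel, while front-to-back serialises them; real strategies are compared between these extremes.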
Abstract:
Poly(sodium acrylate) (PSA)-coated magnetic nanoparticles (PSA-MNPs) were synthesized as a smart osmotic draw agent (SMDA) for water desalination by the forward osmosis (FO) process. The PSA-coated MNPs demonstrated significantly higher osmotic pressure (~30-fold) as well as higher FO water flux (~2–3-fold) compared to their polymer (polyelectrolyte) counterpart, even at a very low concentration of ~0.13 wt.% in the draw solution. The PSA polymer chain conformation (coiled to extended) has a significant impact on the availability of the polymer's hydrophilic groups in solution, which is the driving force for attaining higher osmotic pressure and water flux. When an optimum concentration of the polymer was anchored to a nanoparticle surface, the polymer chains assumed an extended, open conformation, making the functional hydrophilic groups available to attract water molecules. This in turn boosts the osmotic pressure and FO water flux of the PSA-MNP draw agents. The low concentration of the PSA-MNP osmotic agent and the associated high water flux enhance the cost-effectiveness of the proposed SMDA system. In addition, easier magnetic separation and regeneration of the SMDA improve its usability, making it efficient, cost-effective and environmentally friendly.
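The link between exposed hydrophilic groups and osmotic pressure can be illustrated with the van't Hoff relation π = iMRT: the more dissociable groups a particle effectively presents to the solution, the larger its effective van't Hoff factor and hence its osmotic pressure. The factors below are hypothetical, chosen only to mirror the reported ~30-fold ratio; real polyelectrolyte solutions deviate from ideal van't Hoff behaviour:

```python
R = 0.08206        # gas constant, L·atm/(mol·K)
T = 298.15         # temperature, K

def osmotic_pressure_atm(molar_conc, vant_hoff_i):
    """Ideal van't Hoff estimate: pi = i * M * R * T."""
    return vant_hoff_i * molar_conc * R * T

# hypothetical effective van't Hoff factors at the same molar concentration:
coiled_polymer = osmotic_pressure_atm(0.001, 2)    # few groups exposed
coated_mnp = osmotic_pressure_atm(0.001, 60)       # extended chains expose more
```

At a fixed concentration the pressure scales linearly with the effective factor, which is why conformation (coiled versus extended) matters so much for a draw agent.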
Abstract:
As a new research method supplementing existing qualitative and quantitative approaches, agent-based modelling and simulation (ABMS) may fit well within the entrepreneurship field because the core concepts and basic premises of entrepreneurship coincide with the characteristics of ABMS (McKelvey, 2004; Yang & Chandra, 2013). Agent-based simulation is a simulation method based on agent-based models, which are composed of heterogeneous agents and their behavioural rules. By repeatedly carrying out agent-based simulations on a computer, researchers can reproduce each agent's behaviour, the agents' interactive process, and the emerging macroscopic phenomenon over time. Using agent-based simulations, researchers may investigate temporal or dynamic effects of each agent's behaviour.
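The loop described above (heterogeneous agents, simple behavioural rules, an emergent macroscopic pattern) can be sketched with the classic wealth-exchange model; the rules and parameters are a textbook illustration, not drawn from the entrepreneurship literature cited:

```python
import random

def run_exchange_model(n_agents=100, steps=5000, seed=42):
    """Agents start equal and repeatedly pass one unit of wealth to a
    randomly chosen agent; a skewed distribution emerges from the
    identical micro-rules alone."""
    rng = random.Random(seed)
    wealth = [10] * n_agents
    for _ in range(steps):
        giver = rng.randrange(n_agents)
        taker = rng.randrange(n_agents)
        if wealth[giver] > 0:          # behavioural rule: give if you can
            wealth[giver] -= 1
            wealth[taker] += 1
    return wealth

wealth = run_exchange_model()
```

Total wealth is conserved, yet inequality emerges as the simulation runs, illustrating how repeated micro-level interactions produce a macroscopic phenomenon "according to the flow of time".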
Abstract:
Several statistical procedures available in the literature are employed in developing the water quality index (WQI). The complexity and interdependency of the physical and chemical processes in water could be explained more easily if statistical approaches were applied to water quality indexing. The most popular statistical method used in developing a WQI is principal component analysis (PCA). In the literature, WQI development based on classical PCA has mostly used water quality data that have been transformed and normalized, with outliers either retained in or eliminated from the analysis. However, the classical mean and sample covariance matrix used in the classical PCA methodology are not reliable if outliers exist in the data. Since the presence of outliers may affect the computation of the principal components, robust principal component analysis (RPCA) should be used. Focusing on the Langat River, the RPCA-WQI was introduced for the first time in this study to re-calculate the DOE-WQI. Results show that the RPCA-WQI is capable of capturing a distribution similar to that of the existing DOE-WQI.
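The difference between classical and robust centring/scaling before PCA can be sketched with numpy. The median/MAD standardisation below is a simplification of full RPCA, and the sample readings are invented:

```python
import numpy as np

def wqi_scores(X, robust=True):
    """First-principal-component scores as a toy water quality index.
    The robust variant centres and scales with median and MAD so a single
    outlying sample cannot dominate the covariance estimate."""
    X = np.asarray(X, dtype=float)
    if robust:
        centre = np.median(X, axis=0)
        scale = 1.4826 * np.median(np.abs(X - centre), axis=0)
    else:
        centre = X.mean(axis=0)
        scale = X.std(axis=0, ddof=1)
    Z = (X - centre) / scale
    eigvals, eigvecs = np.linalg.eigh(np.cov(Z, rowvar=False))
    pc1 = eigvecs[:, np.argmax(eigvals)]    # dominant principal component
    return Z @ pc1

# invented readings (e.g. pH, BOD, suspended solids); the last row is an outlier
samples = [[7.0, 5.1, 30.0], [6.8, 4.9, 28.0], [7.2, 5.3, 32.0],
           [6.9, 5.0, 29.0], [20.0, 50.0, 300.0]]
scores = wqi_scores(samples)
```

With `robust=False` the outlying sample inflates the mean and standard deviation, shifting every score; the median/MAD version leaves the four normal samples' scores essentially unchanged, which is the motivation for RPCA here.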
Abstract:
Pattern recognition is a promising approach for the identification of structural damage using measured dynamic data. Much of the research on pattern recognition has employed artificial neural networks (ANNs) and genetic algorithms as systematic ways of matching pattern features. The selection of a damage-sensitive and noise-insensitive pattern feature is important for all structural damage identification methods. Accordingly, a neural-network-based damage detection method using frequency response function (FRF) data is presented in this paper. This method can effectively consider uncertainties in the measured data from which training patterns are generated. The proposed method reduces the dimension of the initial FRF data, transforms it into new damage indices, and employs an ANN for the actual damage localization and quantification using damage patterns recognized by the algorithm. In civil engineering applications, the measurement of dynamic response under field conditions always contains noise components from environmental factors. In order to evaluate the performance of the proposed strategy with noise-polluted data, noise-contaminated measurements are also introduced to the proposed algorithm. ANNs with an optimal architecture give minimum training and testing errors and provide precise damage detection results. In order to maximize damage detection performance, the optimal ANN architecture is identified by selecting the number of hidden layers and the number of neurons per hidden layer by trial and error. In real testing, the number of measurement points and the measurement locations used to obtain the structural response are critical for damage detection. Therefore, optimal sensor placement to improve damage identification is also investigated herein. A finite element model of a two-storey framed structure is used to train the neural network.
The network shows accurate performance and gives low error with simulated and noise-contaminated data for single and multiple damage cases. As a result, the proposed method can be used for structural health monitoring and damage detection, particularly for cases where the measurement data are very large. Furthermore, it is suggested that an optimal ANN architecture can detect damage occurrence with good accuracy and can provide damage quantification with reasonable accuracy under varying levels of damage.
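As a toy stand-in for FRF-based damage features, the Frequency Response Assurance Criterion (FRAC) shows how damage changes an FRF. The single-degree-of-freedom FRFs and the stiffness drop below are invented for illustration; they are not the paper's damage indices or test structure:

```python
import numpy as np

def frac(h_ref, h_test):
    """Frequency Response Assurance Criterion: 1.0 for identical FRFs,
    lower values flag a change between reference and test measurements."""
    num = abs(np.vdot(h_ref, h_test)) ** 2
    return num / (np.vdot(h_ref, h_ref).real * np.vdot(h_test, h_test).real)

freqs = np.linspace(1.0, 100.0, 400)       # frequency grid, rad/s

def sdof_frf(wn, zeta=0.02):
    """Receptance FRF of a single-degree-of-freedom oscillator."""
    return 1.0 / (wn ** 2 - freqs ** 2 + 2j * zeta * wn * freqs)

healthy = sdof_frf(50.0)
damaged = sdof_frf(47.0)   # stiffness loss lowers the natural frequency
```

In a detection pipeline, indices of this kind, computed per sensor and frequency band, would form the reduced input vector that the ANN maps to damage location and severity.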