857 results for complexity of agents
Abstract:
There is a lack of a knowledge base relating to experiences gained and lessons learnt from previously executed National Health Service (NHS) infrastructure projects in the UK. This is in part a feature of one-off construction projects, which typify healthcare infrastructure, and in part due to the absence of a suitable method for conveying such information. The complexity of the infrastructure delivery process in the NHS makes the construction of healthcare buildings a formidable task. This is particularly the case for NHS trusts that have little or no experience of construction projects. To facilitate understanding of one of the most important aspects of the delivery process, the preparation of a capital investment proposal, the steps taken in developing the business case for an NHS healthcare facility are examined. The context for this examination is provided by the planning process of a healthcare project, studied retrospectively. The process is analysed using a social-science-based method called ‘building stories’, developed at the University of California, Berkeley. By applying this method, stories or narratives are constructed around the data captured in the case study. The findings indicate that the business case process may be used to justify, rather than identify, trusts’ requirements. The study is useful for UK public sector clients, as well as consultants and professionals who aim to participate in the delivery of healthcare infrastructure projects in the UK.
Abstract:
The competitiveness of the construction industry is an important issue for many countries, as the industry makes up a substantial part of their GDP – about 8% in the UK. A number of competitiveness studies have been undertaken at company, industry and national levels. However, there has been little focus on sustainable competitiveness and the many factors involved. This paper addresses that need by investigating what construction industry experts consider to be the most important factors of construction industry competitiveness. It does so by conducting a Delphi survey among industry experts in Finland, Sweden and the UK. A list of 158 factors was compiled from competitiveness reports by institutions such as the World Economic Forum and the International Institute for Management Development, as well as from exploratory workshops in the countries involved in the study. For each country, experts with different perspectives on the industry, including consultants, contractors and clients, were asked to select the 30 factors they considered most influential. They then ranked their chosen factors in order of importance for the competitiveness of their construction industry. The findings after the first round of the Delphi process underline the complexity of the term ‘competitiveness’ and the wide range of factors that are considered important contributors to it. The results also indicate that the factors considered most important for competitiveness are likely to differ from one country to another.
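The abstract does not specify how the individual expert rankings were combined; a common, simple choice for aggregating such rankings is a Borda-style count, sketched below. This is illustrative only (not the paper's method), and the factor names are hypothetical.

```python
from collections import defaultdict

def borda_aggregate(rankings):
    """Combine expert rankings Borda-style: within each expert's list of
    k selected factors, the 1st gets k points, the 2nd k-1, and so on;
    factors an expert did not select score 0. Returns factors sorted by
    total score, highest first."""
    scores = defaultdict(int)
    for ranking in rankings:
        k = len(ranking)
        for pos, factor in enumerate(ranking):
            scores[factor] += k - pos
    return sorted(scores, key=scores.get, reverse=True)

# two hypothetical experts, each ranking their chosen factors
consensus = borda_aggregate([
    ["skilled labour", "procurement", "innovation"],
    ["skilled labour", "innovation"],
])
```

Ties and differing list lengths are handled naturally, which suits a Delphi setting where each expert selects a personal subset of the 158 factors.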
Abstract:
Since 1995, Directive 93/43/EEC has prescribed the application of HACCP principles in food production. However, despite the major importance of food safety, there is a fundamental lack of information on the economic impact of this directive. This project aims to study the costs and benefits of HACCP, including its impact on public health. Given the complexity of the issue, we propose to start with a pilot study limited to the dairy and meat products industries in the Netherlands, the UK and Italy. Information will be obtained at two levels: the production chain and public health. Integration of these results will yield recommendations for regulation and supporting measures at the level of EU member states and industry, and recommendations for an effective and efficient approach to a more comprehensive project on the costs and benefits of HACCP, covering major parts of the food industry in EU member states.
Abstract:
A fermentation system was designed to model the human colonic microflora in vitro. The system provided a framework of mucin beads to encourage the adhesion of bacteria, which was encased within a dialysis membrane. The void between the beads was inoculated with faeces from human donors. Water and metabolites were removed from the fermentation by osmosis using a solution of polyethylene glycol (PEG). The system was concomitantly inoculated alongside a conventional single-stage chemostat. Three fermentations were carried out using inocula from three healthy human donors. Bacterial populations from the chemostat and biofilm system were enumerated using fluorescence in situ hybridization. The culture fluid was also analysed for its short-chain fatty acid (SCFA) content. A higher cell density was achieved in the biofilm fermentation system (taking into account the contribution made by the bead-associated bacteria) as compared with the chemostat, owing to the removal of water and metabolites. Evaluation of the bacterial populations revealed that the biofilm system was able to support two distinct groups of bacteria: bacteria growing in association with the mucin beads and planktonic bacteria in the culture fluid. Furthermore, distinct differences were observed between populations in the biofilm fermenter system and the chemostat, with the former supporting higher populations of clostridia and Escherichia coli. SCFA levels were lower in the biofilm system than in the chemostat, as in the former they were removed via the osmotic effect of the PEG. These experiments demonstrated the potential usefulness of the biofilm system for investigating the complexity of the human colonic microflora and the contribution made by sessile bacterial populations.
Abstract:
The popularity of wireless local area networks (WLANs) has resulted in their dense deployment around the world. While this increases capacity and coverage, the resulting increase in interference can severely degrade the performance of WLANs. However, the impact of interference on throughput in dense WLANs with multiple access points (APs) has received very little prior research attention. This is believed to be due to 1) the inaccurate assumption that throughput is always a monotonically decreasing function of interference and 2) the prohibitively high complexity of an accurate analytical model. In this work, firstly, we provide a useful classification of commonly found interference scenarios. Secondly, we investigate the impact of interference on throughput for each class, based on an approach that determines the possibility of parallel transmissions. Extensive packet-level simulations using OPNET have been performed to support the observations made. Interestingly, the results show that in some topologies increased interference can lead to higher throughput, and vice versa.
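The notion of "possibility of parallel transmissions" can be illustrated with a minimal distance-based sketch. This is an assumption on our part (the paper's actual interference classification is not reproduced here): two AP-client links may transmit concurrently when neither receiver falls inside the other transmitter's interference range.

```python
import math

def dist(a, b):
    """Euclidean distance between two 2-D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def parallel_ok(link1, link2, interference_range):
    """Return True if two (tx, rx) links can plausibly transmit in
    parallel: each receiver must lie outside the interference range
    of the other link's transmitter (a crude protocol-model check)."""
    (tx1, rx1), (tx2, rx2) = link1, link2
    return (dist(tx2, rx1) > interference_range and
            dist(tx1, rx2) > interference_range)

near = (((0, 0), (10, 0)), ((30, 0), (40, 0)))    # links close together
far = (((0, 0), (10, 0)), ((200, 0), (210, 0)))   # links well separated
```

With an interference range of 50 m, the well-separated pair can transmit in parallel while the nearby pair cannot; richer models (carrier sensing, capture effect) are what make the full analysis so complex.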
Abstract:
We provide a system identification framework for the analysis of THz-transient data. The subspace identification algorithm for both deterministic and stochastic systems is used to model the time-domain responses of structures under broadband excitation. Structures with additional time delays can be modelled within the state-space framework using additional state variables. We compare the numerical stability of the commonly used least-squares ARX models with that of the subspace N4SID algorithm, using examples of fourth-order and eighth-order systems under pulse and chirp excitation conditions. These models correspond to structures having two and four simultaneously propagating modes, respectively. We show that chirp excitation combined with the subspace identification algorithm provides a better identification of the underlying mode dynamics than the ARX model does as the complexity of the system increases. The use of an identified state-space model for mode demixing, upon transformation to a decoupled realization form, is illustrated. Applications of state-space models and the N4SID algorithm to THz transient spectroscopy as well as to optical systems are highlighted.
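To make the least-squares ARX idea concrete, here is a minimal first-order sketch (far simpler than the fourth- and eighth-order systems above): the model y[t] = a·y[t-1] + b·u[t-1] is fitted by accumulating and solving the 2×2 normal equations directly. The data below are synthetic and noise-free.

```python
import math

def fit_arx1(u, y):
    """Least-squares fit of the first-order ARX model
        y[t] = a*y[t-1] + b*u[t-1]
    via the 2x2 normal equations (no noise model)."""
    s_yy = s_uu = s_yu = r_y = r_u = 0.0
    for t in range(1, len(y)):
        y1, u1 = y[t - 1], u[t - 1]
        s_yy += y1 * y1          # sum of y[t-1]^2
        s_uu += u1 * u1          # sum of u[t-1]^2
        s_yu += y1 * u1          # cross term
        r_y += y1 * y[t]         # right-hand side terms
        r_u += u1 * y[t]
    det = s_yy * s_uu - s_yu * s_yu
    a = (s_uu * r_y - s_yu * r_u) / det
    b = (s_yy * r_u - s_yu * r_y) / det
    return a, b

# synthetic noise-free data from a known system (a=0.5, b=1.0)
u = [math.sin(0.7 * t) for t in range(200)]
y = [0.0]
for t in range(1, 200):
    y.append(0.5 * y[t - 1] + 1.0 * u[t - 1])

a_hat, b_hat = fit_arx1(u, y)
```

At higher model orders the normal equations become increasingly ill-conditioned, which is the numerical-stability weakness the abstract contrasts with the subspace N4SID approach.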
Abstract:
We introduce and describe the Multiple Gravity Assist problem, a global optimisation problem that is of great interest in the design of spacecraft and their trajectories. We discuss its formalization and show, for one particular problem instance, the performance of selected state-of-the-art heuristic global optimisation algorithms. A deterministic search space pruning algorithm is then developed and its polynomial time and space complexity derived. The algorithm is shown to achieve search space reductions of greater than six orders of magnitude, thus significantly reducing the complexity of the subsequent optimisation.
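The paper's pruning algorithm itself is not reproduced in the abstract, but the general principle of deterministic search-space pruning can be sketched generically: discard any region whose objective lower bound already exceeds the best value found, since the global optimum cannot lie there. Everything below (the 1-D objective, the bound) is illustrative, not the Multiple Gravity Assist formulation.

```python
def prune(boxes, lower_bound, best_upper):
    """Keep only boxes that might still contain the global minimum:
    a box is discarded when its objective lower bound exceeds the
    best objective value found so far."""
    return [box for box in boxes if lower_bound(box) <= best_upper]

# toy 1-D example: minimise f(x) = (x - 3)^2 over [0, 10]
f = lambda x: (x - 3.0) ** 2

def lb(box):
    lo, hi = box
    # valid lower bound for this unimodal f: 0 if the minimiser lies
    # inside the box, otherwise the smaller endpoint value
    return 0.0 if lo <= 3.0 <= hi else min(f(lo), f(hi))

boxes = [(i, i + 1) for i in range(10)]
survivors = prune(boxes, lb, best_upper=f(3.5))  # incumbent from one sample
```

Here a single sampled incumbent eliminates 8 of 10 boxes; in the paper's setting the same deterministic reasoning shrinks the search space by more than six orders of magnitude before any heuristic optimiser runs.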
Abstract:
Background: As people age, language-processing ability changes. While several factors modify discourse comprehension ability in older adults, the syntactic complexity of auditory discourse has received scant attention. This is despite the widely researched domain of syntactic processing of single sentences in older adults. Aims: The aims of this study were to investigate the ability of healthy older adults to understand stories that differed in syntactic complexity, and its relation to working memory. Methods & Procedures: A total of 51 healthy adults (divided into three age groups) took part. They listened to brief stories (syntactically simple and syntactically complex) and had to respond to true/false comprehension probes following each story. Working memory capacity (digit span, forward and backward) was also measured. Outcomes & Results: Differences were found in the ability of healthy older adults to understand simple and complex discourse. The complex discourse in particular was more sensitive in discerning age-related language patterns. Only the complex discourse task correlated moderately with age; there was no correlation between age and simple discourse. As for working memory, moderate correlations were found between working memory and complex discourse. Education did not correlate with discourse comprehension, either simple or complex. Conclusions: Older adults may be less efficient in forming syntactically complex representations, and this may be influenced by limitations in working memory.
Abstract:
Overseas trained teachers (OTTs) have grown in numbers during the past decade, particularly in London and the South East of England. In this recruitment explosion many OTTs have experienced difficulties. In the professional literature, as well as in press coverage, OTTs often become part of a deficit discourse. A small-scale pilot investigation of OTT experience has begun to suggest why OTTs have been successful, as well as the principal challenges they have faced. An important factor in their success was felt to be the quality of support in school from others on the staff. Major challenges included the complexity of the primary curriculum. The argument that globalisation leads to brain drain may be exaggerated. Suggestions for further research are made, which might indicate the positive benefits OTTs can bring to a school.
Abstract:
Research on the production of relative clauses (RCs) has shown that in English, although children start using intransitive RCs at an earlier age, more complex, bi-propositional object RCs appear later (Hamburger & Crain, 1982; Diessel & Tomasello, 2005), and children use resumptive pronouns both in acceptable and unacceptable ways (McKee, McDaniel, & Snedeker, 1998; McKee & McDaniel, 2001). To date, it is unclear whether the same picture emerges in Turkish, a language with SOV word order and overt case marking. Some studies have suggested that subject RCs are more frequent in adults and children (Slobin, 1986) and yield better performance than object RCs (Özcan, 1996), but others have reported the opposite pattern (Ekmekçi, 1990). Our study addresses this issue in Turkish children and adults, and uses participants’ errors to account for the emerging asymmetry between subject and object RCs. Thirty-seven 5- to 8-year-old monolingual Turkish children and 23 adult controls participated in a novel elicitation task involving cards, each consisting of four different pictures (see Figure 1). There were two sets of cards, one for the participant and one for the researcher. The former showed animals with accessories (e.g., a hat), whereas the latter showed animals without accessories. Participants were instructed to hold their card without showing it to the researcher and to describe the animals with particular accessories. This prompted the use of subject and object RCs. The researcher then had to identify the animals in her card (see Figure 2). A preliminary repeated measures ANOVA with the factor Group (pre-school vs. primary-school children) showed no differences between the groups in the use of RCs (p>.1); the two groups were therefore collapsed into one for further analyses.
A repeated measures ANOVA with the factors Group (children, adults) and RC-Type (subject, object) showed that children used fewer RCs than adults (F(1,58)=7.54, p<.01) and that both groups used fewer object than subject RCs (F(1,58)=22.46, p<.001), but there was no Group by RC-Type interaction (see Figure 3). A similar ANOVA on the rate of grammatical RCs showed a main effect of Group (F(1,58)=77.25, p<.001), a main effect of RC-Type (F(1,58)=66.33, p<.001), and a Group by RC-Type interaction (F(1,58)=64.6, p<.001) (see Figure 4). Children made more errors than adults in object RCs (F(1,58)=87.01, p<.001) and more errors in object than in subject RCs (F(1,36)=106.35, p<.001), whereas adults did not show this asymmetry. The error analysis revealed that children systematically avoided the object-relativizing morpheme –DIK, which requires possessive agreement with the genitive-marked subject. They also used resumptive pronouns and resumptive full DPs in the extraction site, similarly to English children (see Figure 5). These findings are in line with Slobin (1986) and Özcan (1996). Children’s errors suggest that they avoid the morphosyntactic complexity of object RCs and try to preserve canonical word order by inserting resumptive pronouns in the extraction site. Finally, the cross-linguistic similarity in the acquisition of RCs in typologically different languages suggests a higher accessibility of subject RCs at both the structural (Keenan & Comrie, 1977) and the conceptual level (Bock & Warren, 1986).
Abstract:
We present a novel kinetic multi-layer model that explicitly resolves mass transport and chemical reaction at the surface and in the bulk of aerosol particles (KM-SUB). The model is based on the PRA framework of gas-particle interactions (Pöschl-Rudich-Ammann, 2007), and it includes reversible adsorption, surface reactions and surface-bulk exchange as well as bulk diffusion and reaction. Unlike earlier models, KM-SUB does not require simplifying assumptions about steady-state conditions and radial mixing. The temporal evolution and concentration profiles of volatile and non-volatile species at the gas-particle interface and in the particle bulk can be modeled along with surface concentrations and gas uptake coefficients. In this study we explore and exemplify the effects of bulk diffusion on the rate of reactive gas uptake for a simple reference system, the ozonolysis of oleic acid particles, in comparison to experimental data and earlier model studies. We demonstrate how KM-SUB can be used to interpret and analyze experimental data from laboratory studies, and how the results can be extrapolated to atmospheric conditions. In particular, we show how interfacial and bulk transport, i.e., surface accommodation, bulk accommodation and bulk diffusion, influence the kinetics of the chemical reaction. Sensitivity studies suggest that in fine air particulate matter, oleic acid and compounds with similar reactivity against ozone (carbon-carbon double bonds) can reach chemical lifetimes of many hours only if they are embedded in a (semi-)solid matrix with very low diffusion coefficients (< 10⁻¹⁰ cm² s⁻¹). Depending on the complexity of the investigated system, unlimited numbers of volatile and non-volatile species and chemical reactions can be flexibly added and treated with KM-SUB.
We propose and intend to pursue the application of KM-SUB as a basis for the development of a detailed master mechanism of aerosol chemistry as well as for the derivation of simplified but realistic parameterizations for large-scale atmospheric and climate models.
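KM-SUB's full flux equations are not reproduced in the abstract, but the core idea of a kinetic multi-layer treatment, coupling layer-to-layer diffusion with a second-order bulk reaction, can be sketched in a few lines of explicit time stepping. The parameters below are dimensionless toy values; this is a minimal sketch of the multi-layer concept, not KM-SUB itself.

```python
def simulate_layers(n_layers, steps, dt, kt, k, x_surface, y0):
    """Toy kinetic multi-layer sketch: oxidant X diffuses between
    n_layers bulk layers (layer-to-layer transport rate kt ~ D/delta^2)
    and reacts with reactant Y (second order, rate k). The outermost
    layer is held at a fixed X concentration, an idealised surface
    coupling. Returns the final Y profile from surface to core."""
    x = [0.0] * n_layers
    y = [y0] * n_layers
    for _ in range(steps):
        new_x = x[:]
        new_x[0] = x_surface                      # fixed surface boundary
        for i in range(1, n_layers):
            flux = kt * (x[i - 1] - x[i])         # exchange with layer above
            if i + 1 < n_layers:
                flux += kt * (x[i + 1] - x[i])    # exchange with layer below
            new_x[i] = max(0.0, x[i] + dt * (flux - k * x[i] * y[i]))
        for i in range(n_layers):
            y[i] = max(0.0, y[i] - dt * k * new_x[i] * y[i])
        x = new_x
    return y

# deeper layers see less oxidant, so the reactant survives longer there
profile = simulate_layers(n_layers=5, steps=200, dt=0.1, kt=1.0,
                          k=0.1, x_surface=1.0, y0=1.0)
```

Lowering kt (the diffusion term) starves the inner layers of oxidant, which is the mechanism behind the long chemical lifetimes the sensitivity studies find for (semi-)solid matrices.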
Abstract:
Current mathematical models in building research have in most studies been limited to linear dynamic systems. A literature review of past studies investigating chaos theory approaches in building simulation models suggests that a chaos-based model is valid and can handle the increasing complexity of building systems, which have dynamic interactions among all the distributed and hierarchical systems on the one hand, and the environment and occupants on the other. The review also identifies the paucity of literature and the need for a suitable methodology for linking chaos theory to mathematical models in building design and management studies. This study is broadly divided into two parts and presented in two companion papers. Part (I) reviews the current state of chaos theory models as a starting point for establishing theories that can be effectively applied to building simulation models. Part (II) develops conceptual frameworks that approach current model methodologies from the theoretical perspective provided by chaos theory, with a focus on the key concepts and their potential to help to better understand the nonlinear dynamic nature of built environment systems. Case studies are also presented which demonstrate the potential usefulness of chaos theory driven models in a wide variety of leading areas of building research. This study distills the fundamental properties and the most relevant characteristics of chaos theory essential to building simulation scientists, initiates a dialogue and builds bridges between scientists and engineers, and stimulates future research about a wide range of issues on building environmental systems.
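The chaos-theory property most relevant here, sensitive dependence on initial conditions, can be illustrated with the logistic map, a standard textbook example (not taken from the reviewed studies):

```python
def logistic_orbit(x0, r=3.9, n=80):
    """Iterate the logistic map x -> r*x*(1-x); r = 3.9 lies in the
    chaotic regime, where nearby trajectories diverge exponentially."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-8)   # perturbed by one part in 10^8
gaps = [abs(p - q) for p, q in zip(a, b)]
```

The two orbits stay indistinguishable for the first few steps and then separate to order one. This is why long-horizon prediction of such systems fails even with an exact model, and it is the kind of nonlinear behaviour the companion papers argue building simulation models should account for.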
Abstract:
Current mathematical models in building research have in most studies been limited to linear dynamic systems. A literature review of past studies investigating chaos theory approaches in building simulation models suggests that a chaos-based model is valid and can handle the increasing complexity of building systems, which have dynamic interactions among all the distributed and hierarchical systems on the one hand, and the environment and occupants on the other. The review also identifies the paucity of literature and the need for a suitable methodology for linking chaos theory to mathematical models in building design and management studies. This study is broadly divided into two parts and presented in two companion papers. Part (I), published in the previous issue, reviews the current state of chaos theory models as a starting point for establishing theories that can be effectively applied to building simulation models. Part (II) develops conceptual frameworks that approach current model methodologies from the theoretical perspective provided by chaos theory, with a focus on the key concepts and their potential to help to better understand the nonlinear dynamic nature of built environment systems. Case studies are also presented which demonstrate the potential usefulness of chaos theory driven models in a wide variety of leading areas of building research. This study distills the fundamental properties and the most relevant characteristics of chaos theory essential to (1) building simulation scientists and designers, (2) initiating a dialogue between scientists and engineers, and (3) stimulating future research on a wide range of issues involved in designing and managing building environmental systems.
Abstract:
This paper provides an overview of the reduction targets that Ireland has set in the context of decarbonising its electricity generation through the use of renewables. The main challenges associated with integrating high levels (>20% of installed capacity) of non-dispatchable renewable generation are identified, and the rising complexity of the challenge as renewable penetration levels increase is highlighted. A list of relevant research questions is then proposed, and an overview is given of the previous work that has gone into answering some of them. In particular, studies of the Irish energy market are identified, the current knowledge gap is described, and areas of necessary future research are suggested.