867 results for Multi-scale modelling


Relevance:

30.00%

Publisher:

Abstract:

The value of integrating heat storage into a geothermal district heating system has been investigated. The behaviour of the system under a novel operational strategy has been simulated, focusing on the energetic, economic and environmental effects of incorporating the heat storage within the system. A typical geothermal district heating system consists of several production wells, a system of pipelines for the transportation of the hot water to end-users, one or more re-injection wells and peak-up devices (usually fossil-fuel boilers). Traditionally in these systems, the production wells change their production rate throughout the day according to heat demand, and if their maximum capacity is exceeded the peak-up devices are used to meet the balance of the heat demand. In this study, it is proposed to maintain a constant geothermal production and add heat storage into the network. Hot water is then stored when heat demand is lower than the production, and the stored hot water is released into the system to cover the peak demands (or part of them). The intention is not to phase out the peak-up devices entirely, but to decrease their use, as these will often be installed anyway for back-up purposes. The integration of heat storage into such a system and the novel operational strategy are the main novelties of this thesis. A robust algorithm for the sizing of these systems has been developed. The main inputs are the geothermal production data, the heat demand data over one year or more, and the topology of the installation. The outputs cover the sizing of the whole system, including the necessary number of production wells, the size of the heat storage and the dimensions of the pipelines, amongst others. The results provide several useful insights into the initial design considerations for these systems, emphasizing in particular the importance of heat losses. Simulations are carried out for three different sizings of the installation (small, medium and large) to examine the influence of system scale. In the second phase of work, two algorithms are developed which study in detail the operation of the installation over a single day and over a whole year, respectively. The first algorithm is a potentially powerful tool for the operators of the installation, who can know a priori how to operate it on any given day, given the heat demand. The second algorithm is used to obtain the amount of electricity used by the pumps as well as the amount of fuel used by the peak-up boilers over a whole year. These comprise the main operational costs of the installation and are among the main inputs of the third part of the study. In the third part of the study, an integrated energetic, economic and environmental analysis of the studied installation is carried out, together with a comparison with the traditional case. The results show that by implementing heat storage under the novel operational strategy, heat is generated more cheaply, as all the financial indices improve, more geothermal energy is utilised and less fuel is used in the peak-up boilers, with consequent environmental benefits, compared with the traditional case. Furthermore, it is shown that the most attractive sizing is the large one, although the addition of the heat storage has the greatest impact on the medium sizing. In other words, the geothermal component of the installation should be sized as large as possible.
This analysis indicates that the proposed solution is beneficial from energetic, economic and environmental perspectives, so the aim of the study is achieved in full. Furthermore, the new models for the sizing, operation and economic/energetic/environmental analysis of these kinds of systems can be used with few adaptations for real cases, making the practical applicability of this study evident. Taking this study as a starting point, further work could include the integration of these systems with end-user demands, further analysis of component parts of the installation (such as the heat exchangers) and the integration of a heat pump to maximise utilisation of the geothermal energy.
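As an illustration of the operational strategy summarised above, the following sketch implements the basic dispatch rule: constant geothermal output, charge the store when demand is below production, discharge it at the peaks, and let the peak-up boilers cover whatever remains. The function and variable names, the units and the loss-free hourly energy balance are assumptions made purely for illustration; they are not taken from the thesis.

```python
# Minimal sketch of the dispatch logic described above: constant geothermal
# production, a heat store that absorbs surpluses and covers (part of) the
# peaks, and peak-up boilers for the remainder. All names and the simple
# loss-free, hourly energy balance are illustrative assumptions.

def dispatch(demand_mwh, geo_mwh_per_h, store_capacity_mwh):
    """Run one dispatch pass over an hourly heat-demand series."""
    store = 0.0                    # current storage content [MWh]
    boiler_mwh = []                # peak-up boiler output per hour
    for demand in demand_mwh:
        surplus = geo_mwh_per_h - demand
        if surplus >= 0:
            # Demand below production: charge the store, spill any excess.
            store = min(store + surplus, store_capacity_mwh)
            boiler_mwh.append(0.0)
        else:
            # Demand above production: discharge the store first,
            # then let the peak-up boilers meet the balance.
            from_store = min(-surplus, store)
            store -= from_store
            boiler_mwh.append(-surplus - from_store)
    return boiler_mwh

# Example: a flat 10 MWh/h geothermal supply against a peaky demand profile.
demand = [6, 7, 9, 14, 16, 12, 8, 6]
print(dispatch(demand, geo_mwh_per_h=10.0, store_capacity_mwh=20.0))
```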

Relevance:

30.00%

Publisher:

Abstract:

Topography is often thought of as exclusively linked to mountain ranges formed by plate collision. It is now known, however, that apart from compression, uplift and denudation of rocks may be triggered by rifting, as happens at elevated passive margins, and away from plate boundaries both by intra-plate stress causing reactivation of older structures and by epeirogenic movements driven by mantle dynamics that initiate long-wavelength uplift. In the Cenozoic, central west Britain and other parts of the North Atlantic margins experienced multiple episodes of rock uplift and denudation that were variable at both spatial and temporal scales. The origin of topography in central west Britain is enigmatic and, because of its location, may be related to any of the processes mentioned above. In this study, three low-temperature thermochronometers, the apatite fission track (AFT) and apatite and zircon (U-Th-Sm)/He (AHe and ZHe, respectively) methods, were used to establish the rock cooling history from 200 °C to 30 °C. The samples were collected from intrusive rocks in the high-elevation, high-relief regions of the Lake District (NW England), southern Scotland and northern Wales. AFT ages from the region are youngest (55–70 Ma) in the Lake District and increase northwards into southern Scotland and southwards into north Wales (>200 Ma). AHe and ZHe ages show no systematic pattern; the former range from 50 to 80 Ma and the latter tend to record the post-emplacement cooling of the intrusions (200–400 Ma). Multi-thermochronometric inverse modelling suggests a ubiquitous, rapid Late Cretaceous/early Palaeogene cooling event that is particularly marked in the Lake District and Criffell. The timing and rate of cooling in southern Scotland and in northern Wales are poorly resolved, as the amount of cooling was less than 60 °C. The Lake District plutons were at >110 °C prior to the early Palaeogene; this is attributed to the combined effect of high heat flow from the heat-producing granite batholith and the blanketing effect of the overlying low-conductivity Late Mesozoic limestones and mudstones. Modelling of the heat transfer suggests that this combination produced an elevated geothermal gradient within the sedimentary rocks (50–70 °C/km) that was about twice as high as at the present day. Inverse modelling of the AFT and AHe data, taking the crustal structure into consideration, suggests that denudation was highest, 2.0–2.5 km, in the coastal areas of the Lake District and southern Scotland, gradually decreasing to less than 1 km in the northern Southern Uplands and northern Wales. Both rift-related uplift and intra-plate compression correlate poorly with the timing, location and spatial distribution of the early Palaeogene denudation. The pattern of early Palaeogene denudation correlates with the thickness of magmatic underplating, if the changes of mean topography, Late Cretaceous water depth and eroded rock density are taken into consideration. However, uplift due to underplating alone cannot fully account for the total early Palaeogene denudation. The amount that is not explained by underplating is, however, roughly spatially constant across the study area and can be attributed to the transient thermal uplift induced by the arrival of the mantle plume. No other mechanisms are required to explain the observed pattern of denudation. The onset of denudation across the region is not uniform.
Denudation started at 70–75 Ma in the central part of the Lake District, whereas in the coastal areas rapid erosion appears to have initiated later (65–60 Ma). This is ~10 Ma earlier than the first volcanic manifestation of the proto-Iceland plume and favours the hypothesis of a short period of plume incubation below the lithosphere before the volcanism. In most of the localities, the rocks had cooled to temperatures lower than 30 °C by the end of the Palaeogene, suggesting that the total Neogene denudation was, at a maximum, several hundreds of metres. Rapid cooling in the last 3 million years is resolved in some places in southern Scotland, where it could be explained by glacial erosion and post-glacial isostatic uplift.
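For a first-order sense of how an amount of cooling maps onto denudation, one can divide the temperature drop by a geothermal gradient. The 60 °C and 30 °C/km values below are assumed round numbers chosen only for illustration; the thesis relies on full inverse modelling instead, and the elevated 50–70 °C/km palaeogradient discussed above is exactly why such a naive conversion can mislead.

```latex
% First-order conversion of cooling to denudation (illustrative values only):
\[
  d \;\approx\; \frac{\Delta T}{\mathrm{d}T/\mathrm{d}z}
  \;=\; \frac{60\ ^{\circ}\mathrm{C}}{30\ ^{\circ}\mathrm{C\,km^{-1}}}
  \;=\; 2\ \mathrm{km}.
\]
```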

Relevance:

30.00%

Publisher:

Abstract:

Aim: The spread of non-indigenous species in marine ecosystems world-wide is one of today's most serious environmental concerns. Using mechanistic modelling, we investigated how global change relates to the invasion of European coasts by a non-native marine invertebrate, the Pacific oyster Crassostrea gigas. Location: Bourgneuf Bay on the French Atlantic coast was considered as the northern boundary of C. gigas expansion at the time of its introduction to Europe in the 1970s. From this latitudinal reference, variations in the spatial distribution of the C. gigas reproductive niche were analysed along the north-western European coast from Gibraltar to Norway. Methods: The effects of environmental variations on C. gigas physiology and phenology were studied using a bioenergetics model based on Dynamic Energy Budget theory. The model was forced with environmental time series including in situ phytoplankton data, and satellite data of sea surface temperature and suspended particulate matter concentration. Results: Simulation outputs were successfully validated against in situ oyster growth data. In Bourgneuf Bay, the rise in seawater temperature and phytoplankton concentration has increased C. gigas reproductive effort and led to precocious spawning periods since the 1960s. At the European scale, seawater temperature increase caused a drastic northward shift (1400 km within 30 years) in the C. gigas reproductive niche and optimal thermal conditions for early life stage development. Main conclusions: We demonstrated that the poleward expansion of the invasive species C. gigas is related to global warming and increase in phytoplankton abundance. The combination of mechanistic bioenergetics modelling with in situ and satellite environmental data is a valuable framework for ecosystem studies. It offers a generic approach to analyse historical geographical shifts and to predict the biogeographical changes expected to occur in a climate-changing world.
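For background, Dynamic Energy Budget models of the kind used here typically scale physiological rates with temperature through an Arrhenius correction of the general form below. The abstract does not give the parameterisation actually used in the study, so this is shown only as the generic relation, not as the paper's model.

```latex
% Generic Arrhenius temperature correction used in DEB-type models:
\[
  \dot{k}(T) \;=\; \dot{k}(T_{\mathrm{ref}})\,
  \exp\!\left(\frac{T_A}{T_{\mathrm{ref}}} - \frac{T_A}{T}\right),
\]
% where T and T_ref are absolute temperatures [K] and T_A is the
% species-specific Arrhenius temperature [K].
```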

Relevance:

30.00%

Publisher:

Abstract:

Objectives: Because there is scientific evidence that an appropriate intake of dietary fibre should be part of a healthy diet, given its importance in promoting health, the present study aimed to develop and validate an instrument to evaluate the knowledge of the general population about dietary fibres. Study design: The present study was a cross-sectional study. Methods: The methodological psychometric validation study was conducted with 6010 participants residing in ten countries on three continents. The instrument is a self-response questionnaire aimed at collecting information on knowledge about dietary fibres. For the exploratory factor analysis (EFA), principal component analysis with varimax orthogonal rotation and eigenvalues greater than 1 was chosen. In the confirmatory factor analysis by structural equation modelling (SEM), the covariance matrix was considered and the Maximum Likelihood Estimation algorithm was adopted for parameter estimation. Results: Exploratory factor analysis retained two factors. The first was called Dietary Fibre and Promotion of Health (DFPH) and included 7 questions that explained 33.94% of total variance (α = 0.852). The second was named Sources of Dietary Fibre (SDF) and included 4 questions that explained 22.46% of total variance (α = 0.786). The model was tested by SEM, giving a final solution with four questions in each factor. This model showed a very good fit on practically all the indices considered, except for the χ²/df ratio. The values of average variance extracted (0.458 and 0.483) demonstrate the existence of convergent validity; the results also prove the existence of discriminant validity of the factors (r² = 0.028), and finally good internal consistency was confirmed by the values of composite reliability (0.854 and 0.787). Conclusions: This study validated the KADF scale, increasing the degree of confidence in the information obtained through this instrument in this and in future studies.
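The internal-consistency figures reported above can be reproduced from raw item scores with a few lines of code. The sketch below computes Cronbach's alpha for one hypothetical four-item factor; the item data are invented, and the study's full EFA/SEM pipeline is of course much richer than this.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)

# Toy data: 6 respondents answering the 4 items of one hypothetical factor.
scores = np.array([
    [4, 4, 5, 4],
    [2, 3, 2, 2],
    [5, 4, 5, 5],
    [3, 3, 4, 3],
    [1, 2, 2, 1],
    [4, 5, 4, 4],
])
print(round(cronbach_alpha(scores), 3))
```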

Relevance:

30.00%

Publisher:

Abstract:

When designing systems that are complex, dynamic and stochastic in nature, simulation is generally recognised as one of the best design support technologies, and a valuable aid in the strategic and tactical decision-making process. A simulation model consists of a set of rules that define how a system changes over time, given its current state. Unlike analytical models, a simulation model is not solved but is run, and the changes of system states can be observed at any point in time. This provides an insight into system dynamics rather than just predicting the output of a system based on specific inputs. Simulation is not a decision-making tool but a decision-support tool, allowing better informed decisions to be made. Due to the complexity of the real world, a simulation model can only be an approximation of the target system. The essence of the art of simulation modelling is abstraction and simplification. Only those characteristics that are important for the study and analysis of the target system should be included in the simulation model. The purpose of simulation is either to better understand the operation of a target system, or to make predictions about a target system's performance. It can be viewed as an artificial white-room which allows one to gain insight but also to test new theories and practices without disrupting the daily routine of the focal organisation. What you can expect to gain from a simulation study is very well summarised by FIRMA (2000): if the theory that has been framed about the target system holds, and if this theory has been adequately translated into a computer model, the simulation allows you to answer some of the following questions:
· Which kind of behaviour can be expected under arbitrarily given parameter combinations and initial conditions?
· Which kind of behaviour will a given target system display in the future?
· Which state will the target system reach in the future?
The required accuracy of the simulation model very much depends on the type of question one is trying to answer. To respond to the first question, the simulation model needs to be an explanatory model, which requires less data accuracy. In comparison, the simulation model required to answer the latter two questions has to be predictive in nature and therefore needs highly accurate input data to achieve credible outputs. These predictions involve showing trends, rather than giving precise and absolute predictions of the target system's performance. The numerical results of a simulation experiment on their own are most often not very useful and need to be rigorously analysed with statistical methods. These results then need to be considered in the context of the real system and interpreted in a qualitative way to make meaningful recommendations or compile best-practice guidelines. One needs a good working knowledge of the behaviour of the real system to be able to fully exploit the understanding gained from simulation experiments. The goal of this chapter is to introduce the newcomer to what we think is a valuable addition to the toolset of analysts and decision makers. We will give you a summary of information we have gathered from the literature and of the first-hand experience we have gained during the last five years while developing a better understanding of this exciting technology. We hope that this will help you to avoid some pitfalls that we have unwittingly encountered.
Section 2 is an introduction to the different types of simulation used in Operational Research and Management Science with a clear focus on agent-based simulation. In Section 3 we outline the theoretical background of multi-agent systems and their elements to prepare you for Section 4 where we discuss how to develop a multi-agent simulation model. Section 5 outlines a simple example of a multi-agent system. Section 6 provides a collection of resources for further studies and finally in Section 7 we will conclude the chapter with a short summary.
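The idea that a model is "run rather than solved" is easiest to see in code. The toy below steps a single-server queue hour by hour using one simple transition rule; the arrival and service probabilities are arbitrary illustrative values, not taken from the chapter.

```python
import random

# Minimal illustration of "a set of rules that define how a system changes
# over time, given its current state": a single-server queue stepped hour by
# hour. The arrival/service probabilities are arbitrary toy values.

def step(queue_length, p_arrival=0.6, p_service=0.5):
    """One transition rule: maybe one arrival, maybe one service completion."""
    if random.random() < p_arrival:
        queue_length += 1
    if queue_length > 0 and random.random() < p_service:
        queue_length -= 1
    return queue_length

random.seed(1)
state = 0
history = []
for _ in range(24):            # the model is run, not solved
    state = step(state)
    history.append(state)
print(history)                 # the state can be inspected at any point in time
```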

Relevance:

30.00%

Publisher:

Abstract:

Automatic analysis of human behaviour in large collections of videos is gaining interest, even more so with the advent of file-sharing sites such as YouTube. However, challenges still exist owing to several factors such as inter- and intra-class variations, cluttered backgrounds, occlusion, camera motion, and scale, view and illumination changes. This research focuses on modelling human behaviour for action recognition in videos. The developed techniques are validated on large-scale benchmark datasets and applied to real-world scenarios such as soccer videos. Three major contributions are made. The first contribution is the proper choice of a feature representation for videos. This involved a study of state-of-the-art techniques for action recognition, feature extraction and dimensionality reduction so as to yield the best performance with optimal computational requirements. Secondly, temporal modelling of human behaviour is performed. This involved frequency analysis and temporal integration of local information in the video frames to yield a temporal feature vector; current practices mostly average the frame information over an entire video and neglect the temporal order. Lastly, the proposed framework is applied and further adapted to a real-world scenario, soccer videos. A dataset consisting of video sequences depicting events of players falling is created from actual match data to this end and used to experimentally evaluate the proposed framework.
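The contrast drawn above between averaging frame information and retaining temporal order can be sketched as follows. The feature dimensionality, the number of frequency coefficients kept and the random stand-in for per-frame features are placeholders, not the representation developed in the thesis.

```python
import numpy as np

# Contrast between averaging per-frame features (which discards temporal
# order) and a simple frequency-domain summary that keeps some of it.
rng = np.random.default_rng(0)
frames = rng.normal(size=(120, 64))        # 120 frames x 64-dim local features

# Common practice: one vector that ignores when things happen.
mean_pooled = frames.mean(axis=0)          # shape (64,)

# Frequency analysis per feature dimension: keep the magnitudes of the first
# few temporal frequencies, so periodic motion patterns remain visible.
spectrum = np.abs(np.fft.rfft(frames, axis=0))   # shape (61, 64)
temporal_descriptor = spectrum[:8].ravel()       # shape (8 * 64,)

print(mean_pooled.shape, temporal_descriptor.shape)
```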

Relevance:

30.00%

Publisher:

Abstract:

Background: Depression is a major health problem worldwide and the majority of patients presenting with depressive symptoms are managed in primary care. Current approaches for assessing depressive symptoms in primary care are not accurate in predicting future clinical outcomes, which may potentially lead to over- or under-treatment. The Allostatic Load (AL) theory suggests that by measuring multi-system biomarker levels as a proxy for multi-system physiological dysregulation, it is possible to identify individuals at risk of adverse health outcomes at a prodromal stage. The Allostatic Index (AI) score, calculated by applying statistical formulations to different multi-system biomarkers, has been associated with depressive symptoms. Aims and Objectives: To test the hypothesis that a combination of allostatic load (AL) biomarkers will form a predictive algorithm in defining clinically meaningful outcomes in a population of patients presenting with depressive symptoms. The key objectives were: 1. To explore the relationship between various allostatic load biomarkers and the prevalence of depressive symptoms in patients, especially in patients diagnosed with three common cardiometabolic diseases (Coronary Heart Disease (CHD), Diabetes and Stroke). 2. To explore whether allostatic load biomarkers predict clinical outcomes in patients with depressive symptoms, especially in patients with three common cardiometabolic diseases (CHD, Diabetes and Stroke). 3. To develop a predictive tool to identify individuals with depressive symptoms at highest risk of adverse clinical outcomes. Methods: Datasets used: 'DepChron' was a dataset of 35,537 patients with existing cardiometabolic disease collected as a part of routine clinical practice. 'Psobid' was a research data source containing health-related information from 666 participants recruited from the general population. The clinical outcomes for both datasets were studied using electronic data linkage to hospital and mortality health records, undertaken by Information Services Division, Scotland. Cross-sectional associations between allostatic load biomarkers calculated at baseline and the clinical severity of depression assessed by a symptom score were assessed using logistic and linear regression models in both datasets. Cox's proportional hazards survival analysis models were used to assess the relationship between allostatic load biomarkers at baseline and the risk of adverse physical health outcomes at follow-up in patients with depressive symptoms. The possibility of interaction between depressive symptoms and allostatic load biomarkers in risk prediction of adverse clinical outcomes was studied using the analysis of variance (ANOVA) test. Finally, the value of constructing a risk scoring scale using patient demographics and allostatic load biomarkers for predicting adverse outcomes in depressed patients was investigated using clinical risk prediction modelling and Area Under Curve (AUC) statistics. Key Results: Literature Review Findings: The literature review showed that twelve blood-based peripheral biomarkers were statistically significant in predicting six different clinical outcomes in participants with depressive symptoms. Outcomes related to both mental health (depressive symptoms) and physical health were statistically associated with pre-treatment levels of peripheral biomarkers; however, only two studies investigated outcomes related to physical health.
Cross-sectional Analysis Findings: In DepChron, dysregulation of individual allostatic biomarkers (mainly cardiometabolic) was found to have a non-linear association with increased probability of co-morbid depressive symptoms (as assessed by a Hospital Anxiety and Depression Scale score, HADS-D ≥ 8). A composite AI score constructed using five biomarkers did not lead to any improvement in the observed strength of the association. In Psobid, BMI was found to have a significant cross-sectional association with the probability of depressive symptoms (assessed by General Health Questionnaire GHQ-28 ≥ 5). BMI, triglycerides, high-sensitivity C-reactive protein (CRP) and high-density lipoprotein (HDL) cholesterol were found to have a significant cross-sectional relationship with the continuous measure of GHQ-28. A composite AI score constructed using 12 biomarkers did not show a significant association with depressive symptoms among Psobid participants. Longitudinal Analysis Findings: In DepChron, three clinical outcomes were studied over four years: all-cause death, all-cause hospital admissions and a composite major adverse cardiovascular outcome, MACE (cardiovascular death or admission due to MI/stroke/HF). The presence of depressive symptoms and a composite AI score calculated using mainly peripheral cardiometabolic biomarkers were found to have a significant association with all three clinical outcomes over the following four years in DepChron patients. There was no evidence of an interaction between AI score and presence of depressive symptoms in risk prediction of any of the three clinical outcomes. There was a statistically significant interaction noted between SBP and depressive symptoms in risk prediction of the major adverse cardiovascular outcome, and also between HbA1c and depressive symptoms in risk prediction of all-cause mortality for patients with diabetes. In Psobid, depressive symptoms (assessed by GHQ-28 ≥ 5) did not have a statistically significant association with any of the four outcomes under study at seven years: all-cause death, all-cause hospital admission, MACE and incidence of new cancer. A composite AI score at baseline had a significant association with the risk of MACE at seven years, after adjusting for confounders. A continuous measure of IL-6 observed at baseline had a significant association with the risk of three clinical outcomes: all-cause mortality, all-cause hospital admissions and major adverse cardiovascular events. Raised total cholesterol at baseline was associated with a lower risk of all-cause death at seven years, while a raised waist-hip ratio (WHR) at baseline was associated with a higher risk of MACE at seven years among Psobid participants. There was no significant interaction between depressive symptoms and peripheral biomarkers (individual or combined) in risk prediction of any of the four clinical outcomes under consideration. Risk Scoring System Development: In the DepChron cohort, a scoring system was constructed based on eight baseline demographic and clinical variables to predict the risk of MACE over four years. The AUC value for the risk scoring system was modest at 56.7% (95% CI 55.6 to 57.5%). In Psobid, it was not possible to perform this analysis due to the low event rate observed for the clinical outcomes. Conclusion: Individual peripheral biomarkers were found to have a cross-sectional association with depressive symptoms both in patients with cardiometabolic disease and in middle-aged participants recruited from the general population.
AI score calculated with different statistical formulations was of no greater benefit in predicting concurrent depressive symptoms or clinical outcomes at follow-up, over and above its individual constituent biomarkers, in either patient cohort. SBP had a significant interaction with depressive symptoms in predicting cardiovascular events in patients with cardiometabolic disease; HbA1c had a significant interaction with depressive symptoms in predicting all-cause mortality in patients with diabetes. Peripheral biomarkers may have a role in predicting clinical outcomes in patients with depressive symptoms, especially for those with existing cardiometabolic disease, and this merits further investigation.
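For readers unfamiliar with allostatic index construction, one common formulation in the AL literature counts, for each person, how many biomarkers fall in the high-risk quartile of the sample distribution. The sketch below implements that count-based variant with invented data; biomarker names and directions of risk are placeholders, and the thesis compares several statistical formulations that are not detailed in this abstract.

```python
import numpy as np

# Count-based allostatic index: one point per biomarker in the high-risk
# quartile. Data, biomarker set and risk directions are illustrative only.

def allostatic_index(values, high_is_risky):
    """values: (n_people, n_biomarkers); high_is_risky: bool per biomarker."""
    values = np.asarray(values, dtype=float)
    index = np.zeros(values.shape[0])
    for j, risky_high in enumerate(high_is_risky):
        col = values[:, j]
        cutoff = np.percentile(col, 75 if risky_high else 25)
        flagged = col >= cutoff if risky_high else col <= cutoff
        index += flagged.astype(float)
    return index

rng = np.random.default_rng(42)
data = rng.normal(size=(8, 5))     # 8 people, 5 biomarkers (e.g. SBP, BMI, ...)
print(allostatic_index(data, high_is_risky=[True, True, True, True, False]))
```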

Relevance:

30.00%

Publisher:

Abstract:

A general framework for an ecological model of the English Channel was described in the first of this pair of papers. In this study, it was used to investigate the sensitivity of the model to various factors: model structure, parameter values, boundary conditions and forcing variables. These sensitivity analyses show how important quota formulation for phytoplankton growth is, particularly for growth of dinoflagellates. They also stress the major influence of variables and parameters related to nitrogen. The role played by rivers and particularly the river Seine was investigated. Their influence on global English Channel phytoplanktonic production seems to be relatively low, even though nutrient inputs determine the intensity of blooms in the Bay of Seine. The geographical position of the river Seine's estuary makes it important in fluxes through the Straits of Dover. Finally, the multi-annual study highlights the general stability of the English Channel ecosystem. These global considerations are discussed and further improvements to the model are proposed.
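The abstract does not state which quota formulation the model uses, but the classical example of such a formulation is the Droop cell-quota relation, reproduced below purely for reference.

```latex
% Droop cell-quota growth relation (shown for reference; the formulation
% actually used in the English Channel model is not specified here):
\[
  \mu(Q) \;=\; \mu_{\max}\left(1 - \frac{Q_{\min}}{Q}\right),
\]
% where Q is the internal nutrient quota (e.g. cellular nitrogen content),
% Q_min is the minimum quota at which growth ceases, and mu_max is the growth
% rate approached at large quota.
```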

Relevance:

30.00%

Publisher:

Abstract:

Part 18: Optimization in Collaborative Networks

Relevance:

30.00%

Publisher:

Abstract:

This research aims to construct and validate a Multifactorial Work Motivation Scale (Escala Multifactorial de Motivação no Trabalho) for the Portuguese population. The absence of instruments for measuring several dimensions of motivation led to the development of this motivation scale. The scale comprises 28 items derived from a theoretical review covering several theories of motivation. The validation studies involved 444 employees of new-technology companies, of both sexes, aged between 19 and 34 years. The scale shows good internal consistency (values between 0.72 and 0.84), and a factor analysis revealed a four-factor structure explaining 49% of the variance: motivation related to the organisation of work, motivation related to achievement and power, performance motivation, and motivation associated with involvement. It is further hoped that new studies will be developed from this same scale.

Relevance:

30.00%

Publisher:

Abstract:

Multi-agent systems offer a new and exciting way of understanding the world of work. We apply agent-based modeling and simulation to investigate a set of problems in a retail context. Specifically, we are working to understand the relationship between people management practices on the shop-floor and retail performance. Despite the fact we are working within a relatively novel and complex domain, it is clear that using an agent-based approach offers great potential for improving organizational capabilities in the future. Our multi-disciplinary research team has worked closely with one of the UK’s top ten retailers to collect data and build an understanding of shop-floor operations and the key actors in a department (customers, staff, and managers). Based on this case study we have built and tested our first version of a retail branch agent-based simulation model where we have focused on how we can simulate the effects of people management practices on customer satisfaction and sales. In our experiments we have looked at employee development and cashier empowerment as two examples of shop floor management practices. In this paper we describe the underlying conceptual ideas and the features of our simulation model. We present a selection of experiments we have conducted in order to validate our simulation model and to show its potential for answering “what-if” questions in a retail context. We also introduce a novel performance measure which we have created to quantify customers’ satisfaction with service, based on their individual shopping experiences.
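To make the customer/staff interaction loop concrete, here is a deliberately tiny agent-based sketch of a "what-if" comparison for one people-management practice (cashier empowerment). The classes, probabilities and satisfaction rule are invented for illustration and bear no relation to the calibrated model described in the paper.

```python
import random

# Tiny agent-based sketch of a "what-if" comparison for one shop-floor
# practice. All attributes, probabilities and the satisfaction rule are
# invented for illustration; the actual retail model is far richer.

class Cashier:
    def __init__(self, empowered):
        self.empowered = empowered          # shop-floor practice under study

    def serve(self, customer):
        # Empowered cashiers are assumed (here only) to resolve issues more often.
        resolved = random.random() < (0.9 if self.empowered else 0.6)
        customer.satisfaction += 1 if resolved else -1

class Customer:
    def __init__(self):
        self.satisfaction = 0

def run_day(n_customers, empowered):
    random.seed(7)                          # same customer stream for both runs
    cashier = Cashier(empowered)
    customers = [Customer() for _ in range(n_customers)]
    for c in customers:
        cashier.serve(c)
    return sum(c.satisfaction for c in customers) / n_customers

# "What-if" comparison of one people-management practice:
print(run_day(200, empowered=False), run_day(200, empowered=True))
```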