939 results for integrated lot sizing and scheduling models
Abstract:
Objective: We used demographic and clinical data to design practical classification models for the prediction of neurocognitive impairment (NCI) in people with HIV infection. Methods: The study population comprised 331 HIV-infected patients with available demographic, clinical, and neurocognitive data collected using a comprehensive battery of neuropsychological tests. Classification and regression trees (CART) were developed to obtain detailed and reliable models to predict NCI. Following a practical clinical approach, NCI was considered the main study outcome, and analyses were performed separately in treatment-naïve and treatment-experienced patients. Results: The study sample comprised 52 treatment-naïve and 279 treatment-experienced patients. In the first group, the variables identified as the best predictors of NCI were CD4 cell count and age (correct classification [CC]: 79.6%, 3 final nodes). In treatment-experienced patients, the variables most closely related to NCI were years of education, nadir CD4 cell count, central nervous system penetration-effectiveness score, age, employment status, and confounding comorbidities (CC: 82.1%, 7 final nodes). In patients with an undetectable viral load and no comorbidities, we obtained a fairly accurate model in which the main variables were nadir CD4 cell count, current CD4 cell count, time on current treatment, and past highest viral load (CC: 88%, 6 final nodes). Conclusion: Practical classification models to predict NCI in HIV infection can be obtained using demographic and clinical variables. An approach based on CART analyses may facilitate screening for HIV-associated neurocognitive disorders and complement clinical information about risk and protective factors for NCI in HIV-infected patients.
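A minimal sketch of the CART approach described above, assuming a flat table containing the predictors named in the abstract (the file and column names are hypothetical); it is not the study's exact model:

```python
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.read_csv("hiv_cohort.csv")                        # hypothetical data file
X = df[["age", "education_years", "nadir_cd4", "cd4", "cpe_score"]]
y = df["nci"]                                             # 1 = impaired, 0 = unimpaired

# Limit the tree to a handful of terminal nodes, as in the reported models.
cart = DecisionTreeClassifier(max_leaf_nodes=7, min_samples_leaf=20, random_state=0)
print("CV accuracy:", cross_val_score(cart, X, y, cv=5).mean())
print(export_text(cart.fit(X, y), feature_names=list(X.columns)))
```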
Abstract:
Aim: Global environmental changes challenge traditional conservation approaches based on the selection of static protected areas due to their limited ability to deal with the dynamic nature of the driving forces relevant to biodiversity. The Natura 2000 network (N2000) constitutes a major milestone in biodiversity conservation in Europe, but the degree to which this static network will be able to reach its long-term conservation objectives raises concern. We assessed the changes in the effectiveness of N2000 in a Mediterranean ecosystem between 2000 and 2050 under different combinations of climate and land cover change scenarios. Location: Catalonia, Spain. Methods: Potential distribution changes of several terrestrial bird species of conservation interest included in the European Union's Birds Directive were predicted within an ensemble-forecasting framework that hierarchically integrated climate change and land cover change scenarios. Land cover changes were simulated using a spatially explicit fire-succession model that integrates fire management strategies and vegetation encroachment after the abandonment of cultivated areas as the main drivers of landscape dynamics in Mediterranean ecosystems. Results: Our results suggest that the amount of suitable habitat for the target species will strongly decrease both inside and outside N2000. However, the effectiveness of N2000 is expected to increase in the coming decades because the amount of suitable habitat is predicted to decrease less inside than outside the network. Main conclusions: Such predictions shed light on the key role that the current N2000 may play in the near future and emphasize the need for an integrative conservation perspective in which agricultural, forest and fire management policies are considered in order to effectively preserve key habitats for threatened birds in fire-prone, highly dynamic Mediterranean ecosystems. The results also show the importance of considering landscape dynamics and the synergies between different driving forces when assessing the long-term effectiveness of protected areas for biodiversity conservation.
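As a rough illustration of the ensemble-forecasting idea (not the authors' framework), the sketch below fits several presence/absence models on climate and land-cover covariates and averages their predicted suitability for a future scenario; all file and column names are hypothetical:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression

obs = pd.read_csv("bird_occurrences.csv")        # presence (1) / absence (0) records
covars = ["mean_temp", "annual_precip", "forest_fraction"]
X, y = obs[covars], obs["present"]

members = [LogisticRegression(max_iter=1000),
           RandomForestClassifier(n_estimators=200, random_state=0),
           GradientBoostingClassifier(random_state=0)]
for m in members:
    m.fit(X, y)

# Same covariates simulated for 2050 under a combined climate/land-cover scenario.
fut = pd.read_csv("scenario_2050.csv")
fut["suitability"] = sum(m.predict_proba(fut[covars])[:, 1] for m in members) / len(members)

# Simple effectiveness proxy: mean predicted suitability inside vs. outside N2000.
print(fut.groupby("in_n2000")["suitability"].mean())
```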
Abstract:
The 16p11.2 600 kb BP4-BP5 deletion and duplication syndromes have been associated with developmental delay, autism spectrum disorders, and reciprocal effects on body mass index, head circumference and brain volume. Here, we explored these relationships using novel engineered mouse models carrying a deletion (Del/+) or a duplication (Dup/+) of the Sult1a1-Spn region homologous to the human 16p11.2 BP4-BP5 locus. On a C57BL/6N inbred genetic background, Del/+ mice exhibited reduced weight and impaired adipogenesis, hyperactivity, repetitive behaviors, and recognition memory deficits. In contrast, Dup/+ mice showed largely opposite phenotypes. On an F1 C57BL/6N × C3B hybrid genetic background, we also observed alterations in social interaction in the Del/+ and Dup/+ animals, together with other robust phenotypes affecting recognition memory and weight. To explore the dosage effect of the 16p11.2 genes on metabolism, the Del/+ and Dup/+ models were challenged with a high-fat, high-sugar diet, which revealed opposite energy imbalances. Transcriptomic analysis revealed that the majority of the genes located in the Sult1a1-Spn region were sensitive to dosage, with a major effect on several pathways associated with neurocognitive and metabolic phenotypes. Whereas the behavioral consequences of the genetic dosage of the 16p11.2 region were similar in mice and humans, with activity and memory alterations, the metabolic defects were opposite: adult Del/+ mice are lean, in contrast to the human obese phenotype, and Dup/+ mice are overweight, in contrast to the human underweight phenotype. Together, these data indicate that the dosage imbalance at the 16p11.2 locus perturbs the expression of modifiers outside the CNV that can modulate the penetrance, expressivity and direction of effects in both humans and mice.
Abstract:
The general striving to bring down the number of municipal landfills and to increase the reuse and recycling of waste-derived materials across the EU supports the debates concerning the feasibility and rationality of waste management systems. A substantial decrease in the volume and mass of landfill-disposed waste flows can be achieved by directing suitable waste fractions to energy recovery. Global fossil energy supplies are becoming increasingly valuable and expensive, and efforts are being made to save fossil fuels. Waste-derived fuels offer a potential partial solution to two different problems. First, waste that cannot feasibly be re-used or recycled is utilized in the energy conversion process in line with the EU's Waste Hierarchy. Second, fossil fuels can be saved for purposes other than energy, mainly as transport fuels. This thesis presents the principles of assessing the most sustainable system solution for an integrated municipal waste management and energy system. The assessment process includes:
· formation of a SISMan (Simple Integrated System Management) model of an integrated system, including mass, energy and financial flows, and
· formation of a MEFLO (Mass, Energy, Financial, Legislational, Other decision-support data) decision matrix according to the selected decision criteria, including essential and optional decision criteria.
The methods are described and theoretical examples of their utilization are presented in the thesis. The assessment process involves the selection of different system alternatives (process alternatives for the treatment of different waste fractions) and comparison between the alternatives. The first of the two novelty values of the presented methods is the perspective selected for the formation of the SISMan model. Normally, waste management and energy systems are operated separately according to the targets and principles set for each system. In this thesis, the waste management and energy supply systems are considered as one larger integrated system with the primary target of serving the customers, i.e. citizens, as efficiently as possible in the spirit of sustainable development, including the following requirements:
· reasonable overall costs, including waste management costs and energy costs;
· minimum environmental burdens caused by the integrated waste management and energy system, taking into account the requirement above; and
· social acceptance of the selected waste treatment and energy production methods.
The integrated waste management and energy system is described by forming a SISMan model covering three different flows of the system: energy, mass and financial flows. By defining these three types of flows for the integrated system, the factor results needed in the decision-making process for selecting waste treatment processes for different waste fractions can be calculated. The model and its results form a transparent description of the integrated system under discussion. The MEFLO decision matrix is formed from the results of the SISMan model, combined with additional data, including e.g. environmental restrictions and regional aspects. System alternatives which do not meet the requirements set by legislation can be deleted from the comparison before any closer numerical consideration. The second novelty value of this thesis is the three-level ranking method for combining the factor results of the MEFLO decision matrix.
As a result of the MEFLO decision matrix, a transparent ranking of different system alternatives, including the selection of treatment processes for different waste fractions, is achieved. SISMan and MEFLO are intended to be used in municipal decision-making processes concerning waste management and energy supply as simple, transparent and easy-to-understand tools. The methods can be used in the assessment of existing systems, and particularly in the planning of future regional integrated systems. The principles of SISMan and MEFLO can also be applied in other environments where synergies from integrating two (or more) systems can be obtained. The SISMan flow model and the MEFLO decision matrix can be formed with or without any applicable commercial or free-of-charge tool/software. SISMan and MEFLO are not bound to any libraries or databases containing process information, such as the emission data libraries used in life cycle assessments.
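The MEFLO idea of first screening alternatives against legislative requirements and then ranking the remaining ones can be sketched as below; the alternatives, criteria, weights and numbers are invented for illustration only:

```python
import pandas as pd

# Hypothetical factor results from a SISMan-style flow model (one row per system alternative).
matrix = pd.DataFrame(
    {"cost_eur_per_t":    [95, 110, 80],
     "co2_kg_per_t":      [140, 90, 200],
     "meets_legislation": [True, True, False]},
    index=["A: incineration", "B: SRF + digestion", "C: landfill"])

# Step 1: drop alternatives that fail legislative requirements before any scoring.
feasible = matrix[matrix["meets_legislation"]].drop(columns="meets_legislation")

# Step 2: rank each criterion separately (1 = best; lower values preferred here).
ranks = feasible.rank(ascending=True)

# Step 3: combine the criterion ranks with weights into an overall ranking.
weights = {"cost_eur_per_t": 0.6, "co2_kg_per_t": 0.4}
overall = (ranks * pd.Series(weights)).sum(axis=1).sort_values()
print(overall)
```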
Abstract:
The effect of Heterodera glycines on photosynthesis, leaf area and yield of soybean (Glycine max) was studied in two experiments carried out under greenhouse conditions. Soybean seeds were sown in 1.5 L (Experiment 1) or 5.0 L (Experiment 2) clay pots filled with a 1:1 mixture of field soil and sand sterilized with methyl bromide. Eight days after sowing, seedlings were thinned to one per pot and, one day later, inoculated with 0; 1,200; 3,600; 10,800; 32,400 or 97,200 second-stage juveniles (J2) of H. glycines. Experiment 1 was carried out during the first 45 days after inoculation, while Experiment 2 was conducted over the whole crop cycle. Measurements of photosynthetic rate, stomatal conductance, chlorophyll fluorescence, leaf color, leaf area, and leaf chlorophyll content were taken at ten-day intervals throughout the experiments. Data on fresh root weight, top dry weight, grain yield, number of eggs per gram of roots, and nematode reproduction factor were obtained at the end of the trials. Each treatment was replicated ten times. There was a marked reduction in both photosynthetic rate and chlorophyll content, as well as an evident yellowing of the leaves of the infected plants. Even at the lowest initial population density (Pi), the effects of H. glycines on top dry weight and grain yield were quite severe. Despite the parasitism, soybean yield was highly correlated with the integrated leaf area, and the use of this parameter was accordingly suggested for the design of damage prediction models that include physiological aspects of nematode-diseased plants.
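The "integrated leaf area" used here as a yield predictor can be understood as the leaf area accumulated over the measurement period, e.g. by trapezoidal integration of the ten-day readings (a generic formula, not necessarily the authors' exact computation):

```latex
\mathrm{ILA} \;=\; \int_{t_0}^{t_n} \mathrm{LA}(t)\,dt
\;\approx\; \sum_{i=0}^{n-1} \frac{\mathrm{LA}(t_i) + \mathrm{LA}(t_{i+1})}{2}\,\bigl(t_{i+1} - t_i\bigr)
```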
Abstract:
Transitional flow past a three-dimensional circular cylinder is a widely studied phenomenon, since this problem is of interest for many technical applications. In the present work, the numerical simulation of flow past a circular cylinder was performed using a commercial CFD code (ANSYS Fluent 12.1) with large eddy simulation (LES) and RANS (κ-ε and Shear-Stress Transport (SST) κ-ω) approaches. The turbulent flow at ReD = 1000 and 3900 was simulated to investigate the force coefficients, Strouhal number, flow separation angle, pressure distribution on the cylinder, and the complex three-dimensional vortex shedding in the cylinder wake region. The numerical results extracted from these simulations are in good agreement with the experimental data (Zdravkovich, 1997). Moreover, the influence of grid refinement and time-step size was examined. Numerical calculation of turbulent cross-flow in a staggered tube bundle continues to attract interest due to its importance in engineering applications, as well as the fact that this complex flow represents a challenging problem for CFD. In the present work, time-dependent two-dimensional simulations of a subcritical flow through a staggered tube bundle were performed using the κ-ε, κ-ω and SST models. The predicted turbulence statistics (mean and r.m.s. velocities) are in good agreement with the experimental data (Balabani, 1996). Turbulent quantities such as turbulent kinetic energy and dissipation rate were predicted using the RANS models and compared with each other. The sensitivity to grid and time-step size was analyzed. A sensitivity study of the model constants was carried out using the κ-ε model; it was observed that the turbulence statistics and turbulent quantities are very sensitive to the model constants.
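For reference, the non-dimensional quantities typically compared against the experiments in such cylinder-flow studies are the standard ones:

```latex
\mathrm{Re}_D = \frac{U_\infty D}{\nu}, \qquad
\mathrm{St} = \frac{f_s D}{U_\infty}, \qquad
C_D = \frac{F_D}{\tfrac{1}{2}\rho U_\infty^2 D L}, \qquad
C_p = \frac{p - p_\infty}{\tfrac{1}{2}\rho U_\infty^2}
```

where $f_s$ is the vortex-shedding frequency, $D$ the cylinder diameter, $L$ its span and $U_\infty$ the free-stream velocity.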
Abstract:
The present study was conducted at the Department of Rural Engineering and the Department of Animal Morphology and Physiology of FCAV/Unesp, Jaboticabal, SP, Brazil. The objective was to verify the influence of roof slope, exposure and roofing material on the internal temperature of reduced-scale models of animal production facilities. For the research, 48 reduced-scale, dismountable models with dimensions of 1.00 × 1.00 × 0.50 m were used. The roof was shed-type, and the models faced either North or South, with 24 models for each exposure. Ceramic, galvanized-steel and fiber-cement tiles were used to build the roofs. Slopes were 20, 30, 40 and 50% for the ceramic tile and 10, 30, 40 and 50% for the other two materials. Temperature readings were taken inside the models every hour for 12 months. The results were evaluated with a general linear model in a nested 3 × 4 × 2 factorial arrangement, in which the effects of roofing material and exposure were nested within the factor slope. Means were compared by the Tukey test at 5% probability. The analysis showed that, at the geographic coordinates of Jaboticabal (SP, Brazil), the internal temperature of the models decreased with increasing slope and with exposure to the South.
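A rough sketch of the nested factorial analysis described above (using statsmodels; the data file and column names are hypothetical and this is not the authors' code):

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hourly internal temperatures with factors slope (%), material and exposure (N/S).
df = pd.read_csv("roof_models.csv")              # hypothetical file

# Material and exposure nested within slope, mirroring the described design.
fit = smf.ols("temp ~ C(slope) + C(slope):C(material) + C(slope):C(exposure)",
              data=df).fit()
print(sm.stats.anova_lm(fit))

# Tukey comparison of roofing materials at the 5% level.
print(pairwise_tukeyhsd(df["temp"], df["material"], alpha=0.05))
```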
Abstract:
A web service is a software system that provides a machine-processable interface to other machines over the network using different Internet protocols. Web services are increasingly used in industry to automate different tasks and offer services to a wider audience. The REST architectural style aims at producing scalable and extensible web services using technologies that play well with the existing tools and infrastructure of the web. It provides a uniform set of operations that can be used to invoke a CRUD interface (create, retrieve, update and delete) of a web service. The stateless behavior of the service interface requires that every request to a resource is independent of the previous ones, which facilitates scalability. Automated systems, e.g., hotel reservation systems, provide advanced scenarios for stateful services that require a certain sequence of requests to be followed in order to fulfill the service goals. Designing and developing such services for advanced scenarios under REST constraints requires rigorous approaches capable of creating web services that can be trusted for their behavior. Systems that can be trusted for their behavior can be termed dependable systems. This thesis presents an integrated design, analysis and validation approach that helps the service developer create dependable, stateful REST web services. The main contribution of this thesis is a novel model-driven methodology to design behavioral REST web service interfaces and their compositions. The behavioral interfaces provide information on what methods can be invoked on a service and on the pre- and post-conditions of these methods. The methodology uses the Unified Modeling Language (UML) as the modeling language, which has a wide user base and mature, continuously evolving tools. We use UML class diagrams and UML state machine diagrams with additional design constraints to provide resource and behavioral models, respectively, for designing REST web service interfaces. These service design models serve as a specification document, and the information presented in them has manifold applications. The service design models also contain information about the time and domain requirements of the service, which supports requirement traceability, an important part of our approach. Requirement traceability helps capture faults in the design models and other elements of the software development environment by tracing the unfulfilled requirements of the service back and forth. Information about service actors is also included in the design models; it is required for authenticating service requests by authorized actors, since not all types of users have access to all resources. In addition, by following our design approach, the service developer can ensure that the designed web service interfaces are REST compliant. The second contribution of this thesis is the consistency analysis of behavioral REST interfaces. To overcome inconsistency problems and design errors in our service models, we use semantic technologies. The REST interfaces are represented in the web ontology language OWL 2, which can be part of the semantic web. These interfaces are checked with OWL 2 reasoners for unsatisfiable concepts, which would result in implementations that fail. This work is fully automated thanks to the implemented translation tool and the existing OWL 2 reasoners.
The third contribution of this thesis is the verification and validation of REST web services. We use model checking techniques with the UPPAAL model checker for this purpose. Timed automata are generated from the UML-based service design models with our transformation tool and are verified for basic properties such as deadlock freedom, liveness, reachability and safety. The implementation of a web service is tested using a black-box testing approach. Test cases are generated from the UPPAAL timed automata and, using the online testing tool UPPAAL TRON, the service implementation is validated at runtime against its specification. Requirement traceability is also addressed in our validation approach, with which we can see which service goals are met and trace unfulfilled service goals back to faults in the design models. A final contribution of the thesis is the implementation of behavioral REST interfaces and service monitors from the service design models. The partial code generation tool creates code skeletons of REST web services with method pre- and post-conditions. The preconditions constrain the user to invoke the stateful REST service under the right conditions, and the post-conditions constrain the service developer to implement the right functionality. The details of the methods can be inserted manually by the developer as required. We do not target complete automation because we focus only on the interface aspects of the web service. The applicability of the approach is demonstrated with a pedagogical example of a hotel room booking service and a relatively complex worked example of a holiday booking service taken from an industrial context. The former presents a simple explanation of the approach, and the latter shows how stateful and timed web services offering complex scenarios and involving other web services can be constructed using our approach.
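The kind of code skeleton with method pre- and post-conditions that such a generation step produces can be illustrated roughly as follows (a hypothetical Python sketch of a stateful booking resource, not the thesis tool's actual output):

```python
class BookingResource:
    """Stateful REST resource with a simple lifecycle: (none) -> CREATED -> CONFIRMED."""

    def __init__(self):
        self.state = None

    def post(self, dates):
        # Precondition: the booking must not exist yet.
        assert self.state is None, "precondition violated: booking already exists"
        self.state = "CREATED"
        # Postcondition: a new booking is in the CREATED state.
        assert self.state == "CREATED"
        return {"status": 201, "dates": dates}

    def put_confirmation(self, payment_ref):
        # Precondition: only a created booking can be confirmed.
        assert self.state == "CREATED", "precondition violated: nothing to confirm"
        self.state = "CONFIRMED"
        # Postcondition: the booking is confirmed.
        assert self.state == "CONFIRMED"
        return {"status": 200, "payment": payment_ref}
```

The preconditions restrict the caller to the allowed request sequence, while the postconditions document what the (manually completed) method body must guarantee.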
Abstract:
The aim of this thesis was to investigate how slurry viscosity and rheology affect pumping with a peristaltic hose pump and an eccentric progressive cavity pump. In addition, the pressure pulsation formed in the hose pump was studied. Pressure pulsation was investigated by pumping different slurries and by using different pipe materials. Pressure and power curves were determined for both pumps, and an NPSHR curve was also determined for the progressive cavity pump. The literature part of the thesis covers the classification of fluids into different rheology types, as well as the theories and models used to identify them. Special attention is paid to non-Newtonian fluids, which were also used in the experimental part of the thesis. In addition, the literature part discusses pumps, parameters for pump sizing, and pressure pulsation in hose pumps. Starch, bentonite, and carboxymethyl cellulose slurries were used in the experimental part. The slurries were pumped with a Flowrox peristaltic hose pump (LPP-T32) and an eccentric progressive cavity pump (C10/10). A sample was taken from each slurry and analyzed for concentration, viscosity and rheology type. The pipe materials used in the pressure pulsation experiments were steel and an elastic pipe, and a prototype pulsation dampener was also used. The pulsation experiments indicated that the elastic pipe and the prototype pulsation dampener attenuated pressure pulsation better than the steel pipe at low pressure levels. The differences between the materials disappeared when the pressure level and pump rotation speed increased. In the slurry experiments, pulsation varied depending on the rheology and viscosity of the slurry. According to the experiments, rheology did not significantly affect pump power consumption or efficiency.
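The standard constitutive models commonly used to classify such slurries (Newtonian, Bingham plastic, power-law and Herschel-Bulkley) relate shear stress $\tau$ to shear rate $\dot\gamma$:

```latex
\tau = \mu\,\dot\gamma, \qquad
\tau = \tau_0 + \mu_p\,\dot\gamma, \qquad
\tau = K\,\dot\gamma^{\,n}, \qquad
\tau = \tau_0 + K\,\dot\gamma^{\,n}
```

with flow behavior index $n < 1$ for shear-thinning (pseudoplastic) fluids such as CMC solutions and $n > 1$ for shear-thickening fluids; $\tau_0$ is the yield stress and $K$ the consistency index.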
Abstract:
The widespread establishment of new residential neighbourhoods on the urban periphery is partly responsible for the large-scale decline in vegetation cover and increase in impervious surfaces. Cities now face a constant increase in runoff production, which they must manage through a vast network of sewers and pipes. Data on the impacts of these residential neighbourhood models show that this form of development degrades natural and aquatic environments. The present study tests the Open Space Design planning strategy by comparing the effect of three residential development layouts on the weighted runoff coefficient (Cp). The three situations studied are (1) the current development as designed by the developer, (2) a neighbourhood scenario aimed at preserving existing watercourses and reducing lot sizes and impervious surfaces, and (3) a neighbourhood with denser housing types. The weighted coefficients obtained are 0.50 for the current neighbourhood, 0.40 for scenario 1 and 0.34 for scenario 2. This exercise shows that densification of the built fabric, the nature of the surfaces and the spatial organization can combine to reduce the runoff produced by a neighbourhood. The study also highlights the importance of runoff management in land-use planning and development.
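The weighted runoff coefficient compared across the three layouts is simply the area-weighted average of the surface-specific runoff coefficients:

```latex
C_p \;=\; \frac{\sum_{i} C_i\, A_i}{\sum_{i} A_i}
```

where $C_i$ is the runoff coefficient of surface type $i$ (roof, pavement, lawn, woodland, ...) and $A_i$ its area.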
Abstract:
Location decisions are often subject to dynamic aspects such as changes in customer demand. Responding to them calls for increased flexibility in the location and capacity of facilities. Even when demand can be forecast, finding the optimal schedule for deploying and dynamically adjusting capacities remains a challenge. In this thesis, we focus on multi-period facility location problems that allow dynamic capacity adjustment, in particular those with complex cost structures. We study these problems from different operations research perspectives, presenting and comparing several mixed-integer linear programming (MILP) models, assessing their use in practice, and developing efficient solution algorithms. The thesis is divided into four parts. First, we present the industrial context that motivated our work: a forestry company that needs to locate camps to house forest workers. We present a MILP model that allows the construction of new camps and the expansion, relocation and temporary partial closure of existing camps. The model uses particular capacity constraints as well as a multi-level economies-of-scale cost structure. The usefulness of the model is assessed through two case studies. The second part introduces the dynamic facility location problem with generalized modular capacities. The model generalizes several dynamic facility location problems and provides tighter linear relaxation bounds than their specialized formulations. It can handle location problems in which capacity-change costs are defined for every pair of capacity levels, as is the case in the industrial problem mentioned above. It is applied to three special cases: capacity expansion and reduction, temporary facility closure, and the combination of the two. We prove dominance relationships between our formulation and the existing models for these special cases. Computational experiments on a large set of randomly generated instances with up to 100 facilities and 1000 customers show that our model obtains optimal solutions faster than the existing specialized formulations. Given the complexity of the previous models for large instances, the third part of the thesis proposes Lagrangian heuristics. Based on subgradient and bundle methods, they find good-quality solutions even for large instances with up to 250 facilities and 1000 customers. We then improve solution quality by solving a restricted MILP model that exploits information collected while solving the Lagrangian dual. Computational results show that the heuristics quickly provide good-quality solutions, even for instances where generic solvers find no feasible solution. Finally, we adapt the heuristics to solve the industrial problem. Two different relaxations are proposed and compared. Extensions of the previous concepts are presented to ensure reliable solution within a reasonable time.
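A generic formulation of a dynamic facility location problem with pairwise capacity-change costs, consistent with the description above but not the thesis's exact model, can be written as:

```latex
\begin{aligned}
\min\quad & \sum_{t}\sum_{j}\sum_{k}\sum_{l} f_{jklt}\, z_{jklt} \;+\; \sum_{t}\sum_{i}\sum_{j} c_{ijt}\, x_{ijt} \\
\text{s.t.}\quad & \sum_{j} x_{ijt} = d_{it} && \forall i,t \\
& \sum_{i} x_{ijt} \le \sum_{l} u_{l}\, y_{jlt} && \forall j,t \\
& \sum_{l} y_{jlt} = 1 && \forall j,t \\
& z_{jklt} \ge y_{j,k,t-1} + y_{jlt} - 1 && \forall j,k,l,t \\
& x_{ijt} \ge 0, \quad y_{jlt},\, z_{jklt} \in \{0,1\}
\end{aligned}
```

where $y_{jlt}=1$ if facility $j$ operates at capacity level $l$ (level $0$ meaning closed) in period $t$, $z_{jklt}=1$ if it switches from level $k$ to level $l$ between periods $t-1$ and $t$ at cost $f_{jklt}$, and $x_{ijt}$ is the demand of customer $i$ served from facility $j$ in period $t$.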
Abstract:
BACKGROUND: A packed bed bioreactor (PBBR) activated with an indigenous nitrifying bacterial consortium was developed and commercialized for rapid establishment of nitrification in brackish water and marine hatchery systems in the tropics. The present study evaluated nitrification in a PBBR integrated into a Penaeus monodon recirculating maturation system under different substrate concentrations and flow rates. RESULTS: Instant nitrification was observed after integration of the PBBR into the maturation system. TAN and NO2-N concentrations were always maintained below 0.5 mg L−1 during operation. TAN and NO2-N removal was significant (P < 0.001) in all six reactor compartments of the PBBR at initial substrate concentrations of 2, 5 and 10 mg L−1. The average volumetric TAN removal rates increased with flow rate, from 43.51 (250 L h−1) to 130.44 (2500 L h−1) g TAN m−3 day−1 (P < 0.05). FISH analysis of the biofilms after 70 days of operation gave positive results with the probes NSO 190 (β-ammonia oxidizers), NsV 443 (Nitrosospira spp.), NEU (halophilic Nitrosomonas) and Ntspa 712 (phylum Nitrospira), indicating stability of the consortium. CONCLUSION: The PBBR integrated into the P. monodon maturation system exhibited significant nitrification over 70 days of operation as well as at different substrate concentrations and flow rates. This system can easily be integrated into marine and brackish water aquaculture systems to establish instantaneous nitrification.
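The volumetric TAN removal rates quoted above correspond to the usual definition for a flow-through reactor compartment (a generic formula, assuming removal is referenced to the packed-bed volume):

```latex
R_V \;=\; \frac{Q\,\bigl(C_{\mathrm{TAN,in}} - C_{\mathrm{TAN,out}}\bigr)}{V_{\mathrm{bed}}}
\qquad \bigl[\mathrm{g\;TAN\;m^{-3}\,day^{-1}}\bigr]
```

where $Q$ is the recirculation flow rate, $C_{\mathrm{TAN,in}}$ and $C_{\mathrm{TAN,out}}$ the TAN concentrations at the reactor inlet and outlet, and $V_{\mathrm{bed}}$ the packed-bed volume.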
Abstract:
A modelling study has been undertaken to assess the likely impacts of climate change on water quality across the UK. A range of climate change scenarios has been used to generate future precipitation, evaporation and temperature time series for a range of catchments across the UK. These time series have then been used to drive the Integrated Catchment (INCA) suite of flow, water quality and ecological models to simulate flow, nitrate, ammonia, total and soluble reactive phosphorus, sediments, macrophytes and epiphytes in the Rivers Tamar, Lugg, Tame, Kennet, Tweed and Lambourn. A wide range of responses has been obtained, with impacts varying depending on river character, catchment location, flow regime, type of scenario and the time into the future. Essentially, upland river reaches will respond differently from lowland reaches, and the responses will vary depending on the water quality parameter of interest.
Abstract:
The prediction of climate variability and change requires the use of a range of simulation models. Multiple climate model simulations are needed to sample the inherent uncertainties in seasonal to centennial prediction. Because climate models are computationally expensive, there is a trade-off between complexity, spatial resolution, simulation length, and ensemble size. The methods used to assess climate impacts are examined in the context of this trade-off. An emphasis on complexity allows simulation of coupled mechanisms, such as the carbon cycle and feedbacks between agricultural land management and climate. In addition to improving skill, greater spatial resolution increases relevance to regional planning. Greater ensemble size improves the sampling of probabilities. Research from major international projects is used to show the importance of synergistic research efforts. The primary climate impact examined is crop yield, although many of the issues discussed are relevant to hydrology and health modeling. Methods used to bridge the scale gap between climate and crop models are reviewed. Recent advances include large-area crop modeling, quantification of uncertainty in crop yield, and fully integrated crop–climate modeling. The implications of trends in computer power, including supercomputers, are also discussed.
Abstract:
This paper presents the model SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes), a vertical (1-D) integrated radiative transfer and energy balance model. The model links visible to thermal infrared radiance spectra (0.4 to 50 μm), as observed above the canopy, to the fluxes of water, heat and carbon dioxide, as a function of vegetation structure and the vertical profiles of temperature. Output of the model is the spectrum of outgoing radiation in the viewing direction together with the turbulent heat fluxes, photosynthesis and chlorophyll fluorescence. A special routine is dedicated to the calculation of photosynthesis rate and chlorophyll fluorescence at the leaf level as a function of net radiation and leaf temperature. The fluorescence contributions from individual leaves are integrated over the canopy layer to calculate top-of-canopy fluorescence. The calculation of radiative transfer and the energy balance is fully integrated, allowing for feedback between leaf temperatures, leaf chlorophyll fluorescence and radiative fluxes. Leaf temperatures are calculated on the basis of energy balance closure. Model simulations were evaluated against observations reported in the literature and against data collected during field campaigns. These evaluations showed that SCOPE is able to reproduce realistic radiance spectra, directional radiance and energy balance fluxes. The model may be applied to the design of algorithms for the retrieval of evapotranspiration from optical and thermal earth observation data, to the validation of existing methods for monitoring vegetation functioning, to the interpretation of canopy fluorescence measurements, and to the study of the relationships between synoptic observations and diurnally integrated quantities. The model has been implemented in Matlab and has a modular design, allowing for great flexibility and scalability.
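The energy balance closure enforced at the leaf and canopy level is the standard surface energy budget, with leaf temperatures iterated until the budget closes:

```latex
R_n \;=\; H \;+\; \lambda E \;+\; G
```

where $R_n$ is net radiation, $H$ the sensible heat flux, $\lambda E$ the latent heat flux and $G$ the ground (storage) heat flux.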