908 results for Model Based Development
Abstract:
The aim of this technical report is to quantify alternative energy demand and supply scenarios for ten southern and eastern Mediterranean countries up to 2030. The report presents the model-based results of four alternative scenarios that are broadly in line with the MEDPRO scenario specifications on regional integration and cooperation with the EU. The report analyses the main implications of the scenarios in the following areas:
• final energy demand by sector (industry, households, services, agriculture and transport);
• the evolution of the power generation mix, the development of renewable energy sources and electricity exports to the EU;
• primary energy production and the balance of trade for hydrocarbons;
• energy-related CO2 emissions; and
• power generation costs.
Abstract:
The crisis in Russia’s financial market, which started in mid-December 2014, has exposed the real scale of the economic problems that have been growing in Russia for several years. Over the course of the last year, Russia’s basic macroeconomic indicators deteriorated considerably, the confidence of its citizens in the state and in institutions in charge of economic stability declined, the government and business elites became increasingly dissatisfied with the policy direction adopted by the Kremlin, and fighting started over the shrinking resources. According to forecasts obtained from both governmental and expert communities, Russia will fall into recession in 2015. The present situation is the result of the simultaneous occurrence of three unfavourable trends: the fact that the Russian economy’s resource-based development model has reached the limits of its potential due to structural weaknesses, the dramatic decline in oil prices in the second half of 2014, and the impact of Western economic sanctions. Given the inefficiency of existing systemic mechanisms, in the coming years the Russian leadership will likely resort to ad hoc solutions such as switching to a more interventionist “manual override” mode in governing the state. In the short term, this will allow them to neutralise the most urgent problems, although an effective development policy will be impossible without a fundamental change of the political and economic system in Russia.
Abstract:
Prior research on citizen support for European integration does not consider how individuals’ evaluations of European nationalities are associated with support. This paper fills this gap by developing a political cohesion model based on social identity theory. I claim that the probability of supporting integration increases with greater levels of trust in fellow Europeans, which is assumed to reflect positive images of them. Trust in eastern European Union nationalities has the highest impact on the probability of support, followed by trust in southern nationalities and then northern nationalities, owing to the eastern and southern nationalities’ relatively lower economic development. Controlling for various factors, an ordered logistic regression analysis of the European Election Study (2004) data supports these claims.
Abstract:
Since Vladimir Putin returned to the Kremlin as President in May 2012, the Russian system of power has become increasingly authoritarian, and has evolved towards a model of extremely personalised rule that derives its legitimacy from aggressive decisions in internal and foreign policy, escalates the use of force, and interferes increasingly assertively in the spheres of politics, history, ideology or even public morals. Putin’s power now rests on charismatic legitimacy to a much greater extent than it did during his first two presidential terms; currently the President is presented not only as an effective leader, but also as the sole guarantor of Russia’s stability and integrity. After 15 years of Putin’s rule, Russia’s economic model based on revenue from energy resources has exhausted its potential, and the country has no new model that could ensure continued growth for the economy. The Putinist system of power is starting to show symptoms of agony – it has been unable to generate new development projects, and has been compensating for its ongoing degradation by escalating repression and the use of force. However, this is not equivalent to its imminent collapse.
Abstract:
In a previous paper, Hoornaert et al. (Powder Technol. 96 (1998) 116–128) presented data from granulation experiments performed in a 50 L Lodige high shear mixer. In this study, that same data set was simulated with a population balance model. Based on an analysis of the experimental data, the granulation process was divided into three separate stages: nucleation, induction, and coalescence growth. These three stages were then simulated separately, with promising results: it was possible to derive a kernel that fits both the induction and the coalescence growth stages. Modeling the nucleation stage proved to be more challenging due to the complex mechanism of nucleus formation. From this work some recommendations are made for the improvement of this type of model.
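As a rough illustration of what such a model looks like in discretised form, the sketch below integrates a Smoluchowski-type coagulation (population balance) equation for the coalescence-growth stage. The constant, size-independent kernel and all numbers are invented for this example; they are not the kernel or conditions fitted in the paper.

```python
import numpy as np

# Minimal discretised population balance (Smoluchowski coagulation) for a
# coalescence-growth stage. The size-independent kernel beta0 is illustrative
# only; it is not the kernel derived in the paper.
n_sizes = 20                 # granule size classes 1..20 (in monomer units)
beta0, dt = 0.01, 0.1        # coalescence kernel and time step (arbitrary)
n = np.zeros(n_sizes)
n[0] = 100.0                 # start from monodisperse nuclei

for _ in range(50):
    dn = np.zeros(n_sizes)
    for i in range(n_sizes):
        for j in range(n_sizes):
            rate = beta0 * n[i] * n[j] * dt
            dn[i] -= rate                    # class i loses a particle
            if i + j + 1 < n_sizes:
                dn[i + j + 1] += 0.5 * rate  # aggregate of sizes (i+1)+(j+1)
            # aggregates larger than the grid are discarded in this sketch
    n += dn

total = n.sum()                                            # particle count falls
mean_size = (np.arange(1, n_sizes + 1) * n).sum() / total  # mean size grows
```

Under coalescence the particle count decreases while the mean granule size grows, which is the qualitative behaviour a fitted kernel must reproduce.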
Abstract:
We examine the event statistics obtained from two differing simplified models for earthquake faults. The first model is a reproduction of the Block-Slider model of Carlson et al. (1991), a model often employed in seismicity studies. The second model is an elastodynamic fault model based upon the Lattice Solid Model (LSM) of Mora and Place (1994). We performed simulations in which the fault length was varied in each model and generated synthetic catalogs of event sizes and times. From these catalogs, we constructed interval event size distributions and inter-event time distributions. The larger, localised events in the Block-Slider model displayed the same scaling behaviour as events in the LSM; however, the distribution of inter-event times was markedly different. The analysis of both event size and inter-event time statistics is an effective method for comparative studies of differing simplified models for earthquake faults.
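The catalog statistics described above can be sketched with a toy example. The synthetic catalog below (power-law event sizes, exponential waiting times) is invented and merely stands in for the output of either fault model; the binning helper shows how the two distributions are constructed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic catalog: power-law event sizes (Gutenberg-Richter-like)
# and exponential inter-event times, stand-ins for a fault model's output.
sizes = rng.pareto(a=1.0, size=5000) + 1.0           # event "sizes"
inter_times = rng.exponential(scale=2.0, size=5000)  # waiting times

def interval_distribution(samples, n_bins=30):
    """Binned, normalised distribution of a positive-valued sample."""
    counts, edges = np.histogram(samples, bins=n_bins, density=True)
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, counts

size_x, size_p = interval_distribution(np.log10(sizes))
time_x, time_p = interval_distribution(inter_times)
```

Once both models' catalogs are reduced to these two distributions, they can be compared directly (for example with a Kolmogorov-Smirnov distance), which is the comparative method the abstract advocates.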
Abstract:
Community-based coastal resource management has been widely applied within the Philippines. However, small-scale community-based reserves are often inefficient owing to management inadequacies arising because of a lack of local support or enforcement or poor design. Because there are many potential pitfalls during the establishment of even small community-based reserves, it is important for coastal managers, communities, and facilitating institutions to have access to a summary of the key factors for success. Reviewing relevant literature, we present a framework of lessons learned during the establishment of protected areas, mainly in the Philippines. The framework contains summary guidance on the importance of (1) an island location, (2) small community population size, (3) minimal effect of land-based development, (4) application of a bottom-up approach, (5) an external facilitating institution, (6) acquisition of title, (7) use of a scientific information database, (8) stakeholder involvement, (9) the establishment of legislation, (10) community empowerment, (11) alternative livelihood schemes, (12) surveillance, (13) tangible management results, (14) continued involvement of external groups after reserve establishment, and (15) small-scale project expansion. These framework components guided the establishment of a community-based protected area at Danjugan Island, Negros Occidental, Philippines. This case study showed that the framework was a useful guide that led to establishing and implementing a community-based marine reserve. Evaluation of the reserve using standard criteria developed for the Philippines shows that the Danjugan Island protected area can be considered successful and sustainable. At Danjugan Island, all of the lessons synthesized in the framework were important and should be considered elsewhere, even for relatively small projects. 
As shown in previous projects in the Philippines, local involvement and stewardship of the protected area appeared particularly important for its successful implementation. The involvement of external organizations also seemed to have a key role in the success of the Danjugan Island project by guiding local decision-makers in the sociobiological principles of establishing protected areas. However, the relative importance of each component of the framework will vary between coastal management initiatives both within the Philippines and across the wider Asian region.
Abstract:
Based on clues from epidemiology, low prenatal vitamin D has been proposed as a candidate risk factor for schizophrenia. Recent animal experiments have demonstrated that transient prenatal vitamin D deficiency is associated with persistent alterations in brain morphology and neurotrophin expression. In order to explore the utility of the vitamin D animal model of schizophrenia, we examined different types of learning and memory in adult rats exposed to transient prenatal vitamin D deficiency. Compared to control animals, the prenatally depleted animals had a significant impairment of latent inhibition, a feature often associated with schizophrenia. In addition, the depleted group was (a) significantly impaired on hole board habituation and (b) significantly better at maintaining previously learnt rules of brightness discrimination in a Y-chamber. In contrast, the prenatally depleted animals showed no impairment on the spatial learning task in the radial maze, nor on two-way active avoidance learning in the shuttle-box. The results indicate that transient prenatal vitamin D depletion in the rat is associated with subtle and discrete alterations in learning and memory. The behavioural phenotype associated with this animal model may provide insights into the neurobiological correlates of the cognitive impairments of schizophrenia.
Abstract:
The aim of this report is to describe the use of WinBUGS for two datasets that arise from typical population pharmacokinetic studies. The first dataset relates to gentamicin concentration-time data that arose as part of routine clinical care of 55 neonates. The second dataset incorporated data from 96 patients receiving enoxaparin. Both datasets were originally analyzed by using NONMEM. In the first instance, although NONMEM provided reasonable estimates of the fixed effects parameters it was unable to provide satisfactory estimates of the between-subject variance. In the second instance, the use of NONMEM resulted in the development of a successful model, albeit with limited available information on the between-subject variability of the pharmacokinetic parameters. WinBUGS was used to develop a model for both of these datasets. Model comparison for the enoxaparin dataset was performed by using the posterior distribution of the log-likelihood and a posterior predictive check. The use of WinBUGS supported the same structural models tried in NONMEM. For the gentamicin dataset a one-compartment model with intravenous infusion was developed, and the population parameters including the full between-subject variance-covariance matrix were available. Analysis of the enoxaparin dataset supported a two-compartment model as superior to the one-compartment model, based on the posterior predictive check. Again, the full between-subject variance-covariance matrix parameters were available. Fully Bayesian approaches using MCMC methods, via WinBUGS, can offer added value for analysis of population pharmacokinetic data.
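To give a flavour of the fully Bayesian computation, the sketch below fits a one-compartment model to invented single-subject data with a hand-rolled random-walk Metropolis sampler. An IV bolus is used for brevity (the report's gentamicin model is an IV infusion), the data and priors are assumptions of this example, and WinBUGS automates this kind of MCMC for the full hierarchical, multi-subject case.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical single-subject concentration-time data (IV bolus for brevity).
dose = 100.0
t = np.array([1.0, 2.0, 4.0, 8.0, 12.0])
CL_true, V_true = 5.0, 20.0
conc = (dose / V_true * np.exp(-CL_true / V_true * t)
        * np.exp(0.1 * rng.standard_normal(t.size)))

def log_post(theta):
    """Log-posterior: lognormal residual error (sd 0.1), vague normal priors
    on log(CL) and log(V)."""
    CL, V = np.exp(theta)
    pred = dose / V * np.exp(-CL / V * t)
    resid = np.log(conc) - np.log(pred)
    return -0.5 * np.sum(resid ** 2) / 0.1 ** 2 - 0.5 * np.sum(theta ** 2) / 10.0

# Random-walk Metropolis, a member of the same MCMC family WinBUGS relies on.
theta = np.log([3.0, 10.0])
lp = log_post(theta)
samples = []
for _ in range(5000):
    prop = theta + 0.05 * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)

CL_post, V_post = np.exp(np.mean(samples[2500:], axis=0))  # posterior means
```

The posterior draws after burn-in recover clearance and volume near the values used to simulate the data; in the population setting the same machinery also yields the full between-subject variance-covariance matrix the abstract highlights.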
Abstract:
Motivation: The clustering of gene profiles across some experimental conditions of interest contributes significantly to the elucidation of unknown gene function, the validation of gene discoveries and the interpretation of biological processes. However, this clustering problem is not straightforward as the profiles of the genes are not all independently distributed and the expression levels may have been obtained from an experimental design involving replicated arrays. Ignoring the dependence between the gene profiles and the structure of the replicated data can result in important sources of variability in the experiments being overlooked in the analysis, with the consequent possibility of misleading inferences being made. We propose a random-effects model that provides a unified approach to the clustering of genes with correlated expression levels measured in a wide variety of experimental situations. Our model is an extension of the normal mixture model to account for the correlations between the gene profiles and to enable covariate information to be incorporated into the clustering process. Hence the model is applicable to longitudinal studies with or without replication, for example, time-course experiments by using time as a covariate, and to cross-sectional experiments by using categorical covariates to represent the different experimental classes. Results: We show that our random-effects model can be fitted by maximum likelihood via the EM algorithm for which the E(expectation) and M(maximization) steps can be implemented in closed form. Hence our model can be fitted deterministically without the need for time-consuming Monte Carlo approximations. The effectiveness of our model-based procedure for the clustering of correlated gene profiles is demonstrated on three real datasets, representing typical microarray experimental designs, covering time-course, repeated-measurement and cross-sectional data. 
In these examples, relevant clusters of the genes are obtained, which are supported by existing gene-function annotation. A synthetic dataset is also considered.
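The closed-form EM iteration the abstract emphasises can be sketched in one dimension. The two-component normal mixture and the data below are invented for illustration; the paper's random-effects model additionally handles correlated multivariate gene profiles and covariates.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative data: two "clusters" of expression values (one-dimensional
# here for brevity).
x = np.concatenate([rng.normal(-2.0, 1.0, 150), rng.normal(3.0, 1.0, 150)])

# EM for a two-component normal mixture: both the E- and M-steps are in
# closed form, so no Monte Carlo approximation is needed.
w = np.array([0.5, 0.5])           # mixing weights
mu = np.array([-1.0, 1.0])         # component means (initial guesses)
sigma = np.array([1.0, 1.0])       # component standard deviations

for _ in range(100):
    # E-step: posterior probability (responsibility) of each component.
    dens = np.stack([
        w[k] / (sigma[k] * np.sqrt(2.0 * np.pi))
        * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
        for k in range(2)
    ])
    resp = dens / dens.sum(axis=0)
    # M-step: weighted updates of weights, means and variances.
    n_k = resp.sum(axis=1)
    w = n_k / x.size
    mu = (resp * x).sum(axis=1) / n_k
    sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / n_k)
```

After convergence each observation's cluster assignment is simply the component with the larger responsibility, which is how model-based clustering labels the gene profiles.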
Abstract:
Ecologists and economists both use models to help develop strategies for biodiversity management. The practical use of disciplinary models, however, can be limited because ecological models tend not to address the socioeconomic dimension of biodiversity management, whereas economic models tend to neglect the ecological dimension. Given these shortcomings of disciplinary models, there is a necessity to combine ecological and economic knowledge into ecological-economic models. It is insufficient if scientists work separately in their own disciplines and combine their knowledge only when it comes to formulating management recommendations. Such an approach does not capture feedback loops between the ecological and the socioeconomic systems. Furthermore, each discipline poses the management problem in its own way and comes up with its own most appropriate solution. These disciplinary solutions, however, are likely to be so different that a combined solution considering aspects of both disciplines cannot be found. Preconditions for a successful model-based integration of ecology and economics include (1) an in-depth knowledge of the two disciplines, (2) the adequate identification and framing of the problem to be investigated, and (3) a common understanding between economists and ecologists of modeling and scale. To further advance ecological-economic modeling the development of common benchmarks, quality controls, and refereeing standards for ecological-economic models is desirable.
Abstract:
The ‘leading coordinate’ approach to computing an approximate reaction pathway, with subsequent determination of the true minimum energy profile, is applied to a two-proton chain transfer model based on the chromophore and its surrounding moieties within the green fluorescent protein (GFP). Using an ab initio quantum chemical method, a number of different relaxed energy profiles are found for several plausible guesses at leading coordinates. The results obtained for different trial leading coordinates are rationalized through the calculation of a two-dimensional relaxed potential energy surface (PES) for the system. Analysis of the 2-D relaxed PES reveals that two of the trial pathways are entirely spurious, while two others contain useful information and can be used to furnish starting points for successful saddle-point searches. Implications for selection of trial leading coordinates in this class of proton chain transfer reactions are discussed, and a simple diagnostic function is proposed for revealing whether or not a relaxed pathway based on a trial leading coordinate is likely to furnish useful information.
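The idea of a relaxed profile along a trial leading coordinate can be sketched numerically: for each value of the leading coordinate x, the remaining coordinate y is relaxed (minimised). The double-well potential below is invented for illustration; it is not the GFP proton-transfer surface computed in the paper.

```python
import numpy as np

# Model 2-D surface (an assumption of this sketch): a double well along x,
# with the y minimum shifting linearly with x.
def V(x, y):
    return (x ** 2 - 1.0) ** 2 + 2.0 * (y - 0.3 * x) ** 2

xs = np.linspace(-1.5, 1.5, 61)    # trial leading coordinate
ys = np.linspace(-1.0, 1.0, 201)   # coordinate relaxed at each x

# Relaxed energy profile: minimise over y on a grid for each x.
profile = np.array([min(V(x, y) for y in ys) for x in xs])

# For this surface the profile has minima near x = +/-1 and a barrier near
# x = 0, the shape expected of a useful leading coordinate.
barrier = profile[np.argmin(np.abs(xs))] - profile.min()
```

A trial coordinate whose relaxed profile shows a clean barrier between two minima, as here, can seed a saddle-point search; a spurious coordinate typically yields a discontinuous or monotone profile instead.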
Abstract:
Many developing south-east Asian governments are not capturing full rent from domestic forest logging operations. Such rent losses are commonly related to institutional failures, where informal institutions tend to dominate the control of forestry activity in spite of weakly enforced regulations. Our model is an attempt to add a new dimension to thinking about deforestation. We present a simple conceptual model, based on individual decisions rather than social or forest planning, which includes the human dynamics of participation in informal activity and the relatively slower ecological dynamics of changes in forest resources. We demonstrate how incumbent informal logging operations can be persistent, and that any spending aimed at replacing the informal institutions can only be successful if it pushes institutional settings past some threshold.
Abstract:
Reducing fossil fuel consumption and developing energy-saving technologies are matters of central importance for both industry and research, owing to the drastic effects that anthropogenic pollutant emissions are having on the environment. While a growing number of norms and regulations are being issued to address these problems, the need to develop low-emission technologies is driving research across numerous industrial sectors. Although the deployment of renewable energy sources is seen as the most promising long-term solution, an effective and complete integration of these technologies is currently impractical, both because of technical constraints and because of the sheer share of energy production, currently met by fossil sources, that alternative technologies would have to cover. Optimising energy production and management, on the other hand, together with the development of technologies for reducing energy consumption, represents an adequate solution to the problem that can also be deployed over shorter time horizons. The aim of this thesis is to investigate, develop and apply a set of numerical tools for optimising the design and management of energy processes, to be used to reduce fuel consumption and improve energy efficiency. The methodology developed relies on a numerical system-modelling approach, which exploits the predictive capabilities deriving from a mathematical representation of the processes to develop optimisation strategies for them under realistic operating conditions.
In developing these procedures, particular emphasis is placed on the need to derive correct management strategies that account for the dynamics of the plants analysed, so as to achieve the best performance during actual operation. In the thesis the energy optimisation problem is addressed with reference to three different technological applications. The first is a multi-source plant serving the energy demand of a commercial building. Since this system uses several technologies to produce the thermal and electrical energy required by the users, the correct load-splitting strategy must be identified in order to guarantee the plant's maximum energy efficiency. Based on a simplified model of the plant, the problem is solved by applying a deterministic Dynamic Programming algorithm, and the results are compared with those of a simpler rule-based strategy, thereby demonstrating the advantages of adopting an optimal control strategy. In the second application, the design of a hybrid solution for energy recovery from a hydraulic excavator is investigated. Since several technological layouts can be conceived to implement this solution, and the introduction of additional components requires correct sizing, a methodology is needed that can assess the maximum performance attainable by each alternative. The comparison between the different layouts is therefore carried out on the basis of the machine's energy performance over a standardised digging cycle, estimated with the aid of a detailed model of the plant.
Since the addition of energy-recovery devices introduces additional degrees of freedom into the system, the optimal control strategy for these devices also had to be determined in order to assess the maximum performance attainable by each layout. This problem is again solved by means of a Dynamic Programming algorithm, which exploits a simplified model of the system devised for the purpose. Once the optimal performance of each design solution has been determined, a fair comparison between the alternatives is possible. In the third and final application, an organic Rankine cycle (ORC) plant for recovering waste heat from the exhaust gases of passenger cars is analysed. Although ORC plants can potentially yield substantial fuel savings for a vehicle, their correct operation requires complex control strategies able to cope with the variability of the process heat source; moreover, while fuel savings are maximised, the system must be kept within safe operating conditions. To address this problem, a robust and efficient plant model is built on the Moving Boundary Methodology to simulate the phase-change dynamics of the organic fluid and estimate the plant's performance. This model is then used to design a model predictive controller (MPC) that estimates the optimal control parameters for managing the system during transient operation. To solve the corresponding non-linear dynamic optimisation problem, an algorithm based on Particle Swarm Optimisation is developed.
The results obtained with this controller are compared with those of a classical proportional-integral (PI) controller, once again showing the energy advantages of adopting an optimal control strategy.
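The deterministic Dynamic Programming approach used for the load-splitting problem can be sketched as a backward recursion over a discretised storage state. The plant (one boiler plus a small thermal store), the demand profile and the fuel-cost curve below are all invented for illustration.

```python
# Minimal deterministic Dynamic Programming sketch for load splitting.
demand = [3, 5, 2, 6]       # heat demand per time step (invented)
levels = range(0, 5)        # discretised storage state: 0..4 energy units
outputs = range(0, 7)       # admissible boiler output per step: 0..6 units

def fuel(q):
    """Fuel cost of output q: fixed start-up cost plus linear marginal cost."""
    return 0.0 if q == 0 else 1.0 + 0.5 * q

cost = {s: 0.0 for s in levels}        # terminal cost-to-go
for d in reversed(demand):             # backward recursion over the horizon
    new_cost = {}
    for s in levels:
        best = float("inf")
        for q in outputs:
            s_next = s + q - d         # storage energy balance
            if s_next in levels:       # keep the store within its bounds
                best = min(best, fuel(q) + cost[s_next])
        new_cost[s] = best
    cost = new_cost

optimal = cost[2]   # minimum total fuel cost starting from a half-full store
```

Because the fixed start-up cost rewards switching the boiler off, the optimal policy uses the store to concentrate production in fewer, larger runs; a rule-based strategy that simply tracks demand pays the start-up cost every step, which is the kind of gap the thesis's comparison quantifies.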