852 results for LARGE-SCALE SYNTHESIS
Abstract:
People go through life making all kinds of decisions, and some of these decisions affect their demand for transportation, for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time-dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is useful not only for modeling time-dependent choices but also for simplifying the modeling of large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models, based on the MEV and mixed logit models, that allow paths to be correlated. The resulting route choice models are expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme concerns the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of their correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into common optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
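A representative form of the dynamic programming step behind such models (the notation here is assumed for orientation, not quoted from the thesis): with i.i.d. extreme value error terms of scale $\mu$, the expected maximum utility $V(k)$ at state $k$ satisfies a logsum recursion and the choice probabilities take a logit form,

$$V(k) = \mu \ln \sum_{a \in A(k)} \exp\!\left(\frac{v(a \mid k) + V(k_a)}{\mu}\right), \qquad P(a \mid k) = \frac{\exp\big((v(a \mid k) + V(k_a))/\mu\big)}{\sum_{a' \in A(k)} \exp\big((v(a' \mid k) + V(k_{a'}))/\mu\big)},$$

where $A(k)$ is the set of feasible actions at state $k$, $v(a \mid k)$ the deterministic utility of action $a$, and $k_a$ the successor state. Estimating such a model requires solving this fixed-point system every time choice probabilities are evaluated, which is why the estimation cost is the central computational issue addressed in the thesis.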
Abstract:
We review the use of neural field models for modelling the brain at the large scales necessary for interpreting EEG, fMRI, MEG and optical imaging data. Although limited to coarse-grained or mean-field activity, neural field models provide a framework for unifying data from different imaging modalities. Starting with a description of neural mass models, we build up to spatially extended cortical models of layered two-dimensional sheets with long-range axonal connections mediating synaptic interactions. Reformulations of the fundamental non-local mathematical model in terms of more familiar local differential (brain wave) equations are described. Techniques for the analysis of such models, including how to determine the onset of spatio-temporal pattern-forming instabilities, are reviewed. Extensions of the basic formalism to treat refractoriness, adaptive feedback and inhomogeneous connectivity are described, along with open challenges for the development of multi-scale models that can integrate macroscopic models at large spatial scales with models at the microscopic scale.
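A canonical (Amari-type) statement of the non-local model referred to above, given here for orientation rather than as the specific equations of the review:

$$\frac{\partial u(\mathbf{x},t)}{\partial t} = -u(\mathbf{x},t) + \int_{\Omega} w(\mathbf{x}-\mathbf{y})\, f\big(u(\mathbf{y},t)\big)\, \mathrm{d}\mathbf{y},$$

where $u(\mathbf{x},t)$ is the coarse-grained activity at cortical position $\mathbf{x}$, $w$ is the connectivity kernel describing long-range axonal connections, and $f$ is a firing-rate function. For particular choices of $w$, this integro-differential equation can be recast as a local (brain wave) partial differential equation, which is the reformulation mentioned above.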
Abstract:
Lesson plan published in Critical Pedagogy Handbook, vol. 2
Abstract:
Understanding the factors that affect seagrass meadows across their entire distribution range is challenging yet important for their conservation. We model the environmental niche of Cymodocea nodosa using a combination of environmental variables and landscape metrics to examine the factors defining its distribution and to find suitable habitats for the species. The most relevant environmental variables defining the distribution of C. nodosa were sea surface temperature (SST) and salinity. We found suitable habitats at SST from 5.8 °C to 26.4 °C and salinity ranging from 17.5 to 39.3. Optimal values of mean winter wave height ranged between 1.2 m and 1.5 m, while waves higher than 2.5 m seemed to limit the presence of the species. The influence of nutrients and pH, despite carrying weight in the models, was not so clear in terms of ranges confining the distribution of the species. Landscape metrics able to capture variation in the coastline significantly enhanced the accuracy of the models, despite the limitations imposed by the scale of the study. By contrasting predictive approaches, we identified the variables affecting the distributional areas that seem unsuitable for C. nodosa, as well as suitable habitats not occupied by the species. These findings are encouraging for the use of this approach in future studies on climate-related marine range shifts and in meadow restoration projects for these fragile ecosystems.
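As a rough illustration only, the reported ranges can be turned into a simple screening rule; the function below uses the thresholds quoted above and is not the authors' fitted niche model:

```python
def potentially_suitable(sst_c, salinity, mean_winter_wave_m):
    """Screen a site against the suitability ranges reported for C. nodosa.

    Suitable habitats were found at SST between 5.8 and 26.4 degC and
    salinity between 17.5 and 39.3; waves higher than 2.5 m appeared to
    limit the presence of the species.
    """
    return (
        5.8 <= sst_c <= 26.4
        and 17.5 <= salinity <= 39.3
        and mean_winter_wave_m <= 2.5
    )

# Example: a warm, saline, relatively sheltered site
print(potentially_suitable(sst_c=20.0, salinity=36.0, mean_winter_wave_m=1.3))  # True
```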
Abstract:
The objective of this work was to evaluate the large-scale laboratory cultivation of Ankistrodesmus gracilis and Diaphanosoma birgei through the study of the species' biology, biochemical composition and operational production cost. A. gracilis showed exponential growth up to the sixth day, at around 144 × 10⁴ cells mL⁻¹. It then underwent a sharp decline, falling to 90 × 10⁴ cells mL⁻¹ (eighth day). From the eleventh day onwards, the algal cells tended to grow again, reaching a maximum of 135 × 10⁴ cells mL⁻¹ on the 17th day. In the cultivation of D. birgei, a first growth peak was observed on the ninth day, with 140 × 10² individuals L⁻¹, increasing again from the twelfth day onwards. The chlorophycean alga A. gracilis and the zooplankton D. birgei contain approximately 50 and 70% protein (dry weight), respectively, with carbohydrate content above 5%. Electricity and labor were the most expensive items and, according to the data obtained, temperature, nutrients, light availability and culture management were the determining factors for productivity. The results indicate that NPK medium (20-5-20) can be used directly as an alternative for large-scale cultivation, considering its low production cost, promoting adequate growth and nutritional value for A. gracilis and D. birgei.
Abstract:
Performance and economic indicators of a large-scale fish farm producing round fish, located in Mato Grosso State, Brazil, were evaluated. The 130.8 ha of water surface area was distributed over 30 ponds. Average total production costs and the following economic indicators were calculated: gross income (GI), gross margin (GM), gross margin index (GMI), profitability index (PI) and profit (P), for the farm as a whole and for ten ponds individually. Production performance indicators were also obtained, such as production cycle (PC), apparent feed conversion (FC), average biomass storage (ABS), survival index (SI) and final average weight (FAW). The average costs to produce an average of 2,971 kg ha⁻¹ per year were R$ 2.43, R$ 0.72 and R$ 3.15 per kg as average variable, fixed and total costs, respectively. Gross margin and profit per year per hectare of water surface were R$ 2,316.91 and R$ 180.98, respectively. The individual evaluation of the ponds showed that the best pond performance was obtained with PI 38%, FC 1.7, ABS 0.980 kg m⁻², SI 56% and FAW 1.873 kg, with a PC of 12.3 months. The worst PI was obtained for the pond that displayed losses of 138%, FC 2.6, ABS 0.110 kg m⁻², SI 16% and FAW 1.811 kg. Nevertheless, large-scale production of round fish in farms is economically feasible. The studied farm displays favorable conditions to improve its performance and economic indicators, but it is necessary to reproduce the breeding techniques and performance indicators achieved in a few ponds across the entire farm.
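For clarity, the economic indicators can be computed as follows; these are standard definitions assumed for illustration and are not reproduced from the study:

```python
def economic_indicators(gross_income, variable_cost, fixed_cost):
    """Compute aquaculture economic indicators from totals for one period.

    All arguments share the same unit, e.g. R$ per hectare per year.
    Definitions assumed here: GM = GI - variable cost; GMI = GM / GI;
    P = GI - total cost; PI = P / GI.
    """
    total_cost = variable_cost + fixed_cost
    gross_margin = gross_income - variable_cost          # GM
    gross_margin_index = gross_margin / gross_income     # GMI
    profit = gross_income - total_cost                   # P
    profitability_index = profit / gross_income          # PI
    return gross_margin, gross_margin_index, profit, profitability_index
```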
Abstract:
To tackle challenges in circuit-level and system-level VLSI and embedded system design, this dissertation proposes several novel algorithms to explore efficient solutions. At the circuit level, a new reliability-driven minimum-cost Steiner routing and layer assignment scheme is proposed, along with the first transceiver insertion algorithmic framework for optical interconnect. At the system level, a reliability-driven task scheduling scheme for multiprocessor real-time embedded systems is proposed, which optimizes system energy consumption under stochastic fault occurrences. Embedded system design is also widely used in the smart-home area for improving health, wellbeing and quality of life. The proposed scheduling scheme for multiprocessor embedded systems is therefore extended to handle energy consumption scheduling for smart homes. The extended scheme can schedule the operation of household appliances to minimize a customer's monetary expense under a time-varying pricing model.
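A minimal sketch of the idea behind the smart-home extension, assuming hourly time-of-use prices and a fully interruptible appliance (the dissertation's scheme additionally handles operational constraints and stochastic effects):

```python
def schedule_appliance(hourly_prices, hours_needed):
    """Greedy sketch: run an interruptible appliance in the cheapest hours.

    hourly_prices: energy price for each hour of the day (e.g. R$/kWh).
    hours_needed: number of one-hour slots the appliance must run.
    Returns the chosen hours and the summed price over those hours.
    """
    ranked = sorted(range(len(hourly_prices)), key=lambda h: hourly_prices[h])
    chosen = sorted(ranked[:hours_needed])
    total_price = sum(hourly_prices[h] for h in chosen)
    return chosen, total_price

# Example with a hypothetical day-ahead tariff (24 hourly prices)
tariff = [0.30] * 7 + [0.55] * 11 + [0.80] * 4 + [0.40] * 2
print(schedule_appliance(tariff, hours_needed=3))  # -> hours [0, 1, 2], ~0.90
```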
Abstract:
This study presents a computational parametric analysis of DME steam reforming in a large-scale Circulating Fluidized Bed (CFB) reactor. The Computational Fluid Dynamic (CFD) model used, which is based on an Eulerian-Eulerian dispersed flow approach, was developed and validated in Part I of this study [1]. The effects of the reactor inlet configuration, gas residence time, inlet temperature and steam-to-DME ratio on the overall reactor performance and products have all been investigated. The results show that the use of a double-sided solid feeding system brings a remarkable improvement in flow uniformity, but with limited effect on the reactions and products. Temperature has been found to play a dominant role in increasing the DME conversion and the hydrogen yield. According to the parametric analysis, it is recommended to run the CFB reactor at around 300 °C inlet temperature, a steam-to-DME molar ratio of 5.5, a gas residence time of 4 s and a space velocity of 37,104 ml gcat⁻¹ h⁻¹. At these conditions, the DME conversion and the hydrogen molar concentration in the product gas were both found to be around 80%.
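For reference, the commonly cited overall stoichiometry of DME steam reforming (DME hydrolysis followed by methanol steam reforming), stated here for orientation rather than quoted from the paper, is

$$\mathrm{CH_3OCH_3} + 3\,\mathrm{H_2O} \;\rightarrow\; 6\,\mathrm{H_2} + 2\,\mathrm{CO_2},$$

so the recommended steam-to-DME molar ratio of 5.5 is well in excess of the stoichiometric value of 3.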
Abstract:
Some color centers in diamond can serve as quantum bits that can be manipulated with microwave pulses and read out with a laser, even at room temperature. However, the photon collection efficiency of bulk diamond is greatly reduced by refraction at the diamond/air interface. To address this issue, we fabricated arrays of diamond nanostructures, differing in both diameter and top-end shape, with HSQ and Cr as the etching mask materials, aiming toward large-scale fabrication of single-photon sources with enhanced collection efficiency made of nitrogen-vacancy (NV) embedded diamond. With a mixture of O2 and CHF3 gas plasma, diamond pillars with diameters down to 45 nm were obtained. The top-end shape evolution has been represented with a simple model. Tests of the size-dependent single-photon properties confirmed a collection-efficiency enhancement of more than tenfold, and a mild decrease of the decoherence time with decreasing pillar diameter was observed, as expected. These results provide useful information for future applications of nanostructured diamond as a single-photon source.
Abstract:
The spread of antibiotic resistance among bacteria responsible for nosocomial and community-acquired infections calls for novel therapeutic or prophylactic targets and for innovative pathogen-specific antibacterial compounds. Major challenges are posed by opportunistic pathogens belonging to the low-GC% gram-positive bacteria. Among those, Enterococcus faecalis is a leading cause of hospital-acquired infections associated with life-threatening complications and increased hospital costs. To better understand the molecular properties of enterococci that may be required for virulence, and that may explain the emergence of these bacteria in nosocomial infections, we performed the first large-scale functional analysis of E. faecalis V583, the first vancomycin-resistant isolate from a human bloodstream infection. E. faecalis V583 belongs to the high-risk clonal complex 2 group, which comprises mostly isolates derived from hospital infections worldwide. We conducted broad-range screenings of candidate genes likely involved in host adaptation (e.g., colonization and/or virulence). For this purpose, a library was constructed of targeted insertion mutations in 177 genes encoding putative surface or stress-response factors. Individual mutants were subsequently tested for their i) resistance to oxidative stress, ii) antibiotic resistance, iii) resistance to opsonophagocytosis, iv) adherence to human colon carcinoma Caco-2 epithelial cells and v) virulence in a surrogate insect model. Our results identified a number of factors that are involved in the interaction between enterococci and their host environments. Their predicted functions highlight the importance of cell envelope glycopolymers in E. faecalis host adaptation. This study provides a valuable genetic database for understanding the steps leading E. faecalis to opportunistic virulence.
Abstract:
Strong convective events can produce extreme precipitation, hail, lightning or gusts, potentially inducing severe socio-economic impacts. These events have a relatively small spatial extent and, in most cases, a short lifetime. In this study, a model is developed for estimating convective extreme events from large-scale conditions. It is shown that strong convective events can be characterized by a Weibull distribution of radar-based rainfall with a low shape and a high scale parameter value. A radius of 90 km around a station reporting a convective situation turned out to be suitable. A methodology is developed to estimate the Weibull parameters, and thus the occurrence probability of convective events, from large-scale atmospheric instability and enhanced near-surface humidity, which are usually found on a larger scale than the convective event itself. Here, the probability of the occurrence of extreme convective events is estimated from the KO index, which indicates stability, and the relative humidity at 1000 hPa. Both variables are computed from the ERA-Interim reanalysis. In a first version of the methodology, these two variables are used to estimate the spatial rainfall distribution and the occurrence of a convective event. The developed method shows significant skill in estimating the occurrence of convective events as observed at synoptic stations, in lightning measurements, and in severe weather reports. In order to take frontal influences into account, a scheme for the detection of atmospheric fronts is implemented. While generally higher instability is found in the vicinity of fronts, the skill of this approach is largely unchanged. Additional improvements were achieved by a bias correction and the use of ERA-Interim precipitation. The resulting estimation method is applied to the ERA-Interim period (1979-2014) to establish a ranking of estimated convective extreme events. Two strong estimated events that reveal a frontal influence are analysed in detail. As a second application, the method is applied to GCM-based decadal predictions in the period 1979-2014, which were initialized every year. It is shown that decadal predictive skill for convective event frequencies over Germany exists for the first 3-4 years after initialization.
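A minimal sketch of the distribution fit described above, assuming radar-based rainfall values within the 90 km radius are available as a NumPy array; SciPy's Weibull parameterisation is used here, and the study's exact fitting procedure may differ:

```python
import numpy as np
from scipy import stats

def weibull_shape_scale(rainfall_mm):
    """Fit a two-parameter Weibull distribution to the wet (positive) values.

    Returns (shape, scale). In the abstract, strong convective events are
    characterised by a low shape and a high scale parameter value.
    """
    wet = rainfall_mm[rainfall_mm > 0]
    shape, _, scale = stats.weibull_min.fit(wet, floc=0)  # location fixed at 0
    return shape, scale

def exceedance_probability(threshold_mm, shape, scale):
    """P(rainfall > threshold_mm) under the fitted Weibull distribution."""
    return float(stats.weibull_min.sf(threshold_mm, shape, loc=0, scale=scale))

# Example with synthetic rainfall values (mm) around one station
rain = np.array([0.0, 0.2, 1.5, 4.0, 12.3, 30.1, 0.0, 7.8])
c, s = weibull_shape_scale(rain)
print(c, s, exceedance_probability(25.0, c, s))
```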
Abstract:
Aim: Positive regional correlations between biodiversity and human population have been detected for several taxonomic groups and geographical regions. Such correlations could have important conservation implications and have been mainly attributed to ecological factors, with little testing for an artefactual explanation: more populated regions may show higher biodiversity because they are more thoroughly surveyed. We tested the hypothesis that the correlation between people and herptile diversity in Europe is influenced by survey effort.
Abstract:
The increasing integration of renewable energies in the electricity grid contributes considerably to achieving the European Union's goals on energy and greenhouse gas (GHG) emissions reduction. However, it also brings problems for grid management. Large-scale energy storage can provide the means for better integration of renewable energy sources, for balancing supply and demand, for increasing energy security, for better management of the grid, and for converging towards a low-carbon economy. Geological formations have the potential to store large volumes of fluids with minimal impact on the environment and society. One way to achieve large-scale energy storage is to use the storage capacity of geological reservoirs. In fact, there are several viable technologies for underground energy storage, as well as several types of underground reservoirs that can be considered. The geological energy storage technologies considered in this research were: Underground Gas Storage (UGS), Hydrogen Storage (HS), Compressed Air Energy Storage (CAES), Underground Pumped Hydro Storage (UPHS) and Thermal Energy Storage (TES). For these different types of underground energy storage technologies, several types of geological reservoirs can be suitable, namely: depleted hydrocarbon reservoirs, aquifers, salt formations and caverns, engineered rock caverns and abandoned mines. Specific site-screening criteria apply to each of these reservoir types and technologies, and they determine the viability of the reservoir itself and of the technology for any particular site. This paper presents a review of the criteria applied within the scope of the Portuguese contribution to the EU-funded project ESTMAP – Energy Storage Mapping and Planning.