859 results for electricity distribution networks


Relevance: 30.00%

Abstract:

Supply chain operations directly affect service levels. Decisions on the configuration of facilities are generally based on overall cost, leaving out the efficiency of each unit. By decomposing the supply chain superstructure, an efficiency analysis of the facilities (warehouses or distribution centers) that serve customers can be easily implemented. With the proposed algorithm, the selection of a facility is based on service-level maximization and not just cost minimization, as the analysis filters all feasible solutions using the Data Envelopment Analysis (DEA) technique. Through multiple iterations, solutions are filtered via DEA and only the efficient ones are selected, leading to cost minimization. In this work, the problem of optimal supply chain network design is addressed with a DEA-based algorithm: a Branch and Efficiency (B&E) algorithm is deployed for its solution. In this DEA approach, each solution (a potentially installed warehouse, plant, etc.) is treated as a Decision Making Unit and is thus characterized by inputs and outputs. Through additional constraints named “efficiency cuts”, the algorithm selects only efficient solutions, yielding better objective function values. The applicability of the proposed algorithm is demonstrated through illustrative examples.
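As a rough illustration of the DEA filtering step (a sketch, not the authors' B&E implementation), each candidate facility can be scored as a Decision Making Unit with the standard CCR multiplier model solved as a linear program; the cost input and service-level output below are invented for the example.

```python
# Minimal CCR (input-oriented, multiplier form) DEA sketch with scipy.
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """CCR efficiency of DMU o. X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-Y[o], np.zeros(m)])          # maximize u . y_o
    A_ub = np.hstack([Y, -X])                         # u . y_j - v . x_j <= 0, all j
    A_eq = np.concatenate([np.zeros(s), X[o]])[None]  # v . x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    return -res.fun

X = np.array([[100.0], [120.0], [90.0]])   # invented: facility operating cost
Y = np.array([[0.95], [0.99], [0.80]])     # invented: achieved service level
efficient = [j for j in range(len(X)) if dea_efficiency(X, Y, j) >= 1 - 1e-6]
```

An "efficiency cut" in the spirit of the abstract would then exclude the inefficient candidates from the subsequent search.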

Relevance: 30.00%

Abstract:

Recent research into resting-state functional magnetic resonance imaging (fMRI) has shown that the brain is very active during rest. This thesis uses blood oxygenation level dependent (BOLD) signals to investigate the spatial and temporal functional network information found within resting-state data, assessing the feasibility of extracting functional connectivity networks using different methods as well as the dynamic variability within some of those methods. Furthermore, this work examines whether valid networks can be produced from a sparsely sampled subset of the original data.

In this work we utilize four main methods: independent component analysis (ICA), principal component analysis (PCA), correlation, and a point-processing technique. Each method comes with unique assumptions, as well as strengths and limitations, in exploring how the resting-state components interact in space and time.

Correlation is perhaps the simplest technique. Using it, resting-state patterns can be identified based on how similar each voxel's time profile is to a seed region's time profile. However, this method requires a seed region and can only identify one resting-state network at a time. This simple correlation technique is able to reproduce the resting-state network using data from a single subject's scan session as well as from 16 subjects.
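In essence, seed-based mapping is a Pearson correlation between the seed's time course and every voxel's time course; a minimal numpy sketch (the array shapes are assumptions, not values from the thesis):

```python
import numpy as np

def seed_correlation_map(data, seed_ts):
    """Correlate each voxel with the seed. data: (n_voxels, n_timepoints)."""
    d = data - data.mean(axis=1, keepdims=True)
    s = seed_ts - seed_ts.mean()
    return (d @ s) / np.sqrt((d ** 2).sum(axis=1) * (s ** 2).sum())
```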

Independent component analysis, the second technique, has established software implementations. ICA can extract multiple components from a data set in a single analysis. The disadvantage is that the resting-state networks it produces are all independent of each other, under the assumption that the spatial pattern of functional connectivity is the same across all time points. ICA successfully reproduces resting-state connectivity patterns for both a single subject and a 16-subject concatenated data set.
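One such established implementation is FastICA; a hedged sketch of spatial ICA on a voxels-by-time matrix (random placeholder data, assumed shapes):

```python
import numpy as np
from sklearn.decomposition import FastICA

data = np.random.randn(5000, 200)        # placeholder: (n_voxels, n_timepoints)
ica = FastICA(n_components=20, random_state=0, max_iter=500)
spatial_maps = ica.fit_transform(data)   # (n_voxels, 20): one map per component
time_courses = ica.mixing_               # (n_timepoints, 20): mixing matrix
```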

Using principal component analysis, the dimensionality of the data is compressed to find the directions in which the variance of the data is greatest. This method uses the same basic matrix math as ICA, with a few important differences that will be outlined later in this text. Using this method, different functional connectivity patterns are sometimes identifiable, but with a large amount of noise and variability.
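For reference, PCA on the same voxels-by-time matrix reduces to a singular value decomposition of the centered data (again a sketch with assumed shapes):

```python
import numpy as np

def pca_spatial_components(data, k):
    """First k spatial components of data: (n_voxels, n_timepoints)."""
    d = data - data.mean(axis=1, keepdims=True)
    U, S, Vt = np.linalg.svd(d, full_matrices=False)
    return U[:, :k], S[:k]  # spatial maps and their singular values
```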

To begin to investigate the dynamics of the functional connectivity, the correlation technique is used to compare the first and second halves of a scan session. Minor differences are discernible between the correlation results of the two halves. Further, a sliding-window technique is implemented to study the correlation coefficients over time using correlation windows of different sizes. This technique makes it apparent that the correlation level with the seed region is not static throughout the scan length.
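The sliding-window correlation itself is straightforward; a sketch (the window length in volumes is an assumed parameter):

```python
import numpy as np

def sliding_window_corr(seed_ts, voxel_ts, win):
    """Pearson correlation inside a window slid one volume at a time."""
    return np.array([np.corrcoef(seed_ts[t:t + win], voxel_ts[t:t + win])[0, 1]
                     for t in range(len(seed_ts) - win + 1)])
```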

The last method introduced, a point-processing method, is one of the more novel techniques because it does not require analysis of the continuous time course. Here, network information is extracted based on brief occurrences of high or low amplitude signals within a seed region. Because point processing uses fewer time points from the data, the statistical power of the results is lower, and there are larger variations in default mode network (DMN) patterns between subjects. Beyond the boosted computational efficiency, the benefit of using a point-process method is that the patterns produced for different seed regions do not have to be independent of one another.
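One common reading of such a point process (my gloss on the description above, not necessarily the thesis implementation) is to keep only the frames where the seed signal crosses a threshold and average them:

```python
import numpy as np

def point_process_map(data, seed_ts, z_thresh=1.0):
    """Average the volumes at which the seed exceeds z_thresh (in z-units)."""
    z = (seed_ts - seed_ts.mean()) / seed_ts.std()
    return data[:, z > z_thresh].mean(axis=1)  # data: (n_voxels, n_timepoints)
```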

This work compares four unique methods of identifying functional connectivity patterns. ICA is a technique that is currently used by many scientists studying functional connectivity patterns. The PCA technique is not optimal for the level of noise and the distribution of the data sets. The correlation technique is simple and obtains good results; however, a seed region is needed and the method assumes that the DMN regions are correlated throughout the entire scan. When the more dynamic aspects of correlation were examined, changing patterns of correlation were evident. The last, point-processing method produces promising results, identifying functional connectivity networks using only low and high amplitude BOLD signals.

Relevance: 30.00%

Abstract:

While molecular and cellular processes are often modeled as stochastic processes, such as Brownian motion, chemical reaction networks and gene regulatory networks, there are few attempts to program a molecular-scale process to physically implement stochastic processes. DNA has been used as a substrate for programming molecular interactions, but its applications are restricted to deterministic functions, and unfavorable properties such as slow processing, thermal annealing, aqueous solvents and difficult readout limit them to proof-of-concept purposes. To date, whether there exists a molecular process that can be programmed to implement stochastic processes for practical applications has remained unknown.

In this dissertation, a fully specified Resonance Energy Transfer (RET) network between chromophores is accurately fabricated via DNA self-assembly, and the exciton dynamics in the RET network physically implement a stochastic process, specifically a continuous-time Markov chain (CTMC), which has a direct mapping to the physical geometry of the chromophore network. Excited by a light source, a RET network generates random samples in the temporal domain in the form of fluorescence photons which can be detected by a photon detector. The intrinsic sampling distribution of a RET network is derived as a phase-type distribution configured by its CTMC model. The conclusion is that the exciton dynamics in a RET network implement a general and important class of stochastic processes that can be directly and accurately programmed and used for practical applications of photonics and optoelectronics. Different approaches to using RET networks exist with vast potential applications. As an entropy source that can directly generate samples from virtually arbitrary distributions, RET networks can benefit applications that rely on generating random samples such as 1) fluorescent taggants and 2) stochastic computing.
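To make the CTMC picture concrete (a generic Gillespie-style sketch, not the dissertation's code): treat each chromophore as a state, off-diagonal generator entries as transfer rates, and a detected photon as hitting an emissive absorbing state; the sampled hitting times then follow the phase-type distribution described above. The toy generator is invented.

```python
import numpy as np

def sample_hitting_time(Q, p0, absorbing, rng):
    """Draw one absorption time of a CTMC with generator Q and initial law p0."""
    state = rng.choice(len(p0), p=p0)
    t = 0.0
    while state not in absorbing:
        rate = -Q[state, state]            # total exit rate of the current state
        t += rng.exponential(1.0 / rate)   # exponential holding time
        jump = Q[state] / rate             # normalized off-diagonal rates
        jump[state] = 0.0
        state = rng.choice(len(jump), p=jump)
    return t

# Toy 3-state chain: two transfer states, one emissive absorbing state (index 2).
Q = np.array([[-2.0, 1.5, 0.5],
              [0.8, -1.8, 1.0],
              [0.0, 0.0, 0.0]])
rng = np.random.default_rng(1)
photon_times = [sample_hitting_time(Q, [1.0, 0.0, 0.0], {2}, rng)
                for _ in range(1000)]
```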

By using RET networks between chromophores to implement fluorescent taggants with temporally coded signatures, the taggant design is not constrained by resolvable dyes and has a significantly larger coding capacity than spectrally or lifetime coded fluorescent taggants. Meanwhile, the taggant detection process becomes highly efficient, and the Maximum Likelihood Estimation (MLE) based taggant identification guarantees high accuracy even with only a few hundred detected photons.
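A sketch of what the MLE identification could look like under the phase-type view from above (toy candidate models and stand-in photon times, not the dissertation's data): each candidate taggant is a phase-type model (initial law alpha, sub-generator S), and photon arrival times are scored with the density f(t) = alpha · exp(St) · s, where s = −S·1.

```python
import numpy as np
from scipy.linalg import expm

def log_likelihood(times, alpha, S):
    """Phase-type log-likelihood of photon arrival times."""
    s = -S.sum(axis=1)                      # exit-rate vector
    return float(sum(np.log(alpha @ expm(S * t) @ s) for t in times))

# Two invented candidate taggant signatures; identify by maximum likelihood.
candidates = {
    "taggant_A": (np.array([1.0, 0.0]), np.array([[-3.0, 2.0], [0.0, -1.0]])),
    "taggant_B": (np.array([1.0, 0.0]), np.array([[-1.0, 0.5], [0.0, -2.0]])),
}
times = np.random.default_rng(2).exponential(0.8, 300)  # stand-in photon data
best = max(candidates, key=lambda k: log_likelihood(times, *candidates[k]))
```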

Meanwhile, RET-based sampling units (RSU) can be constructed to accelerate probabilistic algorithms for wide applications in machine learning and data analytics. Because probabilistic algorithms often rely on iteratively sampling from parameterized distributions, they can be inefficient in practice on the deterministic hardware traditional computers use, especially for high-dimensional and complex problems. As an efficient universal sampling unit, the proposed RSU can be integrated into a processor / GPU as specialized functional units or organized as a discrete accelerator to bring substantial speedups and power savings.

Relevance: 30.00%

Abstract:

Following the approval of the Spanish Constitution, the Autonomous Region of Castilla-La Mancha has developed a full executive and legislative apparatus to implement its environmental protection policies. The new legislation (Law 9/1999, of 26 May) pursues the conservation and integral protection of the natural elements of the territory, appealing to new criteria such as the environmental quality of ecosystems or exceptional landscape value. The spread and declaration of new natural spaces have produced a dual geographical and territorial model: first, natural spaces located in rural mountainous areas with depopulation and aging problems; and second, natural spaces situated in densely populated areas.

Relevance: 30.00%

Abstract:

With the development of information technology, the theory and methodology of complex networks has been introduced into language research, representing the language system as a complex network composed of nodes and edges for quantitative analysis of language structure. The development of dependency grammar provides theoretical support for the construction of a treebank corpus, making a statistical analysis of complex networks possible. This paper introduces the theory and methodology of complex networks and builds dependency syntactic networks based on a treebank of speeches from the EEE-4 oral test. Through analysis of the overall characteristics of the networks, including the number of edges, the number of nodes, the average degree, the average path length, the network centrality and the degree distribution, it aims to find potential differences and similarities in the networks between various grades of speaking performance. Through clustering analysis, this research intends to demonstrate the discriminating power of the network parameters and provide a potential reference for scoring speaking performance.
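The global measures listed are standard graph statistics; as a sketch, they could be computed per speech with networkx (the toy head-to-dependent edges are invented):

```python
import networkx as nx

edges = [("saw", "I"), ("saw", "dog"), ("dog", "the")]  # head -> dependent
G = nx.Graph(edges)

stats = {
    "nodes": G.number_of_nodes(),
    "edges": G.number_of_edges(),
    "avg_degree": 2 * G.number_of_edges() / G.number_of_nodes(),
    "avg_path_length": nx.average_shortest_path_length(G),
    "degree_centrality": nx.degree_centrality(G),
    "degree_distribution": sorted(d for _, d in G.degree()),
}
```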

Relevance: 30.00%

Abstract:

This work focuses on the study of circular migration between America and Europe, particularly the discussion of knowledge transfer and the way social networks reconfigure how information is distributed among people who have left their own country for labor or academic reasons. The main purpose of this work is to study the impact of social media use on migration flows between Mexico and Spain, more specifically its use by Mexican migrants who moved abroad for multiple years, principally for educational purposes, and then returned to their respective locations in Mexico seeking to integrate themselves into the labor market. Our data collection concentrated exclusively on a group created on Facebook by Mexicans who mostly reside in Barcelona, Spain, or wish to travel to the city for economic, educational or tourist reasons. The results of this research show that while social networks are spaces for exchange and integration, there is a clear tendency by this group to "narrow lines" and to look back to their homeland, slowing the process of opening up socially in their new context.

Relevance: 30.00%

Abstract:

This study considers a dual-hop cognitive inter-vehicular relay-assisted communication system where all communication links are non-line-of-sight and their fading is modelled by the double Rayleigh fading distribution. Road-side relays (or access points) implementing the decode-and-forward relaying protocol are employed, and one of them is selected according to a predetermined policy to enable communication between vehicles. The performance of the considered cognitive cooperative system is investigated for Kth best partial and full relay selection (RS) as well as for two distinct fading scenarios. In the first scenario, all channels are double Rayleigh distributed. In the second scenario, only the secondary source-to-relay and relay-to-destination channels are considered to be subject to double Rayleigh fading, whereas channels between the secondary transmitters and the primary user are modelled by the Rayleigh distribution. Exact and approximate expressions for the outage probability performance of all considered RS policies and fading scenarios are presented. In addition to the analytical results, complementary performance evaluation results have been obtained by means of Monte Carlo simulations. The perfect match between these two sets of results verifies the accuracy of the proposed mathematical analysis.
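As a flavor of the Monte Carlo side (not the paper's exact system model), the sketch below estimates the outage probability of a single double-Rayleigh hop and of best-relay selection over K candidates; the average SNR and outage threshold are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 1_000_000, 3
snr = 10 ** (10.0 / 10)   # 10 dB average SNR (assumed)
gamma_th = 1.0            # outage threshold (assumed)

# Double-Rayleigh amplitude: product of two unit-power Rayleigh fades.
h = rng.rayleigh(np.sqrt(0.5), n) * rng.rayleigh(np.sqrt(0.5), n)
p_out = np.mean(snr * h ** 2 < gamma_th)

# Best relay out of K candidates on one hop (full RS, Kth best with K=1).
hk = rng.rayleigh(np.sqrt(0.5), (K, n)) * rng.rayleigh(np.sqrt(0.5), (K, n))
p_out_rs = np.mean(snr * hk.max(axis=0) ** 2 < gamma_th)
print(p_out, p_out_rs)
```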

Relevance: 30.00%

Abstract:

Market research is often conducted through conventional methods such as surveys, focus groups and interviews. But the drawbacks of these methods are that they can be costly and time-consuming. This study develops a new method, based on a combination of standard techniques like sentiment analysis and normalisation, to conduct market research in a manner that is free and quick. The method can be used in many application areas, but this study focuses mainly on the veganism market to identify vegan food preferences in the form of a profile. Several food words are identified, along with their distribution between positive and negative sentiments in the profile. Surprisingly, non-vegan foods such as cheese, cake, milk, pizza and chicken dominate the profile, indicating that there is a significant market for vegan-suitable alternatives to such foods. Meanwhile, vegan-suitable foods such as coconut, potato, blueberries, kale and tofu also make strong appearances in the profile. Validation is performed by using the method on Volkswagen vehicle data to identify positive and negative sentiment across five car models. Some results were found to be consistent with sales figures and expert reviews, while others were inconsistent. The reliability of the method is therefore questionable, so the results should be used with caution.
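The paper names only sentiment analysis and normalisation as ingredients; one way such a profile could be assembled (with NLTK's VADER as a stand-in scorer and invented example posts) is:

```python
from collections import Counter
from nltk.sentiment.vader import SentimentIntensityAnalyzer  # needs nltk.download('vader_lexicon')

foods = {"cheese", "tofu", "kale"}                 # invented watch-list
posts = ["i really miss cheese", "the tofu scramble was great"]

sia = SentimentIntensityAnalyzer()
pos, neg = Counter(), Counter()
for post in posts:
    score = sia.polarity_scores(post)["compound"]  # compound score in [-1, 1]
    for word in foods & set(post.split()):
        (pos if score >= 0 else neg)[word] += 1

# Normalised profile: share of positive mentions per food word.
profile = {w: pos[w] / (pos[w] + neg[w]) for w in pos | neg}
```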

Relevance: 30.00%

Abstract:

The Physical Internet (PI) is an initiative that identifies several symptoms of inefficiency and unsustainability in logistics systems and addresses them by proposing a new paradigm called hyperconnected logistics. Similar to the Digital Internet, which connects thousands of personal and local computer networks, the PI will connect today's fragmented logistics systems. The main goal is to improve the performance of logistics systems from economic, environmental and social points of view. Focusing specifically on distribution systems, this thesis questions the order of magnitude of the performance gain obtained by exploiting PI-enabled hyperconnected distribution. It also addresses the characterization of hyperconnected distribution planning. To answer the first question, an exploratory research approach based on optimization modelling is applied, in which current and potential distribution systems are modelled. Then, a set of realistic business samples is created, and their economic and environmental performance is assessed while targeting multiple levels of social performance. A conceptual planning framework, including mathematical modelling, is proposed to support decision making in hyperconnected distribution systems. Based on the results of our study, we demonstrated that a substantial gain can be achieved by migrating to hyperconnected distribution. We also demonstrated that the magnitude of the gain varies with the characteristics of the activities and the targeted social performance. Since the Physical Internet is a new topic, Chapter 1 briefly presents the PI and hyperconnectivity. Chapter 2 discusses the foundations, objective and methodology of the research; the challenges encountered during this research are described and the type of contributions sought is highlighted. Chapter 3 presents the optimization models: influenced by the characteristics of current and potential distribution systems, three models based on the distribution system are developed. Chapter 4 deals with the characterization of the business samples as well as the modelling and calibration of the parameters used in the models. The results of the exploratory research are presented in Chapter 5. Chapter 6 describes the conceptual planning framework for hyperconnected distribution. Chapter 7 summarizes the content of the thesis and highlights the main contributions; in addition, it identifies the limitations of the research and potential avenues for future research.

Relevance: 30.00%

Abstract:

Most major cities in the eastern United States have air quality deemed unhealthy by the EPA under a set of regulations known as the National Ambient Air Quality Standards (NAAQS). The worst air quality in Maryland is measured in Edgewood, MD, a small community located along the Chesapeake Bay and generally downwind of Baltimore during hot, summertime days. Direct measurements and numerical simulations were used to investigate how meteorology and chemistry conspire to create adverse levels of photochemical smog, especially at this coastal location. Ozone (O3) and oxidized reactive nitrogen (NOy), a family of ozone precursors, were measured over the Chesapeake Bay during a ten-day experiment in July 2011 to better understand the formation of ozone over the Bay and its impact on coastal communities such as Edgewood. Ozone over the Bay during the afternoon was 10% to 20% higher than at the closest upwind ground sites. A combination of complex boundary layer dynamics, deposition rates, and unaccounted-for marine emissions plays an integral role in the regional maximum of ozone over the Bay. The CAMx regional air quality model was assessed and enhanced through comparison with data from NASA's 2011 DISCOVER-AQ field campaign. Comparisons show a model overestimate of NOy by +86.2% and a model underestimate of formaldehyde (HCHO) by –28.3%. I present a revised model framework that better captures these observations and the response of ozone to reductions of precursor emissions. Incremental controls on electricity generating stations will produce greater benefits for surface ozone, while additional controls on mobile sources may yield less benefit because cars emit less pollution than expected. Model results also indicate that as ozone concentrations improve with decreasing anthropogenic emissions, the photochemical lifetime of tropospheric ozone increases. The lifetime of ozone lengthens because the two primary gas-phase sinks for odd oxygen (Ox ≈ NO2 + O3), attack by hydroperoxyl radicals (HO2) on ozone and formation of nitrate, weaken with decreasing pollutant emissions. This unintended consequence of air quality regulation causes pollutants to persist longer in the atmosphere, and indicates that pollutant transport between states and countries will likely play a greater role in the future.
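For reference, the two sinks named here correspond to the standard gas-phase reactions HO2 + O3 → OH + 2 O2 and OH + NO2 + M → HNO3 + M (my gloss, not notation taken from the dissertation); as precursor emissions fall, both reactions slow, so odd oxygen survives longer.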

Relevance: 30.00%

Abstract:

The value of integrating heat storage into a geothermal district heating system has been investigated. The behaviour of the system under a novel operational strategy has been simulated, focusing on the energetic, economic and environmental effects of incorporating the heat storage within the system. A typical geothermal district heating system consists of several production wells, a system of pipelines for the transportation of the hot water to end-users, one or more re-injection wells and peak-up devices (usually fossil-fuel boilers). Traditionally in these systems, the production wells change their production rate throughout the day according to heat demand, and if their maximum capacity is exceeded the peak-up devices are used to meet the balance of the heat demand. In this study, it is proposed to maintain a constant geothermal production and add heat storage into the network. Hot water will then be stored when heat demand is lower than the production, and the stored hot water will be released into the system to cover the peak demands (or part of them). It is not intended to totally phase out the peak-up devices, but to decrease their use, as these will often be installed anyway for back-up purposes. Both the integration of heat storage in such a system and the novel operational strategy are the main novelties of this thesis. A robust algorithm for the sizing of these systems has been developed. The main inputs are the geothermal production data, the heat demand data throughout one year or more, and the topology of the installation. The outputs are the sizing of the whole system, including the necessary number of production wells, the size of the heat storage and the dimensions of the pipelines, amongst others. The results provide several useful insights into the initial design considerations for these systems, emphasizing particularly the importance of heat losses. Simulations are carried out for three different cases of sizing of the installation (small, medium and large) to examine the influence of system scale. In the second phase of work, two algorithms are developed which study in detail the operation of the installation throughout a random day and a whole year, respectively. The first algorithm can be a potentially powerful tool for the operators of the installation, who can know a priori how to operate the installation on a given day for a given heat demand. The second algorithm is used to obtain the amount of electricity used by the pumps as well as the amount of fuel used by the peak-up boilers over a whole year. These comprise the main operational costs of the installation and are among the main inputs of the third part of the study. In the third part of the study, an integrated energetic, economic and environmental analysis of the studied installation is carried out, together with a comparison with the traditional case. The results show that by implementing heat storage under the novel operational strategy, heat is generated more cheaply, as all the financial indices improve, more geothermal energy is utilised and less fuel is used in the peak-up boilers, with subsequent environmental benefits, when compared to the traditional case. Furthermore, it is shown that the most attractive case of sizing is the large one, although the addition of the heat storage has the greatest impact in the medium case of sizing. In other words, the geothermal component of the installation should be sized as large as possible.
This analysis indicates that the proposed solution is beneficial from energetic, economic and environmental perspectives. Therefore, it can be stated that the aim of this study is achieved to its full potential. Furthermore, the new models for the sizing, operation and economic/energetic/environmental analyses of these kinds of systems can be used with few adaptations for real cases, making the practical applicability of this study evident. With this study as a starting point, further work could include the integration of these systems with end-user demands, further analysis of component parts of the installation (such as the heat exchangers), and the integration of a heat pump to maximise utilisation of geothermal energy.
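A minimal sketch of the operating rule described above (constant geothermal output; store the surplus, discharge at peaks, boiler covers the remainder); the units, storage capacity and charging efficiency are assumptions, not values from the thesis:

```python
def dispatch(demand, geo_output, store_cap, charge_eff=0.95):
    """Hourly dispatch: surplus geothermal heat charges the store, stored heat
    covers peaks, and the peak-up boiler supplies any remaining shortfall."""
    soc, boiler = 0.0, []                 # state of charge, boiler output
    for d in demand:
        surplus = geo_output - d
        if surplus >= 0:
            soc = min(store_cap, soc + charge_eff * surplus)
            boiler.append(0.0)
        else:
            from_store = min(soc, -surplus)
            soc -= from_store
            boiler.append(-surplus - from_store)
    return boiler

# Example: six hours of demand (MWh) against a constant 10 MWh/h geothermal feed.
print(dispatch([6, 8, 12, 15, 9, 5], geo_output=10, store_cap=8))
```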

Relevance: 30.00%

Abstract:

Rimicaris exoculata is a deep-sea hydrothermal vent shrimp whose enlarged gill chamber houses a complex trophic epibiotic community. Its gut harbours an autochthonous and distinct microbial community. This species dominates the megafauna of hydrothermal ecosystems along the Mid-Atlantic Ridge, regardless of the contrasting geochemical conditions prevailing in them. Here, the resident gut epibiont community at four contrasting hydrothermal vent sites (Rainbow/TAG/Logatchev/Ashadze) was analysed and combined with previous data to evaluate the possible influence of site location, using 16S rRNA surveys and microscopic observations (TEM, SEM and FISH analyses). Filamentous epibionts inserted between the microvilli of the epithelial cells were observed in all examined samples. The results confirmed the affiliation of the resident gut community to the Deferribacteres, Mollicutes, Epsilonproteobacteria and, to a lesser extent, Gammaproteobacteria lineages. Still, a single Deferribacteres phylotype was retrieved at all sites. Four Mollicutes-related OTUs were distinguished, one identified only in Rainbow specimens. The topology of median-joining networks of ribotypes illustrated a community diversification possibly following demographic expansions, suggesting a more ancient evolutionary history and/or a larger effective population size at Rainbow. Finally, the gill chamber community distribution was also analysed through ribotype networks based on sequences from R. exoculata collected at the Rainbow/Snake Pit/TAG/Logatchev/Ashadze sites. The results allow refinement of hypotheses on the epibionts' role and transmission pathways.

Relevance: 30.00%

Abstract:

Over the last few years, football has entered a period of accelerated access to large amounts of match analysis data. Social networks have been adopted to reveal the structure and organization of the web of interactions, such as the players' passing distribution tendencies. In this study we investigated the influence of ball possession characteristics on the competitive success of Spanish La Liga teams. The sample was composed of raw OPTA passing distribution data (n = 269,055 passes) obtained from 380 matches involving all 20 teams of the 2012/2013 season. We then generated 760 adjacency matrices and their corresponding social networks using Node XL software. For each network we calculated three team performance measures to evaluate ball possession tendencies: graph density, average clustering and passing intensity. Three levels of competitive success were determined using two-step cluster analysis based on two input variables: the total points scored by each team and the ratio of goals scored to goals conceded. Our analyses revealed significant differences between levels of competitive performance on all three team performance measures (p < .001). Bottom-ranked teams had fewer connected players (graph density) and fewer triangulations (average clustering) than intermediate and top-ranked teams. All three clusters, however, diverged in terms of passing intensity, with top-ranked teams completing more passes per unit of possession time than intermediate and bottom-ranked teams. Finally, similarities and dissimilarities in the signatures of play of the 20 teams were displayed using Cohen's effect size. In sum, the findings suggest that competitive performance was influenced by the density and connectivity of the teams, mainly due to the way teams use their possession time to give intensity to their game.
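To make the three measures concrete, a sketch of how they could be computed for one match with networkx (invented player names and possession time; the study itself used Node XL):

```python
import networkx as nx

# Hypothetical pass events: (passer, receiver).
passes = [("Busquets", "Xavi"), ("Xavi", "Iniesta"), ("Iniesta", "Messi"),
          ("Xavi", "Messi"), ("Messi", "Xavi")]
G = nx.DiGraph()
G.add_edges_from(passes)

graph_density = nx.density(G)                              # connected players
avg_clustering = nx.average_clustering(G.to_undirected())  # triangulations
possession_minutes = 28.0                                  # invented
passing_intensity = len(passes) / possession_minutes       # passes per minute
```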