987 results for Growing Pyramidal Networks
Abstract:
A method is presented to model server unreliability in closed queuing networks. Breakdowns and repairs of servers, assumed to be time-dependent, are modeled using virtual customers and virtual servers in the system. The problem is thus converted into a closed queue with all reliable servers and preemptive resume priority centers. Several recent preemptive priority approximations, as well as one proposed here, are used in the analysis. This method has approximately the same computational requirements as mean-value analysis for a network of identical dimensions and is therefore very efficient.
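The comparison with mean-value analysis (MVA) above refers to the standard per-customer recursion for closed product-form networks. As a point of reference, here is a minimal sketch of exact single-class MVA with reliable queueing centers; the service demands and population are illustrative, and this is not the paper's virtual-customer construction.

```python
# Minimal sketch of exact Mean-Value Analysis (MVA) for a closed, single-class,
# product-form queueing network with reliable FCFS servers. It illustrates the
# per-customer recursion whose cost the abstract's method is compared against;
# the service demands and population below are hypothetical.

def mva(service_demands, population):
    """service_demands[i] = mean service demand D_i at queueing center i."""
    M = len(service_demands)
    queue_len = [0.0] * M            # Q_i(0) = 0
    throughput = 0.0
    for n in range(1, population + 1):
        # Residence time at each center: R_i(n) = D_i * (1 + Q_i(n - 1))
        resid = [d * (1.0 + q) for d, q in zip(service_demands, queue_len)]
        throughput = n / sum(resid)                  # X(n) = n / sum_i R_i(n)
        queue_len = [throughput * r for r in resid]  # Little's law per center
    return throughput, queue_len

if __name__ == "__main__":
    X, Q = mva([0.05, 0.03, 0.02], population=20)    # hypothetical demands
    print("throughput:", round(X, 3), "queue lengths:", [round(q, 2) for q in Q])
```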
Abstract:
This paper deals with the development and performance evaluation of three modified versions of a scheme proposed for medium access control in local area networks. The original scheme implements collision-free and fair medium arbitration by using a control wire in conjunction with a data bus. The modifications suggested in this paper are intended to provide multiple priority levels in local area networks.
Abstract:
We study wireless multihop energy harvesting sensor networks employed for random field estimation. The sensors sense the random field and generate data that is to be sent to a fusion node for estimation. Each sensor has an energy harvesting source and can operate in two modes: Wake and Sleep. We consider the problem of obtaining jointly optimal power control, routing and scheduling policies that ensure a fair utilization of network resources. This problem has a high computational complexity. Therefore, we develop a computationally efficient suboptimal approach to obtain good solutions to this problem. We study the optimal solution and performance of the suboptimal approach through some numerical examples.
Abstract:
The objective of the present paper is to select the best compromise irrigation planning strategy for the case study of the Jayakwadi irrigation project, Maharashtra, India. A four-phase methodology is employed. In phase 1, separate linear programming (LP) models are formulated for the three objectives, namely net economic benefits, agricultural production, and labour employment. In phase 2, nondominated (compromise) irrigation planning strategies are generated using the constraint method of multiobjective optimisation. In phase 3, a Kohonen neural network (KNN) based classification algorithm is employed to sort the nondominated irrigation planning strategies into smaller groups. In phase 4, a multicriterion analysis (MCA) technique, namely Compromise Programming, is applied to rank the strategies obtained from phase 3. It is concluded that the above integrated methodology is effective for modeling multiobjective irrigation planning problems, and the present approach can be extended to situations where the number of irrigation planning strategies is even larger. (c) 2004 Elsevier Ltd. All rights reserved.
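Phase 4 above applies Compromise Programming, which ranks nondominated alternatives by their weighted Lp distance to the ideal point. A minimal sketch of that ranking step follows; the strategy scores, weights, and p = 2 are hypothetical and only illustrate the metric, not the Jayakwadi data.

```python
# Minimal sketch of Compromise Programming: rank alternatives by their
# weighted L_p distance to the ideal point across several objectives
# (all assumed to be maximized). Scores, weights, and p are illustrative.

def compromise_rank(alternatives, weights, p=2.0):
    """alternatives: {name: [f_1, ..., f_J]} with all objectives maximized."""
    J = len(weights)
    ideal = [max(v[j] for v in alternatives.values()) for j in range(J)]
    worst = [min(v[j] for v in alternatives.values()) for j in range(J)]
    def distance(values):
        total = 0.0
        for j in range(J):
            span = (ideal[j] - worst[j]) or 1.0      # avoid division by zero
            total += (weights[j] * (ideal[j] - values[j]) / span) ** p
        return total ** (1.0 / p)
    # Smallest distance to the ideal point = best compromise strategy
    return sorted(alternatives, key=lambda name: distance(alternatives[name]))

strategies = {                 # hypothetical benefit / production / employment scores
    "S1": [120.0, 85.0, 40.0],
    "S2": [110.0, 95.0, 45.0],
    "S3": [130.0, 80.0, 35.0],
}
print(compromise_rank(strategies, weights=[0.4, 0.4, 0.2]))
```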
Abstract:
It has been shown in an earlier paper that 1-realizability of a unate function F of up to six variables corresponds to 'compactness' of the plot of F on a Karnaugh map. Here, an algorithm is presented to synthesize on a Karnaugh map a non-threshold function of up to six variables with the minimum number of threshold gates connected in cascade. Incompletely specified functions can also be treated. No resort to inequalities is made, and no pre-processing (such as positivizing and ordering) of the given switching function is required.
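Unateness is the precondition for single-threshold-gate realizability discussed above: the function must be monotone in each variable after possibly complementing it. A minimal truth-table check is sketched below; the majority function used as the example is illustrative, and the sketch does not perform the Karnaugh-map cascade synthesis itself.

```python
# Minimal sketch: check whether a Boolean function (given as a truth table)
# is unate, i.e. monotone in each variable after possibly complementing it.
# Threshold (linearly separable) functions are necessarily unate, which is
# why unateness is the natural setting in the abstract. Example is made up.

from itertools import product

def is_unate(truth, n_vars):
    """truth: dict mapping n-bit input tuples to 0/1."""
    for i in range(n_vars):
        nondecreasing = nonincreasing = True
        for x in truth:
            if x[i] == 0:
                y = x[:i] + (1,) + x[i + 1:]     # flip variable i from 0 to 1
                if truth[x] > truth[y]:
                    nondecreasing = False
                if truth[x] < truth[y]:
                    nonincreasing = False
        if not (nondecreasing or nonincreasing):
            return False
    return True

n = 3
majority = {x: int(sum(x) >= 2) for x in product((0, 1), repeat=n)}  # a threshold function
print(is_unate(majority, n))                     # True
```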
Abstract:
In a storage system where individual storage nodes are prone to failure, the redundant storage of data in a distributed manner across multiple nodes is a must to ensure reliability. Reed-Solomon codes possess the reconstruction property under which the stored data can be recovered by connecting to any k of the n nodes in the network across which data is dispersed. This property can be shown to lead to vastly improved network reliability over simple replication schemes. Also of interest in such storage systems is the minimization of the repair bandwidth, i.e., the amount of data needed to be downloaded from the network in order to repair a single failed node. Reed-Solomon codes perform poorly here as they require the entire data to be downloaded. Regenerating codes are a new class of codes which minimize the repair bandwidth while retaining the reconstruction property. This paper provides an overview of regenerating codes including a discussion on the explicit construction of optimum codes.
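The reconstruction property mentioned above, recovering the data from any k of the n nodes, can be illustrated with a plain Reed-Solomon-style sketch: store evaluations of a degree-(k-1) polynomial over a prime field and interpolate from any k shares. The field size, k, and n below are arbitrary, and the sketch deliberately ignores the repair-bandwidth question that regenerating codes address.

```python
# Minimal sketch of the (n, k) reconstruction property behind Reed-Solomon
# style distributed storage: store evaluations of a degree-(k-1) polynomial
# over GF(P) on n nodes, then recover all data from any k surviving nodes by
# Lagrange interpolation. Parameters are illustrative; repair bandwidth of
# regenerating codes is not modeled here.

P = 2**13 - 1                                    # a small prime field GF(P)

def encode(data, n):
    """data: list of k field elements (polynomial coefficients)."""
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(data)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares, k):
    """Recover the k data symbols from any k (x, y) shares."""
    shares = shares[:k]
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(shares):
        # Build Lagrange basis polynomial l_i(X) = prod_{j != i} (X - x_j) / (x_i - x_j)
        basis = [1]                              # coefficients, lowest degree first
        denom = 1
        for j, (xj, _) in enumerate(shares):
            if j == i:
                continue
            basis = [(-xj * basis[0]) % P] + [
                (basis[d - 1] - xj * basis[d]) % P for d in range(1, len(basis))
            ] + [basis[-1]]
            denom = denom * (xi - xj) % P
        scale = yi * pow(denom, P - 2, P) % P    # divide by denom via Fermat inverse
        for d in range(k):
            coeffs[d] = (coeffs[d] + scale * basis[d]) % P
    return coeffs

data = [42, 7, 100, 5]                           # k = 4 data symbols
shares = encode(data, n=8)                       # dispersed across n = 8 nodes
print(reconstruct([shares[1], shares[4], shares[6], shares[7]], k=4) == data)  # True
```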
Abstract:
Artificial neural networks (ANNs) have shown great promise in modeling circuit parameters for computer aided design applications. Leakage currents, which depend on process parameters, supply voltage, and temperature, can be modeled accurately with ANNs. However, the complex nature of the ANN model, with the standard sigmoidal activation functions, does not allow analytical expressions for its mean and variance. We propose the use of a new activation function that allows us to derive an analytical expression for the mean and a semi-analytical expression for the variance of the ANN-based leakage model. To the best of our knowledge this is the first result in this direction. Our neural network model also includes the voltage and temperature as input parameters, thereby enabling voltage- and temperature-aware statistical leakage analysis (SLA). All existing SLA frameworks are closely tied to the exponential polynomial leakage model and hence fail to work with sophisticated ANN models. In this paper, we also set up an SLA framework that can efficiently work with these ANN models. Results show that the cumulative distribution function of the leakage current of ISCAS'85 circuits can be predicted accurately, with the error in mean and standard deviation, compared to Monte Carlo-based simulations, being less than 1% and 2% respectively across a range of voltage and temperature values.
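For context, the Monte Carlo baseline that the analytical moments are compared against can be sketched as follows: push Gaussian process-parameter samples through a toy one-hidden-layer leakage model and estimate the mean and standard deviation empirically. The weights, input statistics, and tanh activation here are made up; with a standard sigmoidal activation such as this, no closed-form moments are available, which is the gap the paper's activation function is designed to close.

```python
# Minimal sketch of the Monte Carlo baseline for statistical leakage analysis:
# estimate mean and standard deviation of leakage predicted by a toy
# one-hidden-layer ANN model with Gaussian process-parameter inputs.
# Weights, input statistics, and the tanh activation are illustrative only.

import math
import random

random.seed(0)

# Toy model: leakage = exp(w2 . tanh(W1 x + b1) + b2), x = (dVth, V, T-normalized)
W1 = [[0.8, 0.3, 0.1], [-0.5, 0.6, 0.2]]
b1 = [0.0, 0.1]
w2 = [1.2, 0.7]
b2 = 2.0

def leakage(x):
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return math.exp(sum(w * h for w, h in zip(w2, hidden)) + b2)   # log-domain model

samples = [leakage([random.gauss(0, 0.1), random.gauss(1.0, 0.05), random.gauss(0.5, 0.1)])
           for _ in range(20000)]
mean = sum(samples) / len(samples)
std = math.sqrt(sum((s - mean) ** 2 for s in samples) / (len(samples) - 1))
print(f"MC mean ~ {mean:.3f}, std ~ {std:.3f}")
```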
Abstract:
The profitability of fast-growing trees was investigated in the northeastern and eastern provinces of Thailand. The financial, economic, and tentative environmental-economic profitability was determined separately for three fast-growing plantation tree species and for three categories of plantation managers: the private industry, the state (the Royal Forest Department) and the farmers. Fast-growing tree crops were also compared with teak (Tectona grandis), a traditional medium or long rotation species, and Para rubber (Hevea brasiliensis) which presently is the most common cultivated tree in Thailand. The optimal rotation for Eucalyptus camaldulensis pulpwood production was eight years. This was the most profitable species in pulpwood production. In sawlog production Acacia mangium and Melia azedarach showed a better financial profitability. Para rubber was more profitable and teak less profitable than the three fast-growing species. The economic profitability was higher than the financial one, and the tentative environmental-economic profitability was slightly higher than the economic profitability. The profitability of tree growing is sensitive to plantation yields and labour cost changes and especially to wood prices. Management options which aim at pulpwood production are more sensitive to input or output changes than those options which include sawlog production. There is an urgent need to improve the growth and yield data and to study the environmental impacts of tree plantations for all species and plantation types.
Abstract:
Uveal melanoma (UM) is the second most common primary intraocular cancer worldwide. It is a relatively rare cancer, but still the second most common type of primary malignant melanoma in humans. UM is a slowly growing tumor, and gives rise to distant metastasis mainly to the liver via the bloodstream. About 40% of patients with UM die of metastatic disease within 10 years of diagnosis, irrespective of the type of treatment. During the last decade, two main lines of research have aimed to achieve enhanced understanding of the metastasis process and accurate prognosis of patients with UM. One emphasizes the characteristics of tumor cells, particularly their nucleoli, and markers of proliferation, and the other the characteristics of tumor blood vessels. Of several morphometric measurements, the mean diameter of the ten largest nucleoli (MLN) has become the most widely applied. A large MLN has consistently been associated with a high likelihood of dying from UM. Blood vessels are of paramount importance in metastasis of UM. Different extravascular matrix patterns, such as loops and networks, can be seen in UM, and their presence is associated with death from metastatic melanoma. The density of microvessels is also of prognostic importance. This study was undertaken to help understand some histopathological factors that might contribute to the development of metastasis in UM patients. Factors that could be related to tumor progression to metastatic disease, namely nucleolar size (MLN), microvascular density (MVD), cell proliferation, and the insulin-like growth factor 1 receptor (IGF-1R), were investigated. The primary aim of this thesis was to study the relationship between prognostic factors such as tumor cell nucleolar size, proliferation, extravascular matrix patterns, and dissemination of UM, and to assess to what extent there is a relationship to metastasis. The secondary goal was to develop a multivariate model which includes MLN and cell proliferation in addition to MVD, and which would fit population-based, melanoma-related survival data better than previous models. I studied 167 patients with UM; metastasis could develop even a very long time after removal of the eye, and metastatic disease was the main cause of death, as documented in the Finnish Cancer Registry and on death certificates. Using an independent population-based data set, it was confirmed that MLN and extravascular matrix loops and networks were unrelated, independent predictors of survival in UM. It was also found that multivariate models including MVD in addition to MLN fitted survival data significantly better than models which excluded MVD. This supports the idea that both the characteristics of the blood vessels and of the cells are important, and a future direction would be to determine whether the gene expression profile is associated more with MVD or with MLN. The former relates to the host response to the tumor and may not be as tightly associated with the gene expression profile, yet it is most likely involved in the process of hematogenous metastasis. Because fresh tumor material is needed for reliable genetic analysis, such analysis could not be performed. Although noninvasive detection of certain extravascular matrix patterns is now technically possible in managing patients with UM, this study and tumor genetics suggest that such noninvasive methods will not fully capture the process of clinical metastasis.
Progress in resection and biopsy techniques is likely in the near future to provide fresh material for the ophthalmic pathologist to correlate angiographic data, histopathological characteristics such as MLN, and genetic data. When cell proliferation in UM was studied on the basis of Ki-67 immunoreactivity, this study supported the theory that tumors containing epithelioid cells grow faster and have a poorer prognosis. The cell proliferation index fitted the survival data best when combined with MVD, MLN, and the presence of epithelioid cells. Analogous to the finding that high MVD in primary UM is associated with a shorter time to metastasis than low MVD, high MVD in hepatic metastases tends to be associated with shorter survival after diagnosis of metastasis. Because the liver is the main organ for metastasis from UM, growth factors largely produced in the liver (hepatocyte growth factor, epidermal growth factor, and insulin-like growth factor-1 (IGF-1)), together with their receptors, may have a role in the homing and survival of metastatic cells. Therefore, the association between immunoreactivity for IGF-1R in primary UM and metastatic death was studied. It was found that immunoreactivity for IGF-1R did not independently predict metastasis from primary UM in my series.
Abstract:
In earlier work, nonisomorphic graphs have been converted into networks to realize multistage interconnection networks that are topologically nonequivalent to the Baseline network. The drawback of this technique is that these nonequivalent networks are not guaranteed to be self-routing, because each node in the graph model can be replaced by a (2 × 2) switch in any one of the four different configurations. Hence, the problem of routing in these networks remains unsolved. Moreover, nonisomorphic graphs were obtained by interconnecting bipartite loops in a heuristic manner; the heuristic nature of this procedure makes it difficult to guarantee full connectivity in large networks. We solve these problems through a direct approach, in which a matrix model for self-routing networks is developed. An example is given to show that this model encompasses nonequivalent self-routing networks. This approach has the additional advantage that the matrix model itself ensures full connectivity.
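The self-routing property at issue above can be illustrated on a Baseline-equivalent network such as the Omega (shuffle-exchange) network, where each 2 × 2 switch simply forwards a packet on the output named by the next destination-address bit. The sketch below shows this destination-tag routing; it is background for the routing problem, not the paper's matrix model.

```python
# Minimal sketch of destination-tag self-routing in an Omega (shuffle-exchange)
# network with N = 2**n inputs: each stage applies a perfect shuffle and then a
# 2x2 switch that overwrites the low-order bit of the packet's position with the
# next (most-significant-first) bit of the destination address. After n stages
# the packet sits at its destination. Network size and addresses are examples.

def route(src, dst, n):
    """Return the sequence of positions a packet visits from src to dst."""
    N = 1 << n
    pos, path = src, [src]
    for stage in range(n):
        pos = ((pos << 1) | (pos >> (n - 1))) & (N - 1)   # perfect shuffle (rotate left)
        bit = (dst >> (n - 1 - stage)) & 1                # next destination bit
        pos = (pos & ~1) | bit                            # switch exit chosen by that bit
        path.append(pos)
    return path

n = 3                                    # an 8 x 8 network
for src in range(1 << n):
    assert route(src, 5, n)[-1] == 5     # every source reaches destination 5
print(route(6, 5, n))                    # stage-by-stage positions from input 6 to output 5
```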
Abstract:
The work presented here has focused on the role of cation-chloride cotransporters (CCCs) in (1) the regulation of intracellular chloride concentration within postsynaptic neurons and (2) the consequent effects on the actions of the neurotransmitter gamma-aminobutyric acid (GABA) mediated by GABAA receptors (GABAARs) during development and in pathophysiological conditions such as epilepsy. In addition, (3) we found that a member of the CCC family, the K-Cl cotransporter isoform 2 (KCC2), has a structural role in the development of dendritic spines during the differentiation of pyramidal neurons. Despite the large number of publications dedicated to the regulation of intracellular Cl-, our understanding of the underlying mechanisms is not complete. Experiments on GABA actions under resting steady-state conditions have shown that the effect of GABA shifts from depolarizing to hyperpolarizing during the maturation of cortical neurons. However, it remains unclear whether conclusions from these steady-state measurements can be extrapolated to the highly dynamic situation within an intact and active neuronal network. Indeed, GABAergic signaling in active neuronal networks results in a continuous Cl- load, which must be constantly removed by efficient Cl- extrusion mechanisms. Therefore, it seems plausible to suggest that the key parameters are the efficacy and subcellular distribution of Cl- transporters rather than the polarity of steady-state GABA actions. A further related question is: what are the mechanisms of Cl- regulation and homeostasis during pathophysiological conditions such as epilepsy in adults and neonates? Here I present results that were obtained by means of a newly developed method for measuring the efficacy of K-Cl cotransport. In Study I, the developmental profile of KCC2 functionality was analyzed both in dissociated neuronal cultures and in acute hippocampal slices. A novel method of photolysis of caged GABA in combination with Cl- loading of the somata was used in this study to assess the extrusion efficacy of KCC2. We demonstrated that these two preparations exhibit different temporal profiles of functional KCC2 upregulation. In Study II, we reported an observation of highly distorted dendritic spines in neurons cultured from KCC2-/- embryos. During their development in the culture dish, KCC2-lacking neurons failed to develop mature, mushroom-shaped dendritic spines but instead maintained an immature phenotype of long, branching and extremely motile protrusions. It was shown that the role of KCC2 in spine maturation is not based on its transport activity, but is mediated by interactions with cytoskeletal proteins. The role of another important player in Cl- regulation, NKCC1, in the induction and maintenance of native Cl- gradients between the axon initial segment (AIS) and the soma was the subject of Study III. There we demonstrated that this transporter mediates the accumulation of Cl- in the axon initial segment of neocortical and hippocampal principal neurons. The results suggest that the reversal potential of the GABAA response triggered by distinct populations of interneurons shows large subcellular variations. Finally, a novel mechanism of fast post-translational upregulation of the membrane-inserted, functionally active KCC2 pool during in vivo neonatal seizures and epileptiform-like activity in vitro was identified and characterized in Study IV. The seizure-induced KCC2 upregulation may act as an intrinsic antiepileptogenic mechanism.
Abstract:
The world of mapping has changed. Earlier, only professional experts were responsible for map production, but today ordinary people without any training or experience can become map-makers. The number of online mapping sites and the number of volunteer mappers have increased significantly. Developments in technology, such as satellite navigation systems, Web 2.0, broadband Internet connections, and smartphones, have played a key role in enabling the rise of volunteered geographic information (VGI). As opening governmental data to the public is a current topic in many countries, the opening of high-quality geographical data has a central role in this study. The aim of this study is to investigate the quality of spatial data produced by volunteers by comparing it with map data produced by public authorities, to follow what occurs when spatial data are opened to users, and to examine the user profile of these volunteer mappers. A central part of this study is the OpenStreetMap project (OSM), whose aim is to create a map of the entire world through volunteer contributions. Anyone can become an OpenStreetMap contributor, and the data created by the volunteers are free for anyone to use without restrictive copyrights or license charges. In this study OpenStreetMap is investigated from two viewpoints. In the first part of the study, the aim was to investigate the quality of volunteered geographic information. A pilot project was implemented by following what occurs when high-resolution aerial imagery is released freely to OpenStreetMap contributors. The quality of VGI was investigated by comparing the OSM datasets with the map data of the National Land Survey of Finland (NLS). The quality of OpenStreetMap data was investigated by inspecting the positional accuracy and the completeness of the road datasets, as well as the differences in the attribute data between the studied datasets. The OSM community was also analysed, and the development of the OpenStreetMap data was followed by visual analysis. The aim of the second part of the study was to analyse the user profile of OpenStreetMap contributors, and to investigate how the contributors act when collecting data and editing OpenStreetMap. The aim was also to investigate what motivates users to map and how the quality of volunteered geographic information is perceived. The second part of the study was implemented by conducting a web survey of OpenStreetMap contributors. The results of the study show that the quality of OpenStreetMap data, compared with the data of the National Land Survey of Finland, can be considered good. OpenStreetMap differs from the map of the National Land Survey especially in its degree of uncertainty, for example because the completeness and uniformity of the map are not known. The results of the study reveal that opening spatial data notably increased the amount of data in the study area, and both the positional accuracy and the completeness improved significantly. The study confirms the earlier argument that only a few contributors have created the majority of the data in OpenStreetMap. The survey of OpenStreetMap users revealed that data are most often collected on foot or by bicycle using a GPS device, or by editing the map with the help of aerial imagery.
According to the responses, the users take part in the OpenStreetMap project because they want to make maps better and to produce maps containing up-to-date information that cannot be found on any other map. Almost all of the users use the maps themselves, the most popular methods being downloading the map onto a navigator or a mobile device. The users regard the quality of OpenStreetMap as good, especially because of the up-to-dateness and the accuracy of the map.
Abstract:
We propose a method to compute a probably approximately correct (PAC) normalized histogram of observations with a refresh rate of Θ(1) time units per histogram sample on a random geometric graph with noise-free links. The delay in computation is Θ(√n) time units. We further extend our approach to a network with noisy links. While the refresh rate remains Θ(1) time units per sample, the delay increases to Θ(√n log n). The number of transmissions in both cases is Θ(n) per histogram sample. The achieved Θ(1) refresh rate for PAC histogram computation is a significant improvement over the refresh rate of Θ(1/log n) for histogram computation in noiseless networks. We achieve this by operating in the supercritical thermodynamic regime, where large pathways for communication build up but the network may have more than one component. The largest component, however, will have an arbitrarily large fraction of nodes in order to enable approximate computation of the histogram to the desired level of accuracy. Operation in the supercritical thermodynamic regime also reduces energy consumption. A key step in the proof of our achievability result is the construction of a connected component having bounded degree and any desired fraction of nodes. This construction may also prove useful in other communication settings on the random geometric graph.
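The supercritical thermodynamic regime used above corresponds to a connection radius on the order of c/√n, for which the graph need not be connected but its largest component contains a large fraction of the nodes. A minimal simulation of this effect is sketched below; the constant c and the network size are arbitrary choices for illustration.

```python
# Minimal sketch of the supercritical thermodynamic regime on a random
# geometric graph: n nodes uniform in the unit square, edges between nodes
# within distance r = c / sqrt(n). For c above the percolation threshold the
# largest component holds a large fraction of nodes even though the graph
# need not be connected. The values of c and n below are illustrative.

import math
import random

random.seed(1)

def giant_component_fraction(n, c):
    pts = [(random.random(), random.random()) for _ in range(n)]
    r2 = (c / math.sqrt(n)) ** 2
    adj = [[] for _ in range(n)]
    for i in range(n):                               # O(n^2) pair check, fine for a sketch
        for j in range(i + 1, n):
            dx, dy = pts[i][0] - pts[j][0], pts[i][1] - pts[j][1]
            if dx * dx + dy * dy <= r2:
                adj[i].append(j)
                adj[j].append(i)
    seen, best = [False] * n, 0
    for s in range(n):                               # DFS over components
        if seen[s]:
            continue
        stack, size = [s], 0
        seen[s] = True
        while stack:
            v = stack.pop()
            size += 1
            for u in adj[v]:
                if not seen[u]:
                    seen[u] = True
                    stack.append(u)
        best = max(best, size)
    return best / n

print(giant_component_fraction(n=2000, c=2.0))       # typically a large fraction of nodes
```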
Abstract:
We present a distributed algorithm that finds a maximal edge packing in O(Δ + log* W) synchronous communication rounds in a weighted graph, independent of the number of nodes in the network; here Δ is the maximum degree of the graph and W is the maximum weight. As a direct application, we have a distributed 2-approximation algorithm for minimum-weight vertex cover, with the same running time. We also show how to find an f-approximation of minimum-weight set cover in O(f²k² + fk log* W) rounds; here k is the maximum size of a subset in the set cover instance, f is the maximum frequency of an element, and W is the maximum weight of a subset. The algorithms are deterministic, and they can be applied in anonymous networks.
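The vertex-cover application above rests on a standard primal-dual fact: in any maximal edge packing, the vertices whose weight is fully used up by incident edges form a 2-approximate minimum-weight vertex cover. A minimal sequential sketch of this fact is given below with a made-up graph; the paper's contribution is computing such a packing distributedly in O(Δ + log* W) rounds, which the sketch does not attempt.

```python
# Minimal sequential sketch of the primal-dual fact behind the distributed
# algorithm: compute a maximal edge packing by greedily raising each edge's
# dual value to the smaller residual capacity of its endpoints; the saturated
# vertices then form a 2-approximate minimum-weight vertex cover.
# The example graph and vertex weights are illustrative.

def edge_packing_vertex_cover(weights, edges):
    """weights: {v: w_v}; edges: list of (u, v). Returns (packing, cover)."""
    residual = dict(weights)
    packing = {}
    for (u, v) in edges:
        y = min(residual[u], residual[v])    # largest feasible increase for this edge
        packing[(u, v)] = y
        residual[u] -= y
        residual[v] -= y
    # Maximality: after each step one endpoint is saturated, so saturated
    # vertices touch every edge and their total weight is at most twice OPT.
    cover = {v for v in weights if residual[v] == 0}
    return packing, cover

weights = {"a": 2, "b": 1, "c": 3, "d": 4}
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "c")]
packing, cover = edge_packing_vertex_cover(weights, edges)
print("packing:", packing)
print("cover:", cover)      # every edge has at least one endpoint in the cover
```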