903 results for internet data centers
Abstract:
Globalization, along with its digital and information communication technology counterparts, including the Internet and cyberspace, may signify a whole new era for human rights, characterized by new tensions, challenges, and risks, as well as new opportunities. Human Rights and Risks in the Digital Era: Globalization and the Effects of Information Technologies explores the emergence and evolution of 'digital' rights that challenge and transform more traditional legal, political, and historical understandings of human rights. Academic and legal scholars explore individual, national, and international democratic dilemmas, sparked by economic and environmental crises, media culture, data collection, privatization, surveillance, and security, that alter the way individuals and societies think about, regulate, and protect rights when faced with new challenges and threats. The book not only uncovers emerging changes in discussions of human rights but also proposes legal remedies and public policies to mitigate the challenges posed by new technologies and globalization.
Abstract:
Participation takes place in a living and complex environment, and traditional means of participation can only partially meet the requirements this environment sets. What is needed are forms of participation that take into account the new opportunities of the environment and the expertise of residents. Internet map applications are an important channel of participation whose potential remains in many respects unexplored and unutilized. They are commonly used to inventory perspectives and bring out concerns about an area, and only rarely to discuss solutions; interpretation is usually left to the designer. This study focuses on the evaluation and development of Internet map applications in strategic land use planning, treating the subject from both the designer's and the inhabitants' points of view. The City Planning Department of Helsinki's Esikaupunkien Renessanssi project and the associated SoftGIS survey serve as the case study. At the beginning of the study I seek to characterize the new environment in which Internet map applications are placed. The research question is what challenges and opportunities e-participation confronts in the information society, and what requirements this environment creates for the development of an application. In chapter three I evaluate how successfully these requirements are met in the Esikaupunkien Renessanssi project and examine what the application would look like if the environment and the characteristics of the project were better accounted for. The approach is experimental: I try to find new ways to take advantage of Internet maps without being too limited by current projects and studies. For example, I examine how social media and Web 2.0 opportunities can be utilized, and how the learning and evolving nature of planning may be supported in an Internet map environment. In chapter four I develop a new concept for the Esikaupunkien Renessanssi map application and present images visualizing its operation in practice, gathering the data collected in the research into a new service. The aim is to create a better application for the Esikaupunkien Renessanssi project, one that takes into account the living and complex environment of participation and responds to the threats and opportunities arising from it. The proposed outcome differs in many respects from the current survey. In the new service the role of residents is to interact and learn; the traditional standing of Internet maps and the position of the resident as a one-sided information donor are questioned. In the concept, residents innovate and make interpretations too. Influences are drawn from a number of modern applications, for example services that make use of social media. The user experience is intended to be interactive, fast, and easy: the idea is that the service keeps you up to date with planning matters, not the other way around. The service guides inhabitants toward a deeper knowledge of the project's objectives as well as the dynamics and realities that different individuals experience.
Abstract:
Diruthenium(III) compounds Ru2O(O2CAr)2(MeCN)4(PPh3)2(ClO4)2·H2O (1) and Ru2O(O2CAr)4(PPh3)2 (2) (Ar = Ph, C6H4-p-OMe) were prepared by reacting Ru2Cl(O2CAr)4 and PPh3 in MeCN and characterized by analytical and spectral data. The molecular structures of 1 with Ar = Ph and of 2 with Ar = C6H4-p-OMe were determined by X-ray crystallography. Crystal data for Ru2O(O2CPh)2(MeCN)4(PPh3)2(ClO4)2 (1a): monoclinic, C2/c, a = 27.722 (3) Å, b = 10.793 (2) Å, c = 23.445 (2) Å, β = 124.18 (1)°, V = 5803 Å3, and Z = 4. Crystal data for Ru2O(O2CC6H4-p-OMe)4(PPh3)2 (2b): orthorhombic, Pnna, a = 22.767 (5) Å, b = 22.084 (7) Å, c = 12.904 (3) Å, V = 6488 Å3, and Z = 4. Both 1 and 2 have an {Ru2O(O2CAr)2}2+ core that is analogous to the diiron core present in the oxidized form of the nonheme respiratory protein hemerythrin. The Ru-Ru distances of 3.237 (1) and 3.199 (1) Å observed in 1 and 2, respectively, are similar to the M-M distances known in other model systems. The essentially diamagnetic nature of 1 and 2 is due to the presence of two strongly interacting t2g5 Ru(III) centers. The intense colors of 1 (blue) and 2 (purple) are due to the charge-transfer transition involving an Ru2(μ-O) moiety. The presence of labile MeCN and carboxylato ancillary ligands in 1 and 2, respectively, makes these systems reactive toward amine and heterocyclic bases.
Abstract:
Prediction of variable bit rate compressed video traffic is critical to dynamic allocation of resources in a network. In this paper, we propose a technique for preprocessing the dataset used for training a video traffic predictor. The technique involves identifying the noisy instances in the data using a fuzzy inference system. We focus on three prediction techniques, namely linear regression, neural networks, and support vector regression, and analyze their performance on H.264 video traces. Our experimental results reveal that data preprocessing greatly improves the performance of linear regression and the neural network, but is not effective on support vector regression.
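As a rough illustration of the pipeline this abstract describes, the sketch below filters training instances with a soft outlier score standing in for the fuzzy inference system, then fits a linear-regression predictor. The synthetic trace, membership function, threshold, and window length are all assumptions, not details from the paper.

```python
# Minimal sketch: preprocess (drop fuzzily-flagged noisy instances), then predict.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
trace = rng.lognormal(mean=9.0, sigma=0.4, size=2000)  # stand-in for H.264 frame sizes (bytes)

WINDOW = 8  # predict the next frame size from the previous 8

def windows(x, w):
    X = np.lib.stride_tricks.sliding_window_view(x, w + 1)
    return X[:, :w], X[:, w]

X, y = windows(trace, WINDOW)

# "Fuzzy" noise score: distance of each target from its local median,
# mapped through a smooth membership function into [0, 1).
local_med = np.median(X, axis=1)
spread = np.median(np.abs(X - local_med[:, None]), axis=1) + 1e-9
noise_score = 1.0 - np.exp(-np.abs(y - local_med) / (3.0 * spread))

keep = noise_score < 0.8  # drop instances the score flags as noisy
model = LinearRegression().fit(X[keep], y[keep])
print("R^2 on all windows:", model.score(X, y))
```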
Abstract:
Users can rarely reveal their information need in full detail to a search engine within 1-2 words, so search engines need to "hedge their bets" and present diverse results within the precious 10 response slots. Diversity in ranking has attracted much recent interest. Most existing solutions estimate the marginal utility of an item given a set of items already in the response, and then use variants of greedy set cover. Others design graphs with the items as nodes and choose diverse items based on visit rates (PageRank). Here we introduce a radically new and natural formulation of diversity as finding centers in resistive graphs. Unlike in PageRank, we do not specify the edge resistances (equivalently, conductances) and ask for node visit rates. Instead, we look for a sparse set of center nodes so that the effective conductance from the center to the rest of the graph has maximum entropy. We give a cogent semantic justification for turning PageRank thus on its head. In marked deviation from prior work, our edge resistances are learnt from training data. Inference and learning are NP-hard, but we give practical solutions. In extensive experiments with subtopic retrieval, social network search, and document summarization, our approach convincingly surpasses recently published diversity algorithms like subtopic cover, max-marginal relevance (MMR), Grasshopper, DivRank, and SVMdiv.
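A toy rendering of the central idea, on invented data: effective conductances are computed from the graph Laplacian's pseudoinverse, and a candidate center is scored by the entropy of its normalized conductances to the rest of the graph. The paper's learned edge resistances, sparse center sets, and practical inference are not reproduced here.

```python
# Score candidate centers by entropy of effective conductances.
import numpy as np

def effective_resistance(L_pinv, i, j):
    return L_pinv[i, i] + L_pinv[j, j] - 2 * L_pinv[i, j]

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)  # unit edge conductances
L = np.diag(A.sum(axis=1)) - A                # graph Laplacian
L_pinv = np.linalg.pinv(L)

n = A.shape[0]
best = None
for c in range(n):
    cond = np.array([1.0 / effective_resistance(L_pinv, c, j)
                     for j in range(n) if j != c])
    p = cond / cond.sum()
    entropy = -(p * np.log(p)).sum()
    if best is None or entropy > best[1]:
        best = (c, entropy)
print("max-entropy center:", best)
```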
Abstract:
This paper primarily intends to develop a GIS (geographical information system)-based data mining approach for optimally selecting the locations and determining the installed capacities of distributed biomass power generation systems in the context of decentralized energy planning for rural regions. The optimal locations within a cluster of villages are obtained by matching the installed capacity needed with the demand for power, minimizing the cost of transporting biomass from dispersed sources to the power generation system, and the cost of distributing electricity from the power generation system to demand centers or villages. The methodology was validated by using it to develop an optimal plan for implementing distributed biomass-based power systems to meet the rural electricity needs of Tumkur district in India, consisting of 2700 villages. The approach uses a k-medoid clustering algorithm to divide the total region into clusters of villages and locate biomass power generation systems at the medoids. The optimal value of k is determined iteratively by running the algorithm over the entire search space for different values of k along with demand-supply matching constraints, and is chosen such that it minimizes the total cost of system installation, biomass transportation, and transmission and distribution. A smaller region, consisting of 293 villages, was selected to study the sensitivity of the results to varying demand and supply parameters. The results of clustering are represented on a GIS map for the region.
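A minimal k-medoids sketch in the spirit of the siting approach, with villages as points and Euclidean distance standing in for biomass transport cost; the demand-supply matching constraints and the cost-driven search over k are omitted, so this is illustrative only.

```python
# Simple k-medoids: medoids are candidate plant locations among villages.
import numpy as np

def k_medoids(points, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    medoids = rng.choice(len(points), size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(dist[:, medoids], axis=1)
        new = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            # new medoid = member minimizing total distance within its cluster
            costs = dist[np.ix_(members, members)].sum(axis=1)
            new[c] = members[np.argmin(costs)]
        if np.array_equal(new, medoids):
            break
        medoids = new
    labels = np.argmin(dist[:, medoids], axis=1)
    return medoids, labels

villages = np.random.default_rng(1).uniform(0, 100, size=(200, 2))  # km grid
medoids, labels = k_medoids(villages, k=5)
print("plant sites at villages:", medoids)
```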
Abstract:
The presence of a large number of spectral bands in hyperspectral images increases the capability to distinguish between various physical structures, but it also means the data suffer from high dimensionality. Hence, hyperspectral images are processed in two stages: dimensionality reduction followed by unsupervised classification. The high dimensionality of the data is reduced with the help of Principal Component Analysis (PCA). The selected dimensions are then classified using the Niche Hierarchical Artificial Immune System (NHAIS), which combines a splitting method that searches for the optimal cluster centers using a niching procedure with a merging method that groups the data points based on majority voting. Results are presented for two hyperspectral images, namely an EO-1 Hyperion image and the Indian Pines image. A performance comparison of the proposed hierarchical clustering algorithm with three earlier unsupervised algorithms is presented. From the results obtained, we deduce that the NHAIS is efficient.
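A sketch of the two-stage pipeline under stated substitutions: PCA for dimensionality reduction, with k-means standing in for the NHAIS clustering stage (NHAIS itself is a specialized immune-system algorithm not reproduced here); the hyperspectral cube is simulated.

```python
# Two-stage processing: PCA reduction, then unsupervised classification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
H, W, B = 64, 64, 180          # hypothetical hyperspectral cube: 180 bands
cube = rng.normal(size=(H, W, B))
pixels = cube.reshape(-1, B)   # one spectrum per pixel

scores = PCA(n_components=10).fit_transform(pixels)  # keep 10 components
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(scores)
classmap = labels.reshape(H, W)  # per-pixel class map
print(np.bincount(labels))
```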
Abstract:
In the future, anyone needing information about a location or an area, whether literature, measurement data, photos, or administrative information, may simply click on that spot on a screen map on the Internet. A search programme started in this way will offer all the information available in databanks. A step toward such a solution, the retrieval of location-related literature and measurement data from different kinds of databanks, is presented by the project "Baltic Sea Web" (http://www.baltic.vtt.fi/demonstrator/index.html). The basic idea was to make the available information about a certain location accessible by linking its geographical coordinates, longitude and latitude, to a map in a web browser.
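A toy version of coordinate-keyed retrieval as the abstract sketches it: each record carries latitude/longitude, and a map click returns everything within a small bounding box; the records and tolerance below are invented, not taken from Baltic Sea Web.

```python
# Map click -> bounding-box lookup over coordinate-tagged records.
RECORDS = [
    {"title": "Secchi depth measurements", "lat": 59.45, "lon": 24.75},
    {"title": "Benthic survey report", "lat": 60.17, "lon": 24.94},
]

def lookup(lat, lon, tol=0.25):
    return [r for r in RECORDS
            if abs(r["lat"] - lat) <= tol and abs(r["lon"] - lon) <= tol]

print(lookup(60.1, 25.0))  # a click near Helsinki
```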
Abstract:
The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) human brain networks, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a "control and optimization" point of view. After studying these three real-world networks, two abstract network problems are also explored, both motivated by power systems. The first is "flow optimization over a flow network" and the second is "nonlinear optimization over a generalized weighted graph". The results derived in this dissertation are summarized below.
Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix, describing marginal and conditional dependencies between brain regions, have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about brain connectivity. Due to the electrical properties of the brain, this problem is investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit based only on measurements. In this case, by assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when a limited number of samples are available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may find most of the circuit topology if the exact covariance matrix is well-conditioned. However, it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work is applied to the resting-state fMRI data of a number of healthy subjects.
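A sketch of the graphical-lasso step on simulated data: estimate a sparse inverse covariance from node signals and read off a graph topology. The "circuit" here is just a hand-made sparse precision matrix, and the dissertation's modification for ill-conditioned covariances is not reproduced.

```python
# Recover a sparse conditional-dependence graph with the graphical lasso.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
# Ground-truth sparse precision matrix (a small 5-node "circuit").
prec = np.array([[ 2.0, -1.0,  0.0,  0.0,  0.0],
                 [-1.0,  2.5, -1.0,  0.0,  0.0],
                 [ 0.0, -1.0,  2.5, -1.0,  0.0],
                 [ 0.0,  0.0, -1.0,  2.0, -0.5],
                 [ 0.0,  0.0,  0.0, -0.5,  1.5]])
cov = np.linalg.inv(prec)
samples = rng.multivariate_normal(np.zeros(5), cov, size=500)

model = GraphicalLasso(alpha=0.05).fit(samples)
edges = (np.abs(model.precision_) > 0.1) & ~np.eye(5, dtype=bool)
print("recovered adjacency:\n", edges.astype(int))
```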
Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes, unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
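For reference, a toy simulation of the classical fluid model being critiqued, in which every link is assumed to see the original source rates (no queueing). The Kelly-style primal dynamics, gains, and price function are illustrative choices, not the dissertation's model.

```python
# Classical single-link fluid model: dx/dt = k * (w - x * p(sum of rates)).
import numpy as np

def price(total_rate, capacity=10.0):
    return max(0.0, total_rate - capacity) / capacity  # congestion price

def simulate(n_sources=3, k=0.5, w=1.0, dt=0.01, steps=5000):
    x = np.ones(n_sources)          # source rates
    for _ in range(steps):
        p = price(x.sum())          # every source sees the same link price
        x += dt * k * (w - x * p)   # primal rate update
        x = np.maximum(x, 1e-6)
    return x

print("equilibrium rates:", simulate())
```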
Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.
Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.
Abstract:
Crustal structure in Southern California is investigated using travel times from over 200 stations and thousands of local earthquakes. The data are divided into two sets of first arrivals representing a two-layer crust. The Pg arrivals have paths that refract at depths near 10 km and the Pn arrivals refract along the Moho discontinuity. These data are used to find lateral and azimuthal refractor velocity variations and to determine refractor topography.
In Chapter 2 the Pn raypaths are modeled using linear inverse theory. This enables statistical verification that static delays, lateral slowness variations and anisotropy are all significant parameters. However, because of the inherent size limitations of inverse theory, the full array data set could not be processed and the possible resolution was limited. The tomographic backprojection algorithm developed for Chapters 3 and 4 avoids these size problems. This algorithm allows us to process the data sequentially and to iteratively refine the solution. The variance and resolution for tomography are determined empirically using synthetic structures.
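A minimal ART-style sketch of iterative tomographic backprojection of the kind described: each ray updates the slowness cells it crosses in proportion to its travel-time residual, so rays can be processed sequentially and the solution refined iteratively. The geometry, cell grid, and relaxation factor are invented for illustration.

```python
# Kaczmarz/ART iteration for travel-time tomography.
import numpy as np

def art(G, t, n_iter=20, lam=0.1):
    """G[i, j] = path length of ray i in cell j; t = travel-time residuals."""
    s = np.zeros(G.shape[1])          # slowness perturbations
    for _ in range(n_iter):
        for i in range(G.shape[0]):   # sweep rays sequentially
            g = G[i]
            resid = t[i] - g @ s
            s += lam * resid * g / (g @ g + 1e-12)
    return s

rng = np.random.default_rng(0)
G = rng.uniform(0, 1, size=(100, 25))       # 100 rays over a 5x5 cell grid
s_true = rng.normal(0, 0.05, size=25)
t = G @ s_true + rng.normal(0, 0.001, 100)  # noisy travel-time residuals
print("recovery error:", np.linalg.norm(art(G, t) - s_true))
```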
The Pg results spectacularly image the San Andreas Fault, the Garlock Fault and the San Jacinto Fault. The Mojave has slower velocities near 6.0 km/s while the Peninsular Ranges have higher velocities of over 6.5 km/s. The San Jacinto block has velocities only slightly above the Mojave velocities. It may have overthrust Mojave rocks. Surprisingly, the Transverse Ranges are not apparent at Pg depths. The batholiths in these mountains are possibly only surficial.
Pn velocities are fast in the Mojave, slow in the Southern California Peninsular Ranges and slow north of the Garlock Fault. Pn anisotropy of 2% with a NWW fast direction exists in Southern California. A region of thin crust (22 km) centers around the Colorado River where the crust has undergone basin and range type extension. Station delays see the Ventura and Los Angeles Basins but not the Salton Trough, where high velocity rocks underlie the sediments. The Transverse Ranges have a root in their eastern half but not in their western half. The Southern Coast Ranges also have a thickened crust but the Peninsular Ranges have no major root.
Abstract:
Smartphones and other powerful sensor-equipped consumer devices make it possible to sense the physical world at an unprecedented scale. Nearly 2 million Android and iOS devices are activated every day, each carrying numerous sensors and a high-speed internet connection. Whereas traditional sensor networks have typically deployed a fixed number of devices to sense a particular phenomenon, community networks can grow as additional participants choose to install apps and join the network. In principle, this allows networks of thousands or millions of sensors to be created quickly and at low cost. However, making reliable inferences about the world using so many community sensors involves several challenges, including scalability, data quality, mobility, and user privacy.
This thesis focuses on how learning at both the sensor- and network-level can provide scalable techniques for data collection and event detection. First, this thesis considers the abstract problem of distributed algorithms for data collection, and proposes a distributed, online approach to selecting which set of sensors should be queried. In addition to providing theoretical guarantees for submodular objective functions, the approach is also compatible with local rules or heuristics for detecting and transmitting potentially valuable observations. Next, the thesis presents a decentralized algorithm for spatial event detection, and describes its use in detecting strong earthquakes within the Caltech Community Seismic Network. Despite the fact that strong earthquakes are rare and complex events, and that community sensors can be very noisy, our decentralized anomaly detection approach obtains theoretical guarantees for event detection performance while simultaneously limiting the rate of false alarms.
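A centralized greedy baseline for sensor selection under a submodular objective, the abstract idea behind the data-collection part (the thesis gives a distributed, online variant with guarantees); the coverage objective and sensor layout below are invented.

```python
# Greedy selection of sensors maximizing a submodular coverage objective.
import numpy as np

rng = np.random.default_rng(0)
sensors = rng.uniform(0, 1, size=(50, 2))   # candidate sensor positions
targets = rng.uniform(0, 1, size=(200, 2))  # points we want covered

def coverage(chosen):
    """Submodular objective: targets within radius 0.15 of a chosen sensor."""
    if not chosen:
        return 0
    d = np.linalg.norm(targets[:, None] - sensors[chosen][None, :], axis=2)
    return int((d.min(axis=1) < 0.15).sum())

chosen = []
for _ in range(5):  # query budget of 5 sensors
    gains = [(coverage(chosen + [i]) - coverage(chosen), i)
             for i in range(len(sensors)) if i not in chosen]
    best_gain, best_i = max(gains)
    chosen.append(best_i)
print("chosen sensors:", chosen, "targets covered:", coverage(chosen))
```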
Abstract:
This project aims at the design and implementation of a tool for integrating the Internet quality-of-service (QoS) data published by the Spanish regulator. The tool is intended, on the one hand, to unify the different formats in which the QoS data are published and, on the other, to facilitate the preservation of the data by supporting the compilation of historical series, statistics, and reports. On the regulator's page only the data for the last 5 quarters are accessible, and previously published data do not remain available but are replaced by the most recent ones, so from the end user's point of view these data are lost. The tool proposed in this work solves this problem, in addition to unifying formats and easing access to the data of interest. The system was designed using the latest web application development technologies, so its power and the scope for future extensions are considerable.
Abstract:
Techniques are developed for estimating activity profiles in fixed bed reactors and catalyst deactivation parameters from operating reactor data. These techniques are applicable, in general, to most industrial catalytic processes. The catalytic reforming of naphthas is taken as a broad example to illustrate the estimation schemes and to bring out the physical meaning of the kinetic parameters in the estimation equations. The work is described in two parts. Part I deals with the modeling of kinetic rate expressions and the derivation of the working equations for estimation. Part II concentrates on developing various estimation techniques.
Part I: The reactions used to describe naphtha reforming are dehydrogenation and dehydroisomerization of cycloparaffins; isomerization, dehydrocyclization and hydrocracking of paraffins; and the catalyst deactivation reactions, namely coking on alumina sites and sintering of platinum crystallites. The rate expressions for the above reactions are formulated, and the effects of transport limitations on the overall reaction rates are discussed in the appendices. Moreover, various types of interaction between the metallic and acidic active centers of reforming catalysts are discussed as characterizing the different types of reforming reactions.
Part II: In catalytic reactor operation, the activity distribution along the reactor determines the kinetics of the main reaction and is needed for predicting the effect of changes in the feed state and the operating conditions on the reactor output. In the case of a monofunctional catalyst and of bifunctional catalysts in limiting conditions, the cumulative activity is sufficient for predicting steady reactor output. The estimation of this cumulative activity can be carried out easily from measurements at the reactor exit. For a general bifunctional catalytic system, the detailed activity distribution is needed for describing the reactor operation, and some approximation must be made to obtain practicable estimation schemes. This is accomplished by parametrization techniques using measurements at a few points along the reactor. Such parametrization techniques are illustrated numerically with a simplified model of naphtha reforming.
To determine long term catalyst utilization and regeneration policies, it is necessary to estimate catalyst deactivation parameters from current operating data. For a first order deactivation model with a monofunctional catalyst or with a bifunctional catalyst in special limiting circumstances, analytical techniques are presented to transform the partial differential equations to ordinary differential equations which admit more feasible estimation schemes. Numerical examples include the catalytic oxidation of butene to butadiene and a simplified model of naphtha reforming. For a general bifunctional system or in the case of a monofunctional catalyst subject to general power law deactivation, the estimation can only be accomplished approximately. The basic feature of an appropriate estimation scheme involves approximating the activity profile by certain polynomials and then estimating the deactivation parameters from the integrated form of the deactivation equation by regression techniques. Different bifunctional systems must be treated by different estimation algorithms, which are illustrated by several cases of naphtha reforming with different feed or catalyst composition.
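A sketch of the regression idea for the simplest case: for first-order deactivation da/dt = -kd a, the integrated form ln a(t) = -kd t is linear in t, so kd follows from least squares on activities inferred at the reactor exit. The rate constant, noise level, and sampling times below are illustrative assumptions.

```python
# Estimate a first-order deactivation constant from noisy activity data.
import numpy as np

kd_true = 0.02                       # 1/h, assumed deactivation constant
t = np.linspace(0, 100, 25)          # operating time, h
noise = np.random.default_rng(0).normal(0, 0.02, t.size)
a = np.exp(-kd_true * t) * np.exp(noise)  # "measured" activities

# ln a = -kd * t, so the least-squares slope gives -kd.
slope, _ = np.polyfit(t, np.log(a), 1)
print("estimated kd:", -slope)
```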
Abstract:
185 p.
Abstract:
This final degree project presents a study of different methodologies for estimating Internet access speed. The study not only analyzes the methodologies of the most widespread tools but also takes into account the main influencing factors, examining their overall effect on the results obtained. The results of this study will give the various agents involved information of interest for developing their own tools. Moreover, the conclusions of the study could lead, in the near future, to the standardization by international bodies in the sector of a unified methodology that would allow data comparisons as well as verification of service-level agreements, of interest to users, operators, and regulators.
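A minimal sketch of the measurement primitive underlying most of the surveyed methodologies: time a bulk HTTP download and report goodput. Real tools add parallel connections, warm-up discards, and server selection; the URL below is a placeholder, not from the study.

```python
# Time a bounded HTTP download and report goodput in Mbit/s.
import time
import urllib.request

def download_speed(url, max_bytes=5_000_000):
    start = time.monotonic()
    received = 0
    with urllib.request.urlopen(url) as resp:
        while received < max_bytes:
            chunk = resp.read(65536)
            if not chunk:
                break
            received += len(chunk)
    elapsed = time.monotonic() - start
    return received * 8 / elapsed / 1e6  # Mbit/s

print(download_speed("https://example.com/testfile.bin"), "Mbit/s")  # placeholder URL
```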