955 results for Partition graphique


Relevance: 10.00%

Abstract:

This paper proposes a novel protocol that uses the Internet Domain Name System (DNS) to partition Web clients into disjoint sets, each of which is associated with a single DNS server. We define an L-DNS cluster to be a grouping of Web clients that use the same Local DNS server to resolve Internet host names. We identify such clusters in real time using data obtained from a Web server in conjunction with that server's Authoritative DNS, both instrumented with an implementation of our clustering algorithm. Using these clusters, we perform measurements from four distinct Internet locations. Our results show that L-DNS clustering enables a better estimation of the proximity of a Web client to a Web server than previously proposed techniques. Thus, in a Content Distribution Network, a DNS-based scheme that redirects a request from a Web client to one of many servers based on the client's name-server coordinates (e.g., hops/latency/loss rates between the client and servers) would perform better with our algorithm.
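The clustering step can be pictured with a minimal sketch. The log formats, the field names, and the token-based join below are assumptions made for illustration; the paper's actual instrumentation of the Web server and its Authoritative DNS is not reproduced here.

```python
from collections import defaultdict

def cluster_clients(dns_log, http_log):
    """Group Web clients by their Local DNS (L-DNS) server.

    dns_log:  iterable of (token, ldns_ip) pairs, as seen by the Authoritative
              DNS when an L-DNS resolves a one-time hostname carrying the token.
    http_log: iterable of (token, client_ip) pairs, as seen by the Web server
              when the client fetches the URL carrying that token.
    Both record formats are illustrative assumptions, not the paper's protocol.
    """
    token_to_ldns = dict(dns_log)          # which resolver asked for each token
    clusters = defaultdict(set)            # ldns_ip -> set of client IPs
    for token, client_ip in http_log:
        ldns_ip = token_to_ldns.get(token)
        if ldns_ip is not None:
            clusters[ldns_ip].add(client_ip)
    return clusters

if __name__ == "__main__":
    dns_log = [("tok1", "10.0.0.53"), ("tok2", "10.0.0.53"), ("tok3", "192.0.2.53")]
    http_log = [("tok1", "198.51.100.7"), ("tok2", "198.51.100.9"), ("tok3", "203.0.113.4")]
    for ldns, clients in cluster_clients(dns_log, http_log).items():
        print(ldns, sorted(clients))
```

The join works because each one-time token is resolved by exactly one Local DNS server and fetched by exactly one client, so the pair (client IP, L-DNS IP) falls out directly.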

Relevance: 10.00%

Abstract:

Object detection can be challenging when the object class exhibits large variations. One commonly used strategy is to first partition the space of possible object variations and then train separate classifiers for each portion. However, with continuous spaces the partitions tend to be arbitrary since there are no natural boundaries (for example, consider the continuous range of human body poses). In this paper, a new formulation is proposed, where the detectors themselves are associated with continuous parameters and reside in a parameterized function space. There are two advantages of this strategy. First, a priori partitioning of the parameter space is not needed; the detectors themselves are in a parameterized space. Second, the underlying parameters for object variations can be learned from training data in an unsupervised manner. In profile face detection experiments, at a fixed false alarm number of 90, our method attains a detection rate of 75% vs. 70% for the method of Viola-Jones. In hand shape detection, at a false positive rate of 0.1%, our method achieves a detection rate of 99.5% vs. 98% for partition-based methods. In pedestrian detection, our method reduces the miss detection rate by a factor of three at a false positive rate of 1%, compared with the method of Dalal-Triggs.
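The contrast between the two strategies can be conveyed with a toy example. The linear-in-parameter weight model below is purely an illustrative assumption, not the detector family or training procedure used in the paper: it only shows how a binned detector jumps at partition boundaries while a detector living in a parameterized function space varies smoothly with the pose parameter.

```python
import numpy as np

def binned_detector(theta, bin_edges, bin_weights, x):
    """Baseline strategy: pick the fixed classifier of the bin containing theta."""
    k = np.clip(np.searchsorted(bin_edges, theta) - 1, 0, len(bin_weights) - 1)
    return float(bin_weights[k] @ x)

def parameterized_detector(theta, w0, w1, x):
    """Detector weights as a continuous function of theta in [0, 1]
    (here: linear interpolation between two anchor weight vectors)."""
    w = (1.0 - theta) * w0 + theta * w1
    return float(w @ x)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=4)                       # a feature vector
    w0, w1 = rng.normal(size=4), rng.normal(size=4)
    edges = np.array([0.0, 0.5, 1.0])            # two arbitrary bins
    bins = np.stack([w0, w1])                    # one fixed detector per bin
    for theta in (0.1, 0.49, 0.51, 0.9):
        print(theta,
              round(binned_detector(theta, edges, bins, x), 3),    # jumps at 0.5
              round(parameterized_detector(theta, w0, w1, x), 3))  # varies smoothly
```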

Relevance: 10.00%

Abstract:

The need for the ability to cluster unknown data to better understand its relationship to known data is prevalent throughout science. Besides a better understanding of the data itself or learning about a new unknown object, cluster analysis can help with processing data, data standardization, and outlier detection. Most clustering algorithms are based on known features or expectations, such as the popular partition-based, hierarchical, density-based, grid-based, and model-based algorithms. The choice of algorithm depends on many factors, including the type of data and the reason for clustering; nearly all rely on some known properties of the data being analyzed. Recently, Li et al. proposed a new universal similarity metric that needs no prior knowledge about the objects. Their similarity metric is based on the Kolmogorov complexity of objects, an object's minimal description. While the Kolmogorov complexity of an object is not computable, in "Clustering by Compression," Cilibrasi and Vitanyi use common compression algorithms to approximate the universal similarity metric and cluster objects with high success. Unfortunately, clustering using compression does not trivially extend to higher dimensions. Here we outline a method to adapt their procedure to images. We test these techniques on images of letters of the alphabet.
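The compression-based similarity that Cilibrasi and Vitanyi approximate is the Normalized Compression Distance, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) is the length of the compressed input. Below is a minimal one-dimensional sketch with zlib standing in for the compressor; extending it sensibly to images is exactly the non-trivial step the abstract addresses and is not attempted here.

```python
import zlib

def c(data: bytes) -> int:
    """Approximate Kolmogorov complexity by compressed length
    (zlib here; Cilibrasi and Vitanyi use various real-world compressors)."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance:
    (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

if __name__ == "__main__":
    x = b"the quick brown fox jumps over the lazy dog " * 20
    y = b"the quick brown fox jumps over the lazy cat " * 20
    z = bytes(range(256)) * 4
    print(round(ncd(x, y), 3))  # small: similar strings compress well together
    print(round(ncd(x, z), 3))  # larger: unrelated data shares little structure
```

Objects with low pairwise NCD are then grouped by any standard clustering routine operating on the resulting distance matrix.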

Relevance: 10.00%

Abstract:

A method for deformable shape detection and recognition is described. Deformable shape templates are used to partition the image into a globally consistent interpretation, determined in part by the minimum description length principle. Statistical shape models enforce the prior probabilities on global, parametric deformations for each object class. Once trained, the system autonomously segments deformed shapes from the background, while not merging them with adjacent objects or shadows. The formulation can be used to group image regions based on any image homogeneity predicate; e.g., texture, color, or motion. The recovered shape models can be used directly in object recognition. Experiments with color imagery are reported.
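The minimum description length criterion that guides the grouping can be illustrated with a deliberately simplified scoring rule: the description length of a candidate partition is the cost of each region's model parameters plus the cost of coding its residuals under that model. The Gaussian region model, the fixed parameter precision, and the 1-D "image" below are illustrative assumptions; the paper scores deformable shape templates, not per-region Gaussians.

```python
import numpy as np

def region_dl(values, bits_per_param=16, delta=1 / 256):
    """Description length (bits) of one region: its model parameters
    (mean and variance at an assumed fixed precision) plus the residuals
    coded to pixel precision delta under a Gaussian model."""
    n = len(values)
    var = max(np.var(values), delta ** 2)
    param_bits = 2 * bits_per_param
    residual_bits = n * (0.5 * np.log2(2 * np.pi * np.e * var) - np.log2(delta))
    return param_bits + residual_bits

def partition_dl(values, labels):
    """Total description length of a labelling of the pixels."""
    return sum(region_dl(values[labels == k]) for k in np.unique(labels))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # 1-D "image": two homogeneous segments with different means
    pixels = np.concatenate([rng.normal(0.2, 0.05, 100), rng.normal(0.8, 0.05, 100)])
    two_regions = np.repeat([0, 1], 100)
    one_region = np.zeros(200, dtype=int)
    print(round(partition_dl(pixels, two_regions), 1))  # shorter description
    print(round(partition_dl(pixels, one_region), 1))   # longer: one model fits poorly
```

The MDL preference for the two-region labelling mirrors the paper's preference for a globally consistent interpretation over merging distinct objects.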

Relevance: 10.00%

Abstract:

Adaptive Resonance Theory (ART) models are real-time neural networks for category learning, pattern recognition, and prediction. Unsupervised fuzzy ART and supervised fuzzy ARTMAP synthesize fuzzy logic and ART networks by exploiting the formal similarity between the computations of fuzzy subsethood and the dynamics of ART category choice, search, and learning. Fuzzy ART self-organizes stable recognition categories in response to arbitrary sequences of analog or binary input patterns. It generalizes the binary ART 1 model, replacing the set-theoretic intersection (∩) with the fuzzy intersection (∧), or component-wise minimum. A normalization procedure called complement coding leads to a symmetric theory in which the fuzzy intersection and the fuzzy union (∨), or component-wise maximum, play complementary roles. Complement coding preserves individual feature amplitudes while normalizing the input vector, and prevents a potential category proliferation problem. Adaptive weights start equal to one and can only decrease in time. A geometric interpretation of fuzzy ART represents each category as a box that increases in size as weights decrease. A matching criterion controls search, determining how close an input and a learned representation must be for a category to accept the input as a new exemplar. A vigilance parameter (ρ) sets the matching criterion and determines how finely or coarsely an ART system will partition inputs. High vigilance creates fine categories, represented by small boxes. Learning stops when boxes cover the input space. With fast learning, fixed vigilance, and an arbitrary input set, learning stabilizes after just one presentation of each input. A fast-commit slow-recode option allows rapid learning of rare events yet buffers memories against recoding by noisy inputs. Fuzzy ARTMAP unites two fuzzy ART networks to solve supervised learning and prediction problems. A Minimax Learning Rule controls ARTMAP category structure, conjointly minimizing predictive error and maximizing code compression. Low vigilance maximizes compression but may therefore cause very different inputs to make the same prediction. When this coarse grouping strategy causes a predictive error, an internal match tracking control process increases vigilance just enough to correct the error. ARTMAP automatically constructs a minimal number of recognition categories, or "hidden units," to meet accuracy criteria. An ARTMAP voting strategy improves prediction by training the system several times using different orderings of the input set. Voting assigns confidence estimates to competing predictions given small, noisy, or incomplete training sets. ARPA benchmark simulations illustrate fuzzy ARTMAP dynamics. The chapter also compares fuzzy ARTMAP to Salzberg's Nested Generalized Exemplar (NGE) and to Simpson's Fuzzy Min-Max Classifier (FMMC); and concludes with a summary of ART and ARTMAP applications.
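A minimal unsupervised fuzzy ART loop, restating the operations described above: complement coding, the choice function T_j = |I ∧ w_j| / (α + |w_j|), the vigilance test |I ∧ w_j| / |I| ≥ ρ, and learning that only shrinks weights. Match tracking and the supervised ARTMAP layer are omitted, and the parameter values in the demo are arbitrary.

```python
import numpy as np

def complement_code(a):
    """Complement coding I = (a, 1 - a): preserves amplitudes, |I| = M."""
    a = np.asarray(a, dtype=float)
    return np.concatenate([a, 1.0 - a])

def fuzzy_art(inputs, rho=0.75, alpha=0.001, beta=1.0):
    """Minimal fuzzy ART (unsupervised); beta = 1 gives fast learning.
    Returns the learned category weights and the category index per input."""
    categories, assignments = [], []
    for a in inputs:
        I = complement_code(a)
        # Category choice: T_j = |I ^ w_j| / (alpha + |w_j|), ^ = component-wise min
        order = sorted(range(len(categories)),
                       key=lambda j: -np.minimum(I, categories[j]).sum()
                                      / (alpha + categories[j].sum()))
        chosen = None
        for j in order:
            match = np.minimum(I, categories[j]).sum() / I.sum()
            if match >= rho:                     # vigilance test passed
                chosen = j
                break                            # otherwise reset and try next
        if chosen is None:                       # commit a new category
            categories.append(np.ones_like(I))   # weights start equal to one
            chosen = len(categories) - 1
        w = categories[chosen]
        categories[chosen] = beta * np.minimum(I, w) + (1 - beta) * w  # weights only shrink
        assignments.append(chosen)
    return categories, assignments

if __name__ == "__main__":
    data = [[0.1, 0.2], [0.15, 0.25], [0.8, 0.9], [0.82, 0.88]]
    cats, assign = fuzzy_art(data, rho=0.75)
    print(assign)   # higher rho (vigilance) would carve out finer categories
```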

Relevance: 10.00%

Abstract:

Robert Briscoe was the Dublin-born son of Lithuanian and German-Jewish immigrants. As a young man he joined Sinn Féin and was an important figure in the War of Independence due to his role as one of the IRA’s main gun-procuring agents. He took the anti-Treaty side during an internecine Civil War, mainly due to the influence of Eamon de Valera, and retained a filial devotion towards him for the rest of his life. In 1926 he was a founding member of Fianna Fáil, de Valera’s breakaway republican party, which would dominate twentieth-century Irish politics. He was first elected as a Fianna Fáil T.D. (Teachta Dála, Deputy to the Dáil) in 1927, and successfully defended his seat eleven times, becoming the first Jewish Lord Mayor of Dublin in 1956, an honour that was repeated in 1961. On this basis alone, it can be argued that Briscoe was a significant presence in an embryonic Irish political culture; however, when his role in the 1930s Jewish immigration endeavour is acknowledged, it is clear that he played a unique part in one of the most contentious political and social discourses of the pre-war years. This was reinforced when Briscoe embraced Zionism in a belated realisation that the survival of his European co-religionists could only be guaranteed if an independent Jewish state existed. This information is to a certain degree public knowledge; however, the full extent of his involvement as an immigration advocate for potential Jewish refugees, and the seniority he achieved in the New Zionist Organisation (Revisionists), has not been fully recognised. This is partly explicable because researchers have based their assessment of Briscoe on an incomplete political archive in the National Library of Ireland (NLI). The vast majority of documentation pertaining to his involvement in the immigration endeavour has not been available to scholars and remains the private property of Robert Briscoe’s son, Ben Briscoe. The lack of immigration files in the NLI was reinforced by the fact that information about Briscoe’s Revisionist engagement was donated to the Jabotinsky Institute in Tel Aviv and can only be accessed physically by visiting Israel. Therefore, even though these twin endeavours have been commented on by a number of academics, their assessments have tended to be based on an incomplete archive, which was supplemented by Briscoe’s autobiographical memoir published in 1958. This study will attempt to fill in the missing gaps in Briscoe’s complex political narrative by incorporating the rarely used private papers of Robert Briscoe, and the difficult-to-access Briscoe files in Tel Aviv. This undertaking was only possible when Mr. Ben Briscoe graciously granted me full and unrestricted access to his father’s papers, and after a month-long research trip to the Jabotinsky Institute in Tel Aviv. Access to this rarely used documentation facilitated a holistic examination of Briscoe’s complex and multifaceted political reality. It revealed the full extent of Briscoe’s political and social evolution as the Nazi-instigated Jewish emigration crisis reached catastrophic proportions. He was by turn Fianna Fáil nationalist, Jewish immigration advocate and senior Revisionist actor on a global stage. The study will examine the contrasting political and social forces that initiated each stage of Briscoe’s Zionist awakening, and in the process will fill a major gap in Irish-Jewish historiography by revealing the full extent of his Revisionist engagement.

Relevance: 10.00%

Abstract:

Flavour release from food is determined by the binding of flavours to other food ingredients and the partition of flavour molecules among different phases. Food emulsions are used as delivery systems for food flavours, and tailored structuring in emulsions provides novel means to better control flavour release. The current study investigated four structured oil-in-water emulsions with structuring in the oil phase, the oil-water interface, and the water phase. Oil phase structuring was achieved by the formation of monoglyceride (MG) liquid crystals in the oil droplets (MG structured emulsions). A structured interface was created by the adsorption of a whey protein isolate (WPI)-pectin double layer at the interface (multilayer emulsion). Water phase structured emulsions referred to emulsion filled protein gels (EFP gels), where emulsion droplets were embedded in a WPI gel network, and to emulsions with maltodextrins (MDs) of different dextrose-equivalent (DE) values. Flavour compounds with different physicochemical properties were added into the emulsions, and flavour release (release rate, headspace concentration and air-emulsion partition coefficient) was characterized by GC headspace analysis. Emulsion structures, including crystalline structure, particle size, emulsion stability, rheology, texture, and microstructures, were characterized using differential scanning calorimetry and X-ray diffraction, light scattering, a multisample analytical centrifuge, rheometry, texture analysis, and confocal laser scanning microscopy, respectively. In MG structured emulsions, MG self-assembled into liquid crystalline structures and stable β-form crystals were formed after 3 days of storage at 25 °C. The inclusion of MG crystals allowed Tween 20-stabilized emulsions to present viscoelastic properties, and it made WPI-stabilized emulsions more sensitive to changes in pH and NaCl concentration. Flavour compounds in MG structured emulsions had lower initial headspace concentrations and air-emulsion partition coefficients than those in unstructured emulsions. Flavour release can be modulated by changing MG content, oil content and oil type. WPI-pectin multilayer emulsions were stable at pH 5.0, 4.0, and 3.0, but they presented extensive creaming when subjected to salt solutions with NaCl ≥ 150 mM and when mixed with artificial salivas. An increase of pH from 5.0 to 7.0 resulted in a higher headspace concentration but an unchanged release rate, and an increase of NaCl concentration led to increased headspace concentration and release rate. The study also showed that salivas could trigger higher release of hydrophobic flavours and lower release of hydrophilic flavours. In EFP gels, increases in protein content and oil content contributed to gels with higher storage modulus and force at breaking. Flavour compounds had significantly lower release rates and air-emulsion partition coefficients in the gels than in the corresponding ungelled emulsions, and the reduction was in line with the increase of protein content. Gels with a stronger gel network but lower oil content were prepared, and lower or unaffected release rates of the flavours were observed. In emulsions containing maltodextrins, water was frozen at a much lower temperature, and emulsion stability was greatly improved when subjected to freeze-thawing. Among the different MDs, MD DE 6 offered the emulsion the highest stability. Flavours had lower air-emulsion partition coefficients in the emulsions with MDs than in the emulsion without MD. Moreover, the involvement of MDs in the emulsions allowed most flavours to retain similar release profiles before and after freeze-thaw treatment. The present study provided information on different structured emulsions as delivery systems for flavour compounds and on how food structure can be designed to modulate flavour release, which could be helpful in the development of functional foods with an improved flavour profile.
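The air-emulsion partition coefficient reported throughout this study can be pictured with a minimal static-headspace mass balance. The single-compartment assumption, the variable names, and the example numbers below are illustrative; they are not the study's calibration or measurement procedure.

```python
def air_emulsion_partition(m_total_mg, c_air_mg_per_L, v_air_L, v_emulsion_L):
    """Static-headspace mass balance for one flavour compound at equilibrium.

    m_total_mg            : flavour mass added to the vial (mg)
    c_air_mg_per_L        : measured equilibrium headspace concentration (mg/L)
    v_air_L, v_emulsion_L : headspace and emulsion volumes (L)
    Returns the air-emulsion partition coefficient K = C_air / C_emulsion.
    """
    m_air = c_air_mg_per_L * v_air_L
    c_emulsion = (m_total_mg - m_air) / v_emulsion_L
    return c_air_mg_per_L / c_emulsion

if __name__ == "__main__":
    # Hypothetical numbers: 1 mg of flavour, 10 mL emulsion under 10 mL headspace
    k = air_emulsion_partition(m_total_mg=1.0, c_air_mg_per_L=2.0,
                               v_air_L=0.010, v_emulsion_L=0.010)
    print(round(k, 4))   # lower K means more flavour retained in the emulsion
```

A structured emulsion that binds or entraps a flavour lowers the measured headspace concentration and hence K, which is the trend the abstract describes for the MG, gelled and MD systems.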

Relevance: 10.00%

Abstract:

Eleanor Roosevelt, as a renowned humanitarian, displayed an apparent inconsistency by supporting Zionist ambitions for a national homeland in Palestine while simultaneously ignoring the rights of the indigenous Palestinians. Because of this dichotomy, this dissertation explores her attitudes, her disposition and her position in light of the conflict in the region. It conveys how her particular character traits interplayed with the cultural influences prevalent in mid-century America and encouraged her empathy with the plight of European Jews after the Holocaust. As she evolved politically, initially under the tutelage of Franklin Roosevelt and latterly as a UN delegate, she outgrew the anti-Semitism of the period to become a committed Zionist. Judging the Palestinians as ‘primitives’ incapable of self-government and heartened by Jewish development, she supported the partition of Palestine in November 1947. After the 1948 Arab-Israeli war the 800,000 Palestinian refugees encamped in neighbouring Arab states threatened to destabilise the region. Her solution was to discourage repatriation and to re-settle them in Iraq – a plan that directly contravened the principles of the December 1948 Universal Declaration of Human Rights proclaimed by the UN committee she had chaired. No detailed work has been conducted on these aspects of Eleanor Roosevelt’s life; this dissertation reveals a complex person rather than a model of ‘humanitarianism’, and one whose activities cannot be so simply categorised. In the eight chapters that follow, her own thoughts are disclosed through her ‘My Day’ newspaper column, through letters to friends and to members of the public who petitioned her, and through scrutiny of her articles, books and autobiography. This information was obtained through archival research in the US and in the Netherlands and was considered against an extensive range of secondary literature. During the Cold War, to offset Soviet incursion, Eleanor Roosevelt promoted Jewish usurpation of Palestinian lands with equanimity in order that an industrious Western-style democracy would bring stability to the region. These events facilitated the exposure of a latent Orientalism and an imperialistic lien that fostered paternalism in a woman new to the nuances of international diplomacy.

Relevance: 10.00%

Abstract:

In this work we introduce a new mathematical tool for optimization of routes, topology design, and energy efficiency in wireless sensor networks. We introduce a vector field formulation that models communication in the network, and routing is performed in the direction of this vector field at every location of the network. The magnitude of the vector field at every location represents the density of the data traffic being routed through that location. We define the total communication cost in the network as the integral of a quadratic form of the vector field over the network area. With the above formulation, we introduce a mathematical machinery based on partial differential equations very similar to Maxwell's equations in electrostatic theory. We show that in order to minimize the cost, the routes should be found based on the solution of these partial differential equations. In our formulation, the sensors are sources of information and are similar to the positive charges in electrostatics; the destinations are sinks of information and are similar to negative charges; and the network is similar to a non-homogeneous dielectric medium with a variable dielectric constant (or permittivity coefficient). In one of the applications of our mathematical model based on the vector fields, we offer a scheme for energy-efficient routing. Our routing scheme is based on changing the permittivity coefficient to a higher value in the places of the network where nodes have high residual energy, and setting it to a low value in the places of the network where the nodes do not have much energy left. Our simulations show that our method gives a significant increase in network lifetime compared to the shortest path and weighted shortest path schemes. Our initial focus is on the case where there is only one destination in the network, and later we extend our approach to the case where there are multiple destinations in the network. In the case of having multiple destinations, we need to partition the network into several areas known as regions of attraction of the destinations. Each destination is responsible for collecting all messages being generated in its region of attraction. The difficulty of the optimization problem in this case is how to define the regions of attraction of the destinations and how much communication load to assign to each destination to optimize the performance of the network. We use our vector field model to solve the optimization problem for this case. We define a vector field that is conservative, and hence it can be written as the gradient of a scalar field (also known as a potential field). Then we show that in the optimal assignment of the communication load of the network to the destinations, the value of that potential field should be equal at the locations of all the destinations. Another application of our vector field model is to find the optimal locations of the destinations in the network. We show that the vector field gives the gradient of the cost function with respect to the locations of the destinations. Based on this fact, we suggest an algorithm to be applied during the design phase of a network to relocate the destinations to reduce the communication cost function. The performance of our proposed schemes is confirmed by several examples and simulation experiments. In another part of this work we focus on the notions of responsiveness and conformance of TCP traffic in communication networks.
We introduce the notion of responsiveness for TCP aggregates and define it as the degree to which a TCP aggregate reduces its sending rate to the network as a response to packet drops. We define metrics that describe the responsiveness of TCP aggregates, and suggest two methods for determining the values of these quantities. The first method is based on a test in which we drop a few packets from the aggregate intentionally and measure the resulting rate decrease of that aggregate. This kind of test is not robust to multiple simultaneous tests performed at different routers. We make the test robust to multiple simultaneous tests by using ideas from the CDMA approach to multiple access channels in communication theory. Based on this approach, we introduce tests of responsiveness for aggregates, which we call the CDMA-based Aggregate Perturbation Method (CAPM). We use CAPM to perform congestion control. A distinguishing feature of our congestion control scheme is that it maintains a degree of fairness among different aggregates. In the next step we modify CAPM to offer methods for estimating the proportion of an aggregate of TCP traffic that does not conform to protocol specifications, and hence may belong to a DDoS attack. Our methods work by intentionally perturbing the aggregate by dropping a very small number of packets from it and observing the response of the aggregate. We offer two methods for conformance testing. In the first method, we apply the perturbation tests to SYN packets being sent at the start of the TCP 3-way handshake, and we use the fact that the rate of ACK packets being exchanged in the handshake should follow the rate of perturbations. In the second method, we apply the perturbation tests to the TCP data packets and use the fact that the rate of retransmitted data packets should follow the rate of perturbations. In both methods, we use signature-based perturbations, which means packet drops are performed at a rate given by a function of time. We use the analogy of our problem with multiple access communication to find signatures. Specifically, we assign orthogonal CDMA-based signatures to different routers in a distributed implementation of our methods. As a result of orthogonality, the performance does not degrade because of cross-interference caused by simultaneously testing routers. We have shown the efficacy of our methods through mathematical analysis and extensive simulation experiments.
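The electrostatic analogy in the first part of this abstract can be sketched with a small finite-difference example: information sources act as positive charge, the destination as negative charge, residual node energy plays the role of the permittivity ε, and routing follows the resulting field D = -ε∇φ. The grid, the Gauss-Seidel solver, and the grounded boundary are simplifications for illustration, not the paper's formulation or numerical method.

```python
import numpy as np

def solve_potential(eps, rho, iters=1500):
    """Gauss-Seidel solution of div(eps * grad(phi)) = -rho on a unit grid.
    eps is the 'permittivity' (here a proxy for residual node energy);
    rho holds +1 at information sources and -1 at the destination.
    Boundary cells are held at phi = 0 (grounded boundary)."""
    n, m = rho.shape
    phi = np.zeros_like(rho, dtype=float)
    for _ in range(iters):
        for i in range(1, n - 1):
            for j in range(1, m - 1):
                # face permittivities: arithmetic mean with each neighbour
                e = 0.5 * (eps[i, j] + np.array([eps[i - 1, j], eps[i + 1, j],
                                                 eps[i, j - 1], eps[i, j + 1]]))
                nb = np.array([phi[i - 1, j], phi[i + 1, j],
                               phi[i, j - 1], phi[i, j + 1]])
                phi[i, j] = (np.dot(e, nb) + rho[i, j]) / e.sum()
    return phi

def route_direction(phi, eps, i, j):
    """Next-hop direction from the 'displacement' field D = -eps * grad(phi)."""
    gy = 0.5 * (phi[i + 1, j] - phi[i - 1, j])
    gx = 0.5 * (phi[i, j + 1] - phi[i, j - 1])
    return -eps[i, j] * np.array([gy, gx])

if __name__ == "__main__":
    n = 15
    eps = np.ones((n, n))
    eps[:, : n // 2] = 4.0                # energy-rich half of the network
    rho = np.zeros((n, n))
    rho[3, 3], rho[11, 11] = 1.0, -1.0    # one sensor, one destination
    phi = solve_potential(eps, rho)
    print(route_direction(phi, eps, 7, 7))  # routing direction at grid cell (7, 7)
```

Raising ε where residual energy is high biases the field, and hence the routes, through those regions, which is the intuition behind the energy-aware scheme described above.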

Relevance: 10.00%

Abstract:

Osmotic stress is a potent regulator of the normal function of cells that are exposed to osmotically active environments under physiologic or pathologic conditions. The ability of cells to alter gene expression and metabolic activity in response to changes in the osmotic environment provides an additional regulatory mechanism for a diverse array of tissues and organs in the human body. In addition to the activation of various osmotically- or volume-activated ion channels, osmotic stress may also act on the genome via a direct biophysical pathway. Changes in extracellular osmolality alter cell volume, and therefore, the concentration of intracellular macromolecules. In turn, intracellular macromolecule concentration is a key physical parameter affecting the spatial organization and pressurization of the nucleus. Hyper-osmotic stress shrinks the nucleus and causes it to assume a convoluted shape, whereas hypo-osmotic stress swells the nucleus to a size that is limited by stretch of the nuclear lamina and induces a smooth, round shape of the nucleus. These behaviors are consistent with a model of the nucleus as a charged core/shell structure pressurized by uneven partition of macromolecules between the nucleoplasm and the cytoplasm. These osmotically-induced alterations in the internal structure and arrangement of chromatin, as well as potential changes in the nuclear membrane and pores are hypothesized to influence gene transcription and/or nucleocytoplasmic transport. A further understanding of the biophysical and biochemical mechanisms involved in these processes would have important ramifications for a range of fields including differentiation, migration, mechanotransduction, DNA repair, and tumorigenesis.
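The volume response described above (hyper-osmotic shrinkage, hypo-osmotic swelling) is often summarized by the Boyle-van't Hoff relation, in which the osmotically active volume scales inversely with external osmolality. The sketch below uses that textbook relation with hypothetical numbers; it is not the charged core/shell nuclear model discussed in the abstract.

```python
def boyle_vant_hoff_volume(osm_ext_mOsm, v_iso_um3=500.0,
                           osm_iso_mOsm=300.0, inactive_fraction=0.3):
    """Equilibrium cell (or nuclear) volume under external osmolality
    osm_ext_mOsm: the osmotically active fraction of the isotonic volume
    scales inversely with external osmolality. The isotonic volume,
    reference osmolality and osmotically inactive fraction are
    hypothetical example values."""
    v_inactive = inactive_fraction * v_iso_um3
    v_active_iso = v_iso_um3 - v_inactive
    return v_inactive + v_active_iso * (osm_iso_mOsm / osm_ext_mOsm)

if __name__ == "__main__":
    for osm in (150, 300, 600):   # hypo-, iso-, hyper-osmotic media (mOsm)
        print(osm, round(boyle_vant_hoff_volume(osm), 1))  # swells, baseline, shrinks
```

Because intracellular macromolecules are confined to the osmotically active volume, the same relation implies that their concentration rises under hyper-osmotic stress and falls under hypo-osmotic stress, the quantity the abstract identifies as the key physical parameter.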

Relevance: 10.00%

Abstract:

We develop a model for stochastic processes with random marginal distributions. Our model relies on a stick-breaking construction for the marginal distribution of the process, and introduces dependence across locations by using a latent Gaussian copula model as the mechanism for selecting the atoms. The resulting latent stick-breaking process (LaSBP) induces a random partition of the index space, with points closer in space having a higher probability of being in the same cluster. We develop an efficient and straightforward Markov chain Monte Carlo (MCMC) algorithm for computation and discuss applications in financial econometrics and ecology. This article has supplementary material online.
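A rough sketch of the two ingredients named above: truncated stick-breaking weights for the marginal distribution, and spatially correlated latent Gaussians pushed through a copula to select atoms, so that nearby locations tend to share a cluster. The squared-exponential correlation, the truncation level, and the interval-lookup selection rule are illustrative assumptions and do not reproduce the LaSBP construction exactly.

```python
import numpy as np
from scipy.stats import norm

def stick_breaking(alpha, K, rng):
    """Truncated stick-breaking weights: v_k ~ Beta(1, alpha),
    w_k = v_k * prod_{j<k} (1 - v_j); the last break closes the stick."""
    v = rng.beta(1.0, alpha, size=K)
    v[-1] = 1.0
    return v * np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])

def correlated_atom_indices(locations, weights, length_scale, rng):
    """Spatially dependent atom selection via a Gaussian copula: nearby
    locations get correlated latent normals and therefore tend to fall in
    the same stick-breaking interval (illustrative of the LaSBP idea)."""
    d = np.abs(locations[:, None] - locations[None, :])
    cov = np.exp(-(d / length_scale) ** 2) + 1e-8 * np.eye(len(locations))
    z = rng.multivariate_normal(np.zeros(len(locations)), cov)
    u = norm.cdf(z)                        # copula: marginally Uniform(0, 1)
    breaks = np.cumsum(weights)
    return np.searchsorted(breaks, u)      # index of the interval containing u(s)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    w = stick_breaking(alpha=1.0, K=20, rng=rng)
    s = np.linspace(0.0, 1.0, 12)
    print(correlated_atom_indices(s, w, length_scale=0.3, rng=rng))
```

Shrinking the length scale makes the latent normals, and hence the cluster labels, less correlated across locations, which is the knob controlling the induced random partition of the index space.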

Relevance: 10.00%

Abstract:

The frequency and severity of extreme events are tightly associated with the variance of precipitation. As climate warms, the acceleration of the hydrological cycle is likely to enhance the variance of precipitation across the globe. However, due to the lack of an effective analysis method, the mechanisms responsible for the changes in precipitation variance are poorly understood, especially on regional scales. Our study fills this gap by formulating a variance partition algorithm, which explicitly quantifies the contributions of atmospheric thermodynamics (specific humidity) and dynamics (wind) to the changes in regional-scale precipitation variance. Taking Southeastern (SE) United States (US) summer precipitation as an example, the algorithm is applied to simulations of current and future climate by phase 5 of the Coupled Model Intercomparison Project (CMIP5) models. The analysis suggests that, compared to observations, most CMIP5 models (~60%) tend to underestimate the summer precipitation variance over the SE US during 1950–1999, primarily due to errors in the modeled dynamic processes (i.e. large-scale circulation). Among the 18 CMIP5 models analyzed in this study, six of them reasonably simulate SE US summer precipitation variance in the twentieth century and the underlying physical processes; these models are thus applied for a mechanistic study of future changes in SE US summer precipitation variance. In the future, the six models collectively project an intensification of SE US summer precipitation variance, resulting from the combined effects of atmospheric thermodynamics and dynamics. Of the two, the latter plays the more important role. Specifically, thermodynamics results in more frequent and intensified wet summers, but does not contribute to the projected increase in the frequency and intensity of dry summers. In contrast, atmospheric dynamics explains the projected enhancement in both wet and dry summers, indicating its importance in understanding future climate change over the SE US. The results suggest that the intensified SE US summer precipitation variance is not a purely thermodynamic response to greenhouse gas forcing, and cannot be explained without the contribution of atmospheric dynamics. Our analysis provides important insights into the mechanisms of SE US summer precipitation variance change. The algorithm formulated in this study can be easily applied to other regions and seasons to systematically explore the mechanisms responsible for the changes in precipitation extremes in a warming climate.
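The flavour of the variance partition can be conveyed with a schematic scalar version: write a precipitation proxy as the product of a humidity term and a dynamic term, then attribute variance to humidity fluctuations acting with mean dynamics (thermodynamic), wind fluctuations acting with mean humidity (dynamic), and a cross/nonlinear remainder. The proxy, the factorization, and the example series are assumptions for illustration; the paper's algorithm operates on the full moisture and circulation fields.

```python
import numpy as np

def partition_precip_variance(q, d):
    """Schematic partition of the variance of a precipitation proxy P = q * d,
    where q is specific humidity and d a dynamic factor (e.g. a circulation
    index). Illustrative only; not the paper's full algorithm."""
    q_bar, d_bar = q.mean(), d.mean()
    qp, dp = q - q_bar, d - d_bar
    p = q * d
    thermo = d_bar * qp                 # humidity fluctuations, mean dynamics
    dynam = q_bar * dp                  # wind fluctuations, mean humidity
    total = np.var(p)
    return {
        "total": total,
        "thermodynamic": np.var(thermo),
        "dynamic": np.var(dynam),
        "cross/nonlinear": total - np.var(thermo) - np.var(dynam),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    years = 50
    q = 0.012 + 0.001 * rng.standard_normal(years)   # specific humidity proxy
    d = 1.0 + 0.4 * rng.standard_normal(years)       # circulation proxy
    for name, value in partition_precip_variance(q, d).items():
        print(f"{name:16s} {value:.3e}")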

Relevance: 10.00%

Abstract:

Computer Aided Parallelisation Tools (CAPTools) is a toolkit designed to automate as much as possible of the process of parallelising scalar FORTRAN 77 codes. The toolkit combines a very powerful dependence analysis together with user-supplied knowledge to build an extremely comprehensive and accurate dependence graph. The initial version has been targeted at structured mesh computational mechanics codes (e.g. heat transfer, Computational Fluid Dynamics (CFD)), and the associated simple mesh decomposition paradigm is utilised in the automatic code partitioning, execution control mask generation and communication call insertion. In this, the first of a series of papers [1–3], the authors discuss the parallelisation of a number of case study codes, showing how the various component tools may be used to develop a highly efficient parallel implementation in a few hours or days. The details of the parallelisation of the TEAMKE1 CFD code are described together with the results of three other numerical codes. The resulting parallel implementations are then tested on workstation clusters using PVM and an i860-based parallel system, showing efficiencies well over 80%.
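CAPTools itself operates on FORTRAN 77 source, so the fragment below is only a language-neutral illustration (written in Python) of the simple mesh decomposition paradigm the abstract mentions: a 1-D structured mesh split into contiguous blocks, with an owner-computes mask deciding which loop iterations each processor executes. Halo data read across block boundaries is what the inserted communication calls would supply; that exchange is not shown here.

```python
def block_partition(n_cells, n_procs):
    """Split a 1-D structured mesh of n_cells into contiguous blocks,
    one per processor, as in a simple mesh-decomposition paradigm."""
    base, extra = divmod(n_cells, n_procs)
    ranges, start = [], 0
    for p in range(n_procs):
        size = base + (1 if p < extra else 0)
        ranges.append((start, start + size))      # [low, high) owned by processor p
        start += size
    return ranges

def owned(cell, my_range):
    """Execution-control mask: under the owner-computes rule, a processor
    only executes loop iterations for cells it owns; neighbouring halo
    cells read by a stencil must arrive via communication calls."""
    low, high = my_range
    return low <= cell < high

if __name__ == "__main__":
    ranges = block_partition(n_cells=100, n_procs=4)
    print(ranges)                                 # [(0, 25), (25, 50), (50, 75), (75, 100)]
    my_rank = 1
    updates = [c for c in range(100) if owned(c, ranges[my_rank])]
    print(updates[0], updates[-1])                # this processor updates cells 25..49
```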