961 results for Large Marangoni Number
Abstract:
Complex networks have recently attracted significant research attention due to their ability to model real-world phenomena. One important problem often encountered is limiting diffusive processes that spread over the network, for example mitigating the spread of pandemic disease or computer viruses. A number of problem formulations have been proposed that address such problems through desired network characteristics, such as the size of the largest network component remaining after node removal. The recently formulated critical node detection problem aims to remove a small subset of vertices from the network such that the residual network has minimum pairwise connectivity. Unfortunately, the problem is NP-hard and the number of constraints is cubic in the number of vertices, making very large instances impossible to solve with traditional mathematical programming techniques. Even approximation strategies such as dynamic programming and evolutionary algorithms are unusable for networks containing thousands to millions of vertices. A computationally efficient and simple approach is required in such circumstances, but none currently exists. In this thesis, such an algorithm is proposed. The methodology is based on a depth-first search traversal of the network and a specially designed ranking function that considers information local to each vertex. Because of the variety of network structures, a number of characteristics must be taken into consideration and combined into a single rank that measures the utility of removing each vertex. Since removing vertices sequentially changes the network structure, an efficient post-processing algorithm is also proposed to quickly re-rank the remaining vertices. Experiments are conducted on a range of common complex network models with varying numbers of vertices, as well as on real-world networks. The proposed algorithm, DFSH, is shown to be highly competitive, often outperforming existing strategies such as Google PageRank for minimizing pairwise connectivity.
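A minimal sketch of the general idea described above, assuming an undirected graph handled with networkx; the ranking function here is a simple degree-based placeholder, not the thesis's actual DFSH rank, and the pairwise-connectivity objective is computed directly from connected components.

```python
# Hedged sketch of a DFS-guided critical-node heuristic (illustrative only; the
# local ranking below uses vertex degree as a placeholder for the thesis's rank).
import networkx as nx

def pairwise_connectivity(G):
    # Number of connected vertex pairs remaining in the residual graph.
    return sum(len(c) * (len(c) - 1) // 2 for c in nx.connected_components(G))

def dfs_rank_and_remove(G, k):
    """Remove k vertices one at a time, re-ranking after each removal."""
    H = G.copy()
    removed = []
    for _ in range(k):
        # Enumerate vertices by a depth-first traversal and keep the best-scoring one.
        order = nx.dfs_preorder_nodes(H)
        best = max(order, key=lambda v: H.degree(v))  # placeholder local rank
        H.remove_node(best)
        removed.append(best)
    return removed, pairwise_connectivity(H)

if __name__ == "__main__":
    G = nx.barabasi_albert_graph(1000, 3, seed=1)
    nodes, pc = dfs_rank_and_remove(G, 20)
    print(f"removed {len(nodes)} vertices; residual pairwise connectivity = {pc}")
```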
Abstract:
In this thesis we study the properties of two large dynamic networks, the competition network of advertisers on the Google and Bing search engines and the dynamic network of friend relationships among avatars in the massively multiplayer online game (MMOG) Planetside 2. We are particularly interested in removal patterns in these networks. Our main finding is that in both networks the nodes most commonly removed are minor, nearly isolated nodes. We also investigate the process of merging two large networks, using data captured during the merger of servers of Planetside 2. We find that the original network structures do not truly merge; rather, they are gradually replaced by newcomers not associated with the original structures. In the final part of the thesis we investigate the concept of motifs in the Barabási-Albert random graph and establish bounds on the number of motifs in this graph.
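As a purely illustrative companion to the motif analysis (the thesis derives analytic bounds, which are not reproduced here), a Barabási-Albert graph can be generated and a simple motif counted empirically with networkx:

```python
# Empirically count triangle motifs in a Barabási-Albert random graph
# (illustrative experiment only; the thesis establishes theoretical bounds).
import networkx as nx

G = nx.barabasi_albert_graph(n=10_000, m=3, seed=42)
# nx.triangles gives, per node, the number of triangles through that node;
# each triangle is counted at all three of its vertices, hence the division by 3.
n_triangles = sum(nx.triangles(G).values()) // 3
print("triangles:", n_triangles)
```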
Abstract:
Many real-world optimization problems involve multiple (often conflicting) goals to be optimized concurrently, commonly referred to as multi-objective problems (MOPs). Over the past few decades, a plethora of multi-objective algorithms have been proposed, often tested on MOPs possessing two or three objectives. Unfortunately, when tasked with solving MOPs with four or more objectives, referred to as many-objective problems (MaOPs), a large majority of optimizers experience significant performance degradation. The downfall of these optimizers is that simultaneously maintaining a well-spread set of solutions and appropriate selection pressure to converge becomes difficult as the number of objectives increases. This difficulty is further compounded for large-scale MaOPs, i.e., MaOPs with large numbers of decision variables. In this thesis, we explore the challenges of many-objective optimization and propose three new promising algorithms designed to efficiently solve MaOPs. Experimental results demonstrate that the proposed optimizers perform very well, often outperforming state-of-the-art many-objective algorithms.
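The loss of selection pressure mentioned above can be illustrated with a small, self-contained experiment (not taken from the thesis): among uniformly random solutions, the fraction that are mutually non-dominated rises quickly with the number of objectives, leaving Pareto dominance alone unable to discriminate.

```python
# Illustration of collapsing selection pressure in many-objective optimization:
# the fraction of mutually non-dominated random points grows with the number of
# objectives (minimization of all objectives is assumed).
import numpy as np

def dominates(a, b):
    # a dominates b if it is no worse in every objective and better in at least one.
    return np.all(a <= b) and np.any(a < b)

def nondominated_fraction(n_points=200, n_objectives=3, seed=0):
    rng = np.random.default_rng(seed)
    pts = rng.random((n_points, n_objectives))
    count = 0
    for i in range(n_points):
        if not any(dominates(pts[j], pts[i]) for j in range(n_points) if j != i):
            count += 1
    return count / n_points

for m in (2, 5, 10):
    print(f"{m} objectives: {nondominated_fraction(n_objectives=m):.2f} non-dominated")
```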
Abstract:
Gowers, in his paper on quasi-random matrices, studies the question, posed by Babai and Sós, of whether there exists a constant $c>0$ such that every finite group $G$ possesses a product-free subset of size at least $c|G|$. By proving that, for every sufficiently large prime $p$, the group $PSL_2(\mathbb{F}_p)$ (whose order is denoted $n$) possesses no product-free subset of size $c n^{8/9}$, he answers the question in the negative. We consider the problem in the case of compact groups, and more particularly the profinite groups $SL_k(\mathbb{Z}_p)$ and $Sp_{2k}(\mathbb{Z}_p)$. The first part of this thesis is devoted to obtaining exponential lower and upper bounds for the supremal measure of product-free sets. The proof requires first establishing a lower bound on the dimension of the non-trivial representations of the finite groups $SL_k(\mathbb{Z}/(p^n\mathbb{Z}))$ and $Sp_{2k}(\mathbb{Z}/(p^n\mathbb{Z}))$. Our theorem extends the work of Landazuri and Seitz, who consider the minimal degree of representations of Chevalley groups over finite fields, while offering a simpler proof than theirs. The second part of the thesis concerns algebraic number theory. A monogenic polynomial $f$ is a monic irreducible polynomial with integer coefficients that generates a monogenic number field. For a given prime $q$, we show, using the Chebotarev density theorem, that the density of primes $p$ such that $t^q - p$ is monogenic is at least $(q-1)/q$. We also prove that, when $q=3$, the density of primes $p$ such that $\mathbb{Q}(\sqrt[3]{p})$ is not monogenic is at least $1/9$.
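For reference, the standard definition of a product-free set used throughout the first part (stated here for convenience; not quoted from the thesis):

```latex
% Standard definition, added for reference: a subset S of a group G is
% product-free when no product of two of its elements lies in S.
S \subseteq G \ \text{is product-free} \iff \forall\, x, y \in S,\ xy \notin S.
```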
Abstract:
Large-scale image mosaicing methods are in great demand among scientists who study different aspects of the seabed, and have been fostered by impressive advances in the capabilities of underwater robots in gathering optical data from the seafloor. Cost and weight constraints mean that low-cost remotely operated vehicles (ROVs) usually have a very limited number of sensors. When a low-cost robot carries out a seafloor survey using a down-looking camera, it usually follows a predetermined trajectory that provides several non-time-consecutive overlapping image pairs. Finding these pairs (a process known as topology estimation) is indispensable to obtaining globally consistent mosaics and accurate trajectory estimates, which are necessary for a global view of the surveyed area, especially when optical sensors are the only data source. This thesis presents a set of consistent methods aimed at creating large-area image mosaics from optical data obtained during surveys with low-cost underwater vehicles. First, a global alignment method developed within a feature-based image mosaicing (FIM) framework, in which nonlinear minimisation is substituted by two linear steps, is discussed. Then, a simple four-point mosaic rectifying method is proposed to reduce distortions that might occur due to lens distortion, error accumulation and the difficulties of optical imaging in an underwater medium. The topology estimation problem is addressed by means of a combined augmented-state and extended Kalman filter framework, aimed at minimising the total number of matching attempts while simultaneously obtaining the best possible trajectory. Potential image pairs are predicted by taking into account the uncertainty in the trajectory. The contribution of matching an image pair is investigated using information theory principles. Lastly, a different solution to the topology estimation problem is proposed in a bundle adjustment framework. Innovative aspects include the use of a fast image similarity criterion combined with a minimum spanning tree (MST) solution to obtain a tentative topology; this topology is then improved by attempting image matching with the pairs for which there is the most overlap evidence. Unlike previous approaches to large-area mosaicing, our framework is able to deal naturally with cases where time-consecutive images cannot be matched successfully, such as completely unordered sets. Finally, the efficiency of the proposed methods is discussed and a comparison is made with other state-of-the-art approaches, using a series of challenging datasets in underwater scenarios.
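A minimal sketch of the MST step for obtaining a tentative topology, assuming a precomputed pairwise image-similarity matrix and using SciPy's csgraph routines; this illustrates the idea, not the thesis's implementation.

```python
# Sketch of the MST step in tentative topology estimation: given a fast pairwise
# image-similarity matrix (assumed precomputed), connect all images at minimum
# total dissimilarity; the resulting edges are the first candidate pairs to match.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def tentative_topology(similarity):
    """similarity: (n, n) symmetric matrix, higher values = stronger overlap evidence."""
    cost = 1.0 - similarity              # convert similarity to a dissimilarity cost
    np.fill_diagonal(cost, 0.0)          # zero entries are treated as absent edges
    mst = minimum_spanning_tree(cost)    # sparse matrix holding the selected edges
    return [tuple(edge) for edge in np.transpose(mst.nonzero())]

# Toy usage with a random symmetric similarity matrix for six images.
sim = np.random.default_rng(0).random((6, 6))
sim = (sim + sim.T) / 2
print(tentative_topology(sim))
```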
Abstract:
Recent numerical experiments have demonstrated that the state of the stratosphere has a dynamical impact on the state of the troposphere. To account for such an effect, a number of mechanisms have been proposed in the literature, all of which amount to a large-scale adjustment of the troposphere to potential vorticity (PV) anomalies in the stratosphere. This paper analyses whether a simple PV adjustment suffices to explain the actual dynamical response of the troposphere to the state of the stratosphere, the actual response being determined by ensembles of numerical experiments run with an atmospheric general-circulation model. For this purpose, a new PV inverter is developed. It is shown that the simple PV adjustment hypothesis is inadequate. PV anomalies in the stratosphere induce, by inversion, flow anomalies in the troposphere that do not coincide spatially with the tropospheric changes determined by the numerical experiments. Moreover, the tropospheric anomalies induced by PV inversion are on a larger scale than the changes found in the numerical experiments, which are linked to the Atlantic and Pacific storm-tracks. These findings imply that the impact of the stratospheric state on the troposphere is manifested through the impact on individual synoptic-scale systems and their self-organization in the storm-tracks. Changes in these weather systems in the troposphere are not merely synoptic-scale noise on a larger-scale tropospheric response, but an integral part of the mechanism by which the state of the stratosphere impacts that of the troposphere.
Abstract:
This paper considers the potential contribution of secondary quantitative analyses of large scale surveys to the investigation of 'other' childhoods. Exploring other childhoods involves investigating the experience of young people who are unequally positioned in relation to multiple, embodied, identity locations, such as (dis)ability, 'class', gender, sexuality, ethnicity and race. Despite some possible advantages of utilising extensive databases, the paper outlines a number of methodological problems with existing surveys which tend to reinforce adultist and broader hierarchical social relations. It is contended that scholars of children's geographies could overcome some of these problematic aspects of secondary data sources by endeavouring to transform the research relations of large scale surveys. Such endeavours would present new theoretical, ethical and methodological complexities, which are briefly considered.
Abstract:
The rheological properties of dough and gluten are important for the end-use quality of flour, but there is a lack of knowledge about the relationships between fundamental and empirical tests and how these relate to flour composition and gluten quality. Dough and gluten from six breadmaking wheat qualities were subjected to a range of rheological tests. Fundamental (small-deformation) rheological characterizations (dynamic oscillatory shear and creep recovery) were performed on gluten to avoid the nonlinear influence of the starch component, whereas large-deformation tests were conducted on both dough and gluten. A number of variables from the various curves were considered and subjected to a principal component analysis (PCA) to obtain an overview of the relationships among them. The first component represented variability in protein quality, associated with elasticity and tenacity in large deformation (large positive loadings for resistance to extension and initial slope of dough and gluten extension curves recorded by the SMS/Kieffer dough and gluten extensibility rig, and for the tenacity and strain hardening index of dough measured by the Dobraszczyk/Roberts dough inflation system), the elastic character of the hydrated gluten proteins (large positive loading for the elastic modulus [G'], large negative loadings for tan delta and the steady-state compliance [J(e)(0)]), the presence of high molecular weight glutenin subunits (HMW-GS) 5+10 vs. 2+12, and a size distribution of glutenin polymers shifted toward the high-end range. The second principal component was associated with flour protein content. Certain rheological data were influenced by protein content in addition to protein quality (area under dough extension curves and dough inflation curves [W]). The approach made it possible to bridge the gap between fundamental rheological properties, empirical measurements of physical properties, protein composition, and size distribution. The interpretation of this study gave indications of the molecular basis for differences in breadmaking performance.
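A minimal sketch of the PCA step, assuming the measured variables have been assembled into a samples-by-variables matrix; the synthetic data below stand in for the actual rheological measurements, which are not reproduced here.

```python
# Sketch of the PCA step on a samples x variables table of rheological data.
# The matrix below is synthetic; in the study it would hold the measured
# extension, inflation and oscillatory parameters for the six wheat qualities.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 12))                 # 6 samples x 12 variables (synthetic placeholder)

X_std = StandardScaler().fit_transform(X)    # variables measured on very different scales
pca = PCA(n_components=2)
scores = pca.fit_transform(X_std)            # sample coordinates on PC1 and PC2
print("explained variance ratio:", pca.explained_variance_ratio_)
print("variable loadings (PC1, PC2):\n", pca.components_.T)
```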
Abstract:
Three large-deformation rheological tests, the Kieffer dough extensibility system, the D/R dough inflation system and the 2 g mixograph test, were carried out on doughs made from a large number of winter wheat lines and cultivars grown in Poland. These lines and cultivars represented a broad spread in baking performance, allowing the tests to be assessed as predictors of baking volume. The parameters most closely associated with baking volume were strain hardening index, bubble failure strain, and mixograph bandwidth at 10 min. Simple correlations with baking volume indicate that bubble failure strain and strain hardening index give the highest correlations, whilst the use of best-subsets regression, which selects the best combination of parameters, gave increased correlations, with R² = 0.865 for dough inflation parameters, R² = 0.842 for Kieffer parameters and R² = 0.760 for mixograph parameters.
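A minimal sketch of best-subsets regression as described above, assuming synthetic data and hypothetical parameter names; adjusted R² is used here as one common selection criterion, since the paper's exact criterion is not stated in the abstract.

```python
# Sketch of best-subsets regression: fit every combination of predictors and keep
# the one with the best adjusted R^2 (one common criterion; an assumption, not the
# paper's stated method). Data and parameter names below are synthetic/hypothetical.
from itertools import combinations
import numpy as np
from sklearn.linear_model import LinearRegression

def adjusted_r2(r2, n_samples, n_predictors):
    return 1.0 - (1.0 - r2) * (n_samples - 1) / (n_samples - n_predictors - 1)

def best_subset(X, y, names):
    n = X.shape[0]
    best_score, best_vars = -np.inf, None
    for k in range(1, X.shape[1] + 1):
        for cols in combinations(range(X.shape[1]), k):
            idx = list(cols)
            r2 = LinearRegression().fit(X[:, idx], y).score(X[:, idx], y)
            score = adjusted_r2(r2, n, k)
            if score > best_score:
                best_score, best_vars = score, [names[c] for c in idx]
    return best_score, best_vars

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 5))                                   # 30 doughs x 5 parameters (synthetic)
y = X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=30)   # synthetic "baking volume"
names = ["strain_hardening", "failure_strain", "bandwidth_10min", "W", "Rmax"]  # hypothetical labels
print(best_subset(X, y, names))
```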
Abstract:
Background: The large-scale production of G-protein coupled receptors (GPCRs) for functional and structural studies remains a challenge. Recent successes have been made in the expression of a range of GPCRs using Pichia pastoris as an expression host. P. pastoris has a number of advantages over other expression systems, including the ability to post-translationally modify expressed proteins, relatively low production cost, and the ability to grow to very high cell densities. Several previous studies have described the expression of GPCRs in P. pastoris using shaker flasks, which allow culturing of small volumes (500 ml) at moderate cell densities (OD600 ~15). The use of bioreactors, which allow straightforward culturing of large volumes together with optimal control of growth parameters, including pH and dissolved oxygen, to maximise cell densities and expression of the target receptors, is an attractive alternative. The aim of this study was to compare the levels of expression of the human adenosine 2A receptor (A2AR) in P. pastoris under control of a methanol-inducible promoter in both flask and bioreactor cultures. Results: Bioreactor cultures yielded an approximately five-fold increase in cell density (OD600 ~75) compared to flask cultures prior to induction, and a doubling in functional expression level per mg of membrane protein, representing a significant optimisation. Furthermore, a C-terminally truncated A2AR, terminating at residue V334, yielded the highest expression levels (200 pmol/mg) so far reported for this receptor in P. pastoris. This truncated form of the receptor was also found to be resistant to C-terminal degradation, in contrast to the WT A2AR, and is therefore more suitable for further functional and structural studies. Conclusion: Large-scale expression of the A2AR in P. pastoris bioreactor cultures results in significant increases in functional expression compared to traditional flask cultures.
Abstract:
In this paper we deal with the performance analysis of Monte Carlo algorithms for large linear algebra problems. We consider the applicability and efficiency of Markov chain Monte Carlo for large problems, i.e., problems involving matrices with a number of non-zero elements ranging between one million and one billion. We concentrate on the analysis of the almost optimal Monte Carlo (MAO) algorithm for evaluating bilinear forms of matrix powers, since these form the so-called Krylov subspaces. Results are presented comparing the performance of the robust and non-robust Monte Carlo algorithms. The algorithms are tested on large dense matrices as well as on large unstructured sparse matrices.
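A plain (non-robust, non-optimised) Monte Carlo sketch of the underlying estimator for a bilinear form of a matrix power, (v, A^k h); the MAO algorithm itself uses carefully chosen, nearly optimal densities, which are not reproduced here.

```python
# Sketch of a plain Monte Carlo estimator for the bilinear form (v, A^k h):
# random walks start at an index drawn proportionally to |v|, move with transition
# probabilities proportional to |a_ij|, and accumulate importance weights.
# This is generic MC, not the almost optimal (MAO) density choice of the paper.
import numpy as np

def mc_bilinear_form(A, v, h, k, n_walks=20_000, seed=0):
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    p0 = np.abs(v) / np.abs(v).sum()                        # initial-index distribution
    P = np.abs(A) / np.abs(A).sum(axis=1, keepdims=True)    # transition probabilities
    total = 0.0
    for _ in range(n_walks):
        i = rng.choice(n, p=p0)
        w = v[i] / p0[i]                                    # importance weight of the start
        for _ in range(k):
            j = rng.choice(n, p=P[i])
            w *= A[i, j] / P[i, j]                          # weight update along the walk
            i = j
        total += w * h[i]
    return total / n_walks

# Toy check against the exact value on a small dense matrix.
A = np.random.default_rng(1).normal(size=(50, 50)) / 50
v = np.ones(50); h = np.ones(50)
print(mc_bilinear_form(A, v, h, k=3), v @ np.linalg.matrix_power(A, 3) @ h)
```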
Abstract:
Where users are interacting in a distributed virtual environment, the actions of each user must be observed by peers with sufficient consistency and within a limited delay so as not to be detrimental to the interaction. The consistency control issue may be split into three parts: update control; consistent enactment and evolution of events; and causal consistency. The delay in the presentation of events, termed latency, depends primarily on the network propagation delay and the consistency control algorithms. The latency induced by the consistency control algorithm, in particular causal ordering, is proportional to the number of participants. This paper describes how the effect of network delays may be reduced and introduces a scalable solution that provides sufficient consistency control while minimising its effect on latency. The principles described have been developed at Reading over the past five years. Similar principles are now emerging in the simulation community through the HLA standard. This paper attempts to validate the suggested principles within the schema of distributed simulation and virtual environments and to compare and contrast them with those described in the HLA definition documents.
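As one concrete illustration of why causal-ordering overhead grows with the number of participants (a generic textbook mechanism, not necessarily the algorithm developed at Reading or mandated by HLA), the standard vector-clock delivery test carries one counter per participant on every message:

```python
# Standard vector-clock test for causal delivery of a broadcast message.
# Each message carries a vector of N counters, so metadata and comparison cost
# grow linearly with the number of participants N (generic mechanism, shown
# only to illustrate the scaling argument).
def causally_deliverable(msg_clock, sender, local_clock):
    """msg_clock, local_clock: lists of length N, one entry per participant."""
    if msg_clock[sender] != local_clock[sender] + 1:
        return False          # not the next expected message from this sender
    return all(msg_clock[k] <= local_clock[k]
               for k in range(len(local_clock)) if k != sender)

# Example: participant 0 has already seen one earlier message from participant 2.
print(causally_deliverable([0, 0, 2], sender=2, local_clock=[0, 0, 1]))  # True
```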
Abstract:
An account is given of a number of recent studies with idealised models whose aim is to further understanding of the large-scale tropical atmospheric circulation. Initial-value integrations with a model with imposed heating are used to discuss aspects of the Asian summer monsoon, including constraints on cross-equatorial flow into the monsoon. The summer descent in the Mediterranean region and on the eastern sides of the summer subtropical anticyclones is seen to be associated with the monsoons to their east. An aqua-planet GCM is used to investigate the relationship between simple SST distributions and tropical convection and circulation. The existence of strong equatorial convection and Hadley cells is found to depend sensitively on the curvature of the meridional profile of SST. Zonally confined SST maxima produce convective maxima centred to the west and suppression of convection elsewhere. Strong equatorial zonal flow changes are found in some experiments, and three mechanisms for producing these are investigated in a model with imposed heating.
Abstract:
Although current research indicates that increasing the number of options has negative effects on the cognitive ability of consumers, little attention has been given to the consequences for producers and their strategic behavior. This article tests whether a large portfolio of products is beneficial to producers by observing UK consumer response to price promotions. The article shows that discounts induce mainly segment switching (74% of the total impact), with a limited effect on stockpiling (26%) and no impact on purchase incidence. Consequently, consumers prefer to “follow the discount” rather than purchase multiple units of the same wine. This result seems to explain the current structure of the market and suggests that discounts may conflict with segment loyalty, a situation that disfavors producers, particularly in very populated segments. The results also cast doubt on the economic sustainability of competition based on intense product differentiation in the wine sector.
Abstract:
Atmospheric rivers (ARs), narrow plumes of enhanced moisture transport in the lower troposphere, are a key synoptic feature behind winter flooding in midlatitude regions. This article develops an algorithm which uses the spatial and temporal extent of the vertically integrated horizontal water vapor transport for the detection of persistent ARs (lasting 18 h or longer) in five atmospheric reanalysis products. Applying the algorithm to the different reanalyses in the vicinity of Great Britain during the winter half-years of 1980–2010 (31 years) demonstrates generally good agreement of AR occurrence between the products. The relationship between persistent AR occurrences and winter floods is demonstrated using winter peaks-over-threshold (POT) floods (with on average one flood peak per winter). In the nine study basins, the proportion of winter POT-1 floods associated with persistent ARs ranged from approximately 40% to 80%. A Poisson regression model was used to describe the relationship between the number of ARs in the winter half-years and large-scale climate variability. A significant negative dependence was found between AR totals and the Scandinavian Pattern (SCP), with a greater frequency of ARs associated with lower SCP values.
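A minimal sketch of the Poisson regression step, with synthetic placeholder data standing in for the winter AR counts and SCP index values used in the article:

```python
# Sketch of the Poisson regression relating winter AR counts to the Scandinavian
# Pattern (SCP) index. The arrays below are synthetic placeholders, generated with
# a negative SCP effect so that the fitted coefficient mimics the reported sign.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
scp_index = rng.normal(size=31)                          # one SCP value per winter half-year
ar_counts = rng.poisson(np.exp(1.5 - 0.4 * scp_index))   # synthetic persistent-AR counts

X = sm.add_constant(scp_index)                           # intercept + SCP predictor
model = sm.GLM(ar_counts, X, family=sm.families.Poisson()).fit()
print(model.summary())   # a negative SCP coefficient -> more ARs when SCP is low
```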