71 results for Modularité massive
Abstract:
Conventional encryption techniques are usually applicable to text data and often unsuited for encrypting multimedia objects, for two reasons. Firstly, the huge sizes associated with multimedia objects make conventional encryption computationally costly. Secondly, multimedia objects come with massive redundancies, which are useful in avoiding encryption of the objects in their entirety. Hence a class of encryption techniques devoted to encrypting multimedia objects such as images has been developed. These techniques make use of the fact that the data comprising multimedia objects like images can in general be segregated into two disjoint components, namely salient and non-salient. While the former component contributes to the perceptual quality of the object, the latter only adds minor details to it. In the context of images, the salient component is often much smaller in size than the non-salient component. Encryption effort is considerably reduced if only the salient component is encrypted while the other component is left unencrypted. A key challenge is to find means of achieving a desirable segregation so that the unencrypted component does not reveal any information about the object itself. In this study, an image encryption approach that uses fractal structures known as space-filling curves in order to reduce the encryption overhead is presented. In addition, the approach also enables high-quality lossy compression of images.
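As an illustration of the space-filling-curve ingredient, the following is a minimal sketch (not the paper's actual algorithm) of a Hilbert-curve scan that linearizes a 2^k x 2^k grayscale image while preserving spatial locality; the salient/non-salient segregation and the cipher applied to the salient component are omitted, and the function names are illustrative assumptions.

```python
import numpy as np

def hilbert_d2xy(order, d):
    """Map Hilbert-curve index d to (x, y) on a 2**order x 2**order grid."""
    x = y = 0
    t = d
    s = 1
    n = 1 << order
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate the quadrant when needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def hilbert_scan(img):
    """Linearize a square 2**k x 2**k image along a Hilbert curve."""
    n = img.shape[0]
    order = n.bit_length() - 1
    coords = (hilbert_d2xy(order, d) for d in range(n * n))
    return np.array([img[y, x] for x, y in coords], dtype=img.dtype)

# Example: scan an 8x8 test image. Neighbouring samples in the output stay
# spatially close, which is what makes such curves useful when separating
# coarse (salient) from fine (non-salient) content before selective encryption.
scanned = hilbert_scan(np.arange(64, dtype=np.uint8).reshape(8, 8))
```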
Abstract:
We present global multidimensional numerical simulations of the plasma that pervades the dark matter haloes of clusters, groups and massive galaxies (the intracluster medium; ICM). Observations of clusters and groups imply that such haloes are roughly in global thermal equilibrium, with heating balancing cooling when averaged over sufficiently long time- and length-scales; the ICM is, however, very likely to be locally thermally unstable. Using simple observationally motivated heating prescriptions, we show that local thermal instability (TI) can produce a multiphase medium with ~10^4 K cold filaments condensing out of the hot ICM only when the ratio of the TI time-scale in the hot plasma (t_TI) to the free-fall time-scale (t_ff) satisfies t_TI/t_ff ≲ 10. This criterion quantitatively explains why cold gas and star formation are preferentially observed in low-entropy clusters and groups. In addition, the interplay among heating, cooling and TI reduces the net cooling rate and the mass accretion rate at small radii by factors of ~100 relative to cooling-flow models. This dramatic reduction is in line with observations. The feedback efficiency required to prevent a cooling flow is ~10^-3 for clusters and decreases for lower mass haloes; supernova heating may be energetically sufficient to balance cooling in galactic haloes. We further argue that the ICM self-adjusts so that t_TI/t_ff ≳ 10 at all radii. When this criterion is not satisfied, cold filaments condense out of the hot phase and reduce the density of the ICM. These cold filaments can power the black hole and/or stellar feedback required for global thermal balance, which drives t_TI/t_ff ≳ 10. In comparison to clusters, groups have central cores with lower densities and larger radii. This can account for the deviations from self-similarity in the X-ray luminosity-temperature (L_X-T_X) relation. The high-velocity clouds observed in the Galactic halo can be due to local TI producing multiphase gas close to the virial radius if the density of the hot plasma in the Galactic halo is ≳ 10^-5 cm^-3 at large radii.
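For orientation, the time-scales entering the criterion can be written generically (up to order-unity factors and the precise density combination in the cooling function; these are textbook forms, not necessarily the paper's exact definitions) as:

```latex
t_{\rm ff} \simeq \left(\frac{2r}{g}\right)^{1/2},
\qquad
t_{\rm cool} \simeq \frac{\tfrac{3}{2}\, n k_{\rm B} T}{n_e n_i\, \Lambda(T)},
\qquad
t_{\rm TI} \sim t_{\rm cool}\ \text{(to within an order-unity factor)},
```

so that multiphase gas is expected to condense where t_TI/t_ff ≲ 10.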
Abstract:
We consider counterterms for odd dimensional holographic conformal field theories (CFTs). These counterterms are derived by demanding cutoff independence of the CFT partition function on S^d and S^1 x S^(d-1). The same choice of counterterms leads to a cutoff independent Schwarzschild black hole entropy. When treated as independent actions, these counterterm actions resemble critical theories of gravity, i.e., higher curvature gravity theories where the additional massive spin-2 modes become massless. Equivalently, in the context of AdS/CFT, these are theories where at least one of the central charges associated with the trace anomaly vanishes. Connections between these theories and logarithmic CFTs are discussed. For a specific choice of parameters, the theories arising from counterterms are nondynamical and resemble a Dirac-Born-Infeld generalization of gravity. For even dimensional CFTs, analogous counterterms cancel log-independent cutoff dependence.
Abstract:
We study the linear m = 1 counter-rotating instability in a two-component, nearly Keplerian disc. Our goal is to understand these slow modes in discs orbiting massive black holes in galactic nuclei. They are of interest not only because they are of large spatial scale and can hence dominate observations but also because they can be growing modes that are readily excited by accretion events. Self-gravity being non-local, the eigenvalue problem results in a pair of coupled integral equations, which we derive for a two-component softened gravity disc. We solve this integral eigenvalue problem numerically for various values of mass fraction in the counter-rotating component. The eigenvalues are in general complex, being real only in the absence of the counter-rotating component, or imaginary when both components have identical surface density profiles. Our main results are as follows: (i) the pattern speed appears to be non-negative, with the growth (or damping) rate being larger for larger values of the pattern speed; (ii) for a given value of the pattern speed, the growth (or damping) rate increases as the mass in the counter-rotating component increases; (iii) the number of nodes of the eigenfunctions decreases with increasing pattern speed and growth rate. Observations of lopsided brightness distributions would then be dominated by modes with the least number of nodes, which also possess the largest pattern speeds and growth rates.
Abstract:
Accurate supersymmetric spectra are required to confront data from direct and indirect searches for supersymmetry. SuSeFLAV is a numerical tool capable of computing supersymmetric spectra precisely for various supersymmetric breaking scenarios, applicable even in the presence of flavor violation. The program solves the MSSM RGEs with complete 3 x 3 flavor mixing at the 2-loop level and one-loop finite threshold corrections to all MSSM parameters, incorporating the radiative electroweak symmetry breaking conditions. The program also incorporates the Type-I seesaw mechanism with three massive right-handed neutrinos at user-defined mass scales and mixing. It also computes branching ratios of flavor violating processes such as l_j -> l_i gamma, l_j -> 3 l_i, b -> s gamma, and supersymmetric contributions to flavor conserving quantities such as (g_mu - 2). A large choice of executables suitable for various operations of the program is provided.
Program summary
Program title: SuSeFLAV
Catalogue identifier: AEOD_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOD_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public License
No. of lines in distributed program, including test data, etc.: 76552
No. of bytes in distributed program, including test data, etc.: 582787
Distribution format: tar.gz
Programming language: Fortran 95
Computer: Personal Computer, Work-Station
Operating system: Linux, Unix
Classification: 11.6
Nature of problem: Determination of masses and mixing of supersymmetric particles within the context of the MSSM with conserved R-parity, with and without the Type-I seesaw. Inter-generational mixing is considered while calculating the mass spectrum. Supersymmetry breaking parameters are taken as inputs at a high scale specified by the mechanism of supersymmetry breaking. RG equations including full inter-generational mixing are then used to evolve these parameters down to the electroweak breaking scale. The low energy supersymmetric spectrum is calculated at the scale where successful radiative electroweak symmetry breaking occurs. At the weak scale, standard model fermion masses and gauge couplings are determined including the supersymmetric radiative corrections. Once the spectrum is computed, the program proceeds to compute various lepton flavor violating observables (e.g., BR(mu -> e gamma), BR(tau -> mu gamma), etc.) at the weak scale.
Solution method: Two-loop RGEs with full 3 x 3 flavor mixing for all supersymmetry breaking parameters are used to compute the low energy supersymmetric mass spectrum. An adaptive step size Runge-Kutta method is used to solve the RGEs numerically between the high scale and the electroweak breaking scale. An iterative procedure is employed to obtain a consistent radiative electroweak symmetry breaking condition. The masses of the supersymmetric particles are computed at 1-loop order. The third generation SM particles and the gauge couplings are evaluated at 1-loop order including supersymmetric corrections. A further iteration of the full program is employed such that the SM masses and couplings are consistent with the supersymmetric particle spectrum.
Additional comments: Several executables are presented for the user.
Running time: 0.2 s on an Intel(R) Core(TM) i5 CPU 650 at 3.20 GHz.
(c) 2012 Elsevier B.V. All rights reserved.
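As a toy illustration of the RGE-running step only (not SuSeFLAV's actual two-loop, flavor-mixed system), the sketch below evolves the three MSSM gauge couplings at one loop from a high scale down to M_Z with an adaptive Runge-Kutta integrator. The beta-function coefficients b = (33/5, 1, -3) are the standard one-loop MSSM values; the high scale and boundary values are placeholder assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# One-loop MSSM beta-function coefficients for (g1, g2, g3), GUT normalization.
B = np.array([33.0 / 5.0, 1.0, -3.0])

def rge(t, g):
    """dg_i/dt = b_i * g_i**3 / (16 pi^2), with t = ln(mu)."""
    return B * g**3 / (16.0 * np.pi**2)

MZ, MHIGH = 91.19, 2.0e16            # GeV; high scale is an assumed input
g_high = np.array([0.7, 0.7, 0.7])   # illustrative boundary values at MHIGH

# Adaptive RK45 integration from ln(MHIGH) down to ln(MZ).
sol = solve_ivp(rge, [np.log(MHIGH), np.log(MZ)], g_high,
                method="RK45", rtol=1e-8, atol=1e-10)
print("g1, g2, g3 at MZ:", sol.y[:, -1])
```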
Abstract:
The rapid emergence of infectious diseases calls for immediate attention to determine practical solutions for intervention strategies. To this end, it becomes necessary to obtain a holistic view of the complex host-pathogen interactome. Advances in omics and related technology have resulted in the massive generation of data for the interacting systems at unprecedented levels of detail. Systems-level studies with the aid of mathematical tools contribute to a deeper understanding of biological systems, where intuitive reasoning alone does not suffice. In this review, we discuss different aspects of host-pathogen interactions (HPIs) and the available data resources and tools used to study them. We discuss in detail models of HPIs at various levels of abstraction, along with their applications and limitations. We also highlight a few case studies, which incorporate different modeling approaches, providing significant insights into disease. (c) 2013 Wiley Periodicals, Inc.
Abstract:
Chebyshev-inequality-based convex relaxations of Chance-Constrained Programs (CCPs) are shown to be useful for learning classifiers on massive datasets. In particular, an algorithm that integrates efficient clustering procedures and CCP approaches for computing classifiers on large datasets is proposed. The key idea is to identify high-density regions, or clusters, from the individual class-conditional densities and then use a CCP formulation to learn a classifier on the clusters. The CCP formulation ensures that most of the data points in a cluster are correctly classified by employing a Chebyshev-inequality-based convex relaxation. This relaxation depends heavily on second-order statistics. However, this formulation, and in general such relaxations that depend on the second-order moments, are susceptible to moment estimation errors. One of the contributions of the paper is to propose several formulations that are robust to such errors. In particular, a generic way of making such formulations robust to moment estimation errors is illustrated using two novel confidence sets. An important contribution is to show that when either of the confidence sets is employed, for the special case of a spherical normal distribution of clusters, the robust variant of the formulation can be posed as a second-order cone program. Empirical results show that the robust formulations achieve accuracies comparable to those obtained with true moments, even when the moment estimates are erroneous. The results also illustrate the benefits of employing the proposed methodology for robust classification of large-scale datasets.
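For context, the standard multivariate one-sided Chebyshev (Marshall-Olkin) bound turns a chance constraint on a linear classifier into a second-order cone constraint; this is the generic form of that relaxation, not necessarily the exact formulation used in the paper:

```latex
\Pr_{x \sim (\mu, \Sigma)}\!\left[w^{\top}x + b \ge 0\right] \ge \eta
\quad\text{for all distributions with moments } (\mu, \Sigma)
\;\Longleftrightarrow\;
w^{\top}\mu + b \;\ge\; \kappa\,\sqrt{w^{\top}\Sigma\, w},
\qquad
\kappa = \sqrt{\frac{\eta}{1-\eta}} .
```

Replacing the true cluster moments (mu, Sigma) by sample estimates in this constraint is exactly where the robustness issue discussed above arises.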
Abstract:
Stellar mass black holes (SMBHs), forming by the core collapse of very massive, rapidly rotating stars, are expected to exhibit a high density accretion disk around them, developed from the spinning mantle of the collapsing star. A wide class of such disks, due to their high density and temperature, are effective emitters of neutrinos and are hence called neutrino-cooled disks. Tracking the physics relating the observed (neutrino) luminosity to the mass and spin of black holes (BHs) and the accretion rate (Ṁ) of such disks, here we establish a correlation between the spin and mass of SMBHs at their formation stage. Our work shows that spinning BHs are more massive than nonspinning BHs for a given Ṁ. However, slowly spinning BHs can turn out to be more massive than rapidly spinning BHs if Ṁ at their formation stage was higher than that of the faster spinning BHs.
Abstract:
We investigate nucleosynthesis inside the gamma-ray burst (GRB) accretion disks formed by Type II collapsars. In these collapsars, the core collapse of massive stars first leads to the formation of a proto-neutron star. After that, an outward moving shock triggers a successful supernova. However, the supernova ejecta lacks momentum and within a few seconds the newly formed neutron star gets transformed into a stellar mass black hole via massive fallback. The hydrodynamics of such an accretion disk formed from the fallback material of the supernova ejecta has been studied extensively in the past. We use these well-established hydrodynamic models for our accretion disk in order to understand nucleosynthesis, which is mainly advection dominated in the outer regions. Neutrino cooling becomes important in the inner disk, where the temperature and density are higher. The higher the accretion rate Ṁ is, the higher the density and temperature are in the disks. We deal with accretion disks with relatively low accretion rates, 0.001 M_sun s^-1 ≲ Ṁ ≲ 0.01 M_sun s^-1, and hence these disks are predominantly advection dominated. We use He-rich and Si-rich abundances as the initial condition of nucleosynthesis at the outer disk, and being equipped with the disk hydrodynamics and the nuclear network code, we study the abundance evolution as matter inflows and falls into the central object. We investigate the variation in the nucleosynthesis products in the disk with the change in the initial abundance at the outer disk and also with the change in the mass accretion rate. We report the synthesis of several unusual nuclei like P-31, K-39, Sc-43, Cl-35 and various isotopes of titanium, vanadium, chromium, manganese and copper. We also confirm that isotopes of iron, cobalt, nickel, argon, calcium, sulphur and silicon get synthesized in the disk, as shown by previous authors. Many of these heavy elements thus synthesized are ejected from the disk via outflows, and hence they should leave their signature in observed data.
Abstract:
We study the structure constants of the N = 1 beta-deformed theory perturbatively and at strong coupling. We show that the planar one-loop corrections to the structure constants of single trace gauge invariant operators in the scalar sector are determined by the anomalous dimension Hamiltonian. This result implies that three-point functions of the chiral primaries of the theory do not receive corrections at one loop. We then study the structure constants at strong coupling using the Lunin-Maldacena geometry. We explicitly construct the supergravity mode dual to the chiral primary with three equal U(1) R-charges in the Lunin-Maldacena geometry. We show that the three-point function of this supergravity mode with semi-classical states representing two other similar chiral primary states, but with large U(1) charges, is independent of the beta deformation and identical to that found in the AdS_5 x S^5 geometry. This, together with the one-loop result, indicates that these structure constants are protected by a non-renormalization theorem. We also show that the three-point function of U(1) R-currents with classical massive strings is proportional to the R-charge carried by the string solution. This is in accordance with the prediction of the R-symmetry Ward identity.
Abstract:
Moore's Law has driven the semiconductor revolution, enabling over four decades of scaling in frequency, size, complexity, and power. However, the limits of physics are preventing further scaling of speed, forcing a paradigm shift towards multicore computing and parallelization. In effect, the system is taking over the role that the single CPU used to play: high-speed signals running not only through chips but also through packages and boards connect ever more complex systems. High-speed signals making their way through the entire system create new challenges in the design of computing hardware. Inductance, phase shifts and velocity-of-light effects, material resonances, and wave behavior not only become prevalent but also need to be calculated accurately and rapidly to enable short design cycle times. In essence, continuing to scale with Moore's Law requires the incorporation of Maxwell's equations in the design process. Incorporating Maxwell's equations into the design flow is only possible through the combined power that new algorithms, parallelization and high-speed computing provide. At the same time, incorporation of Maxwell-based models into circuit and system-level simulation presents a massive accuracy, passivity, and scalability challenge. In this tutorial, we navigate through the often confusing terminology and concepts behind field solvers, show how advances in field solvers enable integration into EDA flows, present novel methods for model generation and passivity assurance in large systems, and demonstrate the power of cloud computing in enabling the next generation of scalable Maxwell solvers and the next generation of Moore's Law scaling of systems. We intend to show the truly symbiotic, growing relationship between Maxwell and Moore!
Abstract:
We investigate nucleosynthesis inside the outflows from gamma-ray burst (GRB) accretion disks formed by Type II collapsars. In these collapsars, massive stars undergo core collapse to form a proto-neutron star initially, and a mild supernova (SN) explosion is driven. The SN ejecta lack momentum, and subsequently this newly formed neutron star gets transformed into a stellar mass black hole via massive fallback. The hydrodynamics and the nucleosynthesis in these accretion disks have been studied extensively in the past. Several heavy elements are synthesized in the disk, and many of these heavy elements are ejected from the disk via winds and outflows. We study nucleosynthesis in the outflows launched from these disks by using an adiabatic, spherically expanding outflow model, in order to understand which of the elements synthesized in the disk survive in the outflow. In doing so, we find that many new elements, like isotopes of titanium, copper, zinc, etc., are present in the outflows. Ni-56 is abundantly synthesized in the outflow in most of the cases, which implies that the outflows from these disks will, in a majority of cases, lead to an observable SN explosion. It is mainly present when the outflow is considered from the He-rich and Ni-56/Fe-54-rich zones of the disks. However, the outflow from the Si-rich zone of the disk remains rich in silicon. Although emission lines of many of these heavy elements have been observed in the X-ray afterglows of several GRBs by Chandra, BeppoSAX, XMM-Newton, etc., Swift does not appear to have detected these lines yet.
Abstract:
As an alternative to the gold-standard TiO2 photocatalyst, the use of zinc oxide (ZnO) as a robust candidate for wastewater treatment is widespread due to its similarity to TiO2 in charge carrier dynamics upon bandgap excitation and in the generation of reactive oxygen species in aqueous suspensions. However, the large bandgap of ZnO, the massive charge carrier recombination, and the photoinduced corrosion-dissolution at extreme pH conditions, together with the formation of inert Zn(OH)2 during photocatalytic reactions, act as barriers to its extensive applicability. To this end, research has been intensified to improve the performance of ZnO by tailoring its surface-bulk structure and by altering its photogenerated charge transfer pathways, with the intention of inhibiting surface-bulk charge carrier recombination. For the first time, the several strategies that have been successfully employed to improve the photoactivity and stability of ZnO, such as tailoring the intrinsic defects, surface modification with organic compounds, doping with foreign ions, noble metal deposition, heterostructuring with other semiconductors and modification with carbon nanostructures, are critically reviewed. Such modifications enhance the charge separation, facilitate the generation of reactive oxygenated free radicals, and improve the interaction with the pollutant molecules. The synthetic routes to obtain hierarchical nanostructured morphologies, and their impact on photocatalytic performance, are explained by considering the morphological influence and the defect-rich chemistry of ZnO. Finally, the crystal facet engineering of polar and non-polar facets and their relevance in photocatalysis are outlined. It is with this intention that the present review directs the further design, tailoring and tuning of the physico-chemical and optoelectronic properties of ZnO for better applications, ranging from photocatalysis to photovoltaics.
Abstract:
We present deep Washington photometry of 45 poorly populated star cluster candidates in the Large Magellanic Cloud (LMC). We have performed a systematic study to estimate the parameters of the cluster candidates by matching theoretical isochrones to the cleaned and dereddened cluster color-magnitude diagrams. We were able to estimate the basic parameters for 33 clusters, of which 23 are identified as single clusters and 10 are found to be members of double clusters. The other 12 cluster candidates have been classified as possible clusters/asterisms. About 50% of the true clusters are in the 100-300 Myr age range, whereas some are older or younger. We discuss the distributions of age, location, and reddening with respect to the field, as well as the sizes of the true clusters. The sizes and masses of the studied sample are found to be similar to those of open clusters in the Milky Way. Our study adds to the lower end of the cluster mass distribution in the LMC, suggesting that the LMC, apart from hosting rich clusters, has also formed small, less massive open clusters in the 100-300 Myr age range.
Abstract:
In the immediate surroundings of daily life, there are many places where energy in the form of vibration is wasted, which presents ample opportunity for harvesting it. The piezoelectric character of matter enables this mechanical vibration energy to be converted into electrical energy that can be stored and used to power other devices, instead of being wasted. This work realizes both an actuator and a sensor on a cantilever beam based on piezoelectricity; the sensor part serves as a vibration energy harvester. Numerical analyses of the cantilever beam were performed using the commercial package ANSYS and MATLAB. The cantilever beam is realized by clamping one end of a plate between two massive plates. Two PZT patches were glued to the beam, one on each of its two faces. Experiments were performed using a data acquisition (DAQ) system and LabVIEW software to actuate and sense the vibration of the cantilever beam.
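As a rough companion to the numerical analysis described above, the sketch below estimates the first bending natural frequency of a uniform cantilever from Euler-Bernoulli beam theory; the beam dimensions and material properties are placeholder assumptions, not the values used in this work.

```python
import math

# Placeholder geometry/material values (steel-like plate), not from the paper.
E = 200e9                     # Young's modulus, Pa
rho = 7850.0                  # density, kg/m^3
L, b, h = 0.20, 0.02, 0.001   # length, width, thickness, m

I = b * h**3 / 12.0           # second moment of area, m^4
A = b * h                     # cross-sectional area, m^2

# First root of the cantilever characteristic equation cos(x)cosh(x) = -1.
lam1 = 1.8751
f1 = (lam1**2 / (2.0 * math.pi)) * math.sqrt(E * I / (rho * A * L**4))
print(f"First bending natural frequency ~ {f1:.1f} Hz")
```

In practice the PZT patches, adhesive layers and tip loading shift this estimate, which is why the finite-element and experimental characterization described above are needed.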