63 results for hut foundations
Abstract:
The study of reaction mechanisms involves systematic investigations of the correlation between structure, reactivity, and time. The challenge is to be able to observe the chemical changes undergone by reactants as they change into products via one or several intermediates such as electronic excited states (singlet and triplet), radicals, radical ions, carbocations, carbanions, carbenes, nitrenes, nitrenium ions, etc. The vast array of intermediates and timescales means there is no single ``do-it-all'' technique. The simultaneous advances in contemporary time-resolved Raman spectroscopic techniques and computational methods have done much towards visualizing molecular fingerprint snapshots of the reactive intermediates in the microsecond to femtosecond time domain. Raman spectroscopy and its sensitive counterpart, resonance Raman spectroscopy, have been well proven as means for determining molecular structure, chemical bonding, reactivity, and dynamics of short-lived intermediates in the solution phase, and are advantageous in comparison to the commonly used time-resolved absorption and emission spectroscopies. Today time-resolved Raman spectroscopy is a mature technique; its development owes much to the advent of pulsed tunable lasers, highly efficient spectrometers, and high-speed, highly sensitive multichannel detectors able to collect a complete spectrum. This review article will provide a brief chronological development of the experimental setup and demonstrate how experimentalists have conquered numerous challenges to obtain background-free (removing fluorescence), intense, and highly spectrally resolved Raman spectra in the nanosecond to microsecond (ns-μs) and picosecond (ps) time domains and, perhaps surprisingly, laid the foundations for new techniques such as spatially offset Raman spectroscopy.
Abstract:
We consider a general class of timed automata parameterized by a set of “input-determined” operators, in a continuous time setting. We show that for any such set of operators, we have a monadic second order logic characterization of the class of timed languages accepted by the corresponding class of automata. Further, we consider natural timed temporal logics based on these operators, and show that they are expressively equivalent to the first-order fragment of the corresponding MSO logics. As a corollary of these general results we obtain an expressive completeness result for the continuous version of MTL.
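For concreteness (an illustrative formula of our own, not one taken from the paper): MTL refines the temporal operators of LTL with timing intervals, so that, for instance, $\Box\,(p \rightarrow \Diamond_{[0,3]}\, q)$ asserts that every occurrence of $p$ is followed by an occurrence of $q$ within 3 time units; the expressive completeness corollary says that, over continuous time, such formulas capture exactly the first-order fragment of the corresponding MSO logic.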
Abstract:
A $k$-box $B=(R_1,...,R_k)$, where each $R_i$ is a closed interval on the real line, is defined to be the Cartesian product $R_1\times R_2\times ...\times R_k$. If each $R_i$ is a unit length interval, we call $B$ a $k$-cube. Boxicity of a graph $G$, denoted as $\mathrm{box}(G)$, is the minimum integer $k$ such that $G$ is an intersection graph of $k$-boxes. Similarly, the cubicity of $G$, denoted as $\mathrm{cub}(G)$, is the minimum integer $k$ such that $G$ is an intersection graph of $k$-cubes. It was shown in [L. Sunil Chandran, Mathew C. Francis, and Naveen Sivadasan: Representing graphs as the intersection of axis-parallel cubes. MCDES-2008, IISc Centenary Conference, available at CoRR, abs/cs/0607092, 2006] that, for a graph $G$ with maximum degree $\Delta$, $\mathrm{cub}(G)\leq \lceil 4(\Delta +1)\log n\rceil$. In this paper, we show that, for a $k$-degenerate graph $G$, $\mathrm{cub}(G) \leq (k+2) \lceil 2e \log n \rceil$. Since $k$ is at most $\Delta$ and can be much lower, this is clearly a stronger result. This bound is tight. We also give an efficient deterministic algorithm that runs in $O(n^2k)$ time to output an $8k(\lceil 2.42 \log n\rceil + 1)$-dimensional cube representation for $G$. An important consequence of the above result is that if the crossing number of a graph $G$ is $t$, then $\mathrm{box}(G)$ is $O(t^{1/4}{\lceil\log t\rceil}^{3/4})$. This bound is tight up to a factor of $O((\log t)^{1/4})$. We also show that, if $G$ has $n$ vertices, then $\mathrm{cub}(G)$ is $O(\log n + t^{1/4}\log t)$. Using our bound for the cubicity of $k$-degenerate graphs, we show that the cubicity of almost all graphs in the $\mathcal{G}(n,m)$ model is $O(d_{av}\log n)$, where $d_{av}$ denotes the average degree of the graph under consideration.
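To make the definition concrete (a small illustrative sketch with hypothetical names, not code from the paper): axis-parallel boxes intersect exactly when their intervals overlap in every coordinate, so a candidate $k$-box representation of a graph can be verified directly against its edge set.

```python
from itertools import combinations

def boxes_intersect(b1, b2):
    """Axis-parallel boxes intersect iff their closed intervals overlap in every coordinate."""
    return all(lo1 <= hi2 and lo2 <= hi1 for (lo1, hi1), (lo2, hi2) in zip(b1, b2))

def is_box_representation(vertices, edges, box):
    """Check that adjacency in the graph coincides with box intersection.

    `box` maps each vertex to a tuple of (lo, hi) intervals, one per dimension.
    """
    edge_set = {frozenset(e) for e in edges}
    return all(
        (frozenset((u, v)) in edge_set) == boxes_intersect(box[u], box[v])
        for u, v in combinations(vertices, 2)
    )

# Example: a path a-b-c represented with 1-boxes (intervals), i.e. boxicity 1.
vertices = ["a", "b", "c"]
edges = [("a", "b"), ("b", "c")]
box = {"a": ((0, 1),), "b": ((1, 2),), "c": ((2, 3),)}
print(is_box_representation(vertices, edges, box))  # True
```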
Abstract:
A path in an edge colored graph is said to be a rainbow path if no two edges on the path have the same color. An edge colored graph is (strongly) rainbow connected if there exists a (geodesic) rainbow path between every pair of vertices. The (strong) rainbow connectivity of a graph G, denoted by rc(G) (respectively, src(G)), is the smallest number of colors required to edge color the graph such that G is (strongly) rainbow connected. In this paper we study the rainbow connectivity problem and the strong rainbow connectivity problem from a computational point of view. Our main results can be summarised as follows: 1) For every fixed k >= 3, it is NP-complete to decide whether src(G) <= k, even when the graph G is bipartite. 2) For every fixed odd k >= 3, it is NP-complete to decide whether rc(G) <= k. This resolves one of the open problems posed by Chakraborty et al. (J. Comb. Opt., 2011), who prove the hardness for the even case. 3) The following problem is fixed parameter tractable: Given a graph G, determine the maximum number of pairs of vertices that can be rainbow connected using two colors. 4) For a directed graph G, it is NP-complete to decide whether rc(G) <= 2.
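As a small illustration of the definition (a sketch of our own, with hypothetical names): a path is rainbow precisely when its edges carry pairwise distinct colors.

```python
def is_rainbow_path(path, edge_color):
    """A path is rainbow if no two of its edges share a color.

    `path` is a list of vertices; `edge_color` maps frozenset({u, v}) -> color.
    """
    colors = [edge_color[frozenset((u, v))] for u, v in zip(path, path[1:])]
    return len(colors) == len(set(colors))

# Example: the path a-b-c-d with edge colors 1, 2, 1 is not rainbow.
edge_color = {frozenset("ab"): 1, frozenset("bc"): 2, frozenset("cd"): 1}
print(is_rainbow_path(list("abcd"), edge_color))  # False
```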
Abstract:
Recently it has been discovered---contrary to expectations of physicists as well as biologists---that the energy transport during photosynthesis, from the chlorophyll pigment that captures the photon to the reaction centre where glucose is synthesised from carbon dioxide and water, is highly coherent even at ambient temperature and in the cellular environment. This process and the key molecular ingredients that it depends on are described. By looking at the process from the computer science viewpoint, we can study what has been optimised and how. A spatial search algorithmic model based on robust features of wave dynamics is presented.
Abstract:
Large software systems are developed by composing multiple programs. If the programs manipulate and exchange complex data, such as network packets or files, it is essential to establish that they follow compatible data formats. Most of the complexity of data formats is associated with the headers. In this paper, we address compatibility of programs operating over headers of network packets, files, images, etc. As format specifications are rarely available, we infer the format associated with headers by a program as a set of guarded layouts. In terms of these formats, we define and check compatibility of (a) producer-consumer programs and (b) different versions of producer (or consumer) programs. A compatible producer-consumer pair is free of type mismatches and logical incompatibilities such as the consumer rejecting valid outputs generated by the producer. A backward compatible producer (resp. consumer) is guaranteed to be compatible with consumers (resp. producers) that were compatible with its older version. With our prototype tool, we identified 5 known bugs and 1 potential bug in (a) sender-receiver modules of Linux network drivers of 3 vendors and (b) different versions of a TIFF image library.
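The abstract does not spell out the inferred guarded-layout formats, so the following is only a toy sketch of the compatibility checks it describes, with a format modeled crudely as the set of header layouts a program can emit or accept; all names are hypothetical and this is not the paper's approach or API.

```python
# Toy model: a "format" is the set of header layouts a program can emit (producer)
# or accept (consumer). Names are hypothetical, purely for illustration.
def producer_consumer_compatible(producer_layouts, consumer_layouts):
    """Compatible if the consumer accepts every layout the producer can generate."""
    return producer_layouts <= consumer_layouts

def backward_compatible_producer(old_layouts, new_layouts):
    """A new producer is backward compatible if it emits no layout the old one could not,
    so any consumer that handled the old producer still handles the new one."""
    return new_layouts <= old_layouts

old_producer = {("magic:u32", "len:u16"), ("magic:u32", "len:u32")}
new_producer = {("magic:u32", "len:u16")}
consumer = {("magic:u32", "len:u16"), ("magic:u32", "len:u32")}
print(producer_consumer_compatible(new_producer, consumer))      # True
print(backward_compatible_producer(old_producer, new_producer))  # True
```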
Abstract:
This paper presents the case history of the construction of a 3 m high embankment on a geocell foundation over soft settled red mud. Red mud is a waste product from the Bayer process of the aluminum industry. Geotechnical problems of the site, the design of the geocell foundation based on experimental investigation, and the construction sequence of the geocell foundations in the field are discussed in the paper. Based on the experimental studies, an analytical model was also developed to estimate the load carrying capacity of the soft clay bed reinforced with geocell and with a combination of geocell and geogrid. The results of the experimental and analytical studies revealed that using the combination of geocell and geogrid is always more beneficial than using the geocell alone. Hence, the combination of geocell and geogrid was recommended to stabilize the embankment base. The reported embankment is located in Lanjigharh (Orissa), India. Construction of the embankment on the geocell foundation has already been completed. The constructed embankment has already sustained two monsoon rains without any cracks or seepage. (C) 2013 Elsevier Ltd. All rights reserved.
Abstract:
Sacred groves are patches of forest preserved for their spiritual and religious significance. The practice gained relevance with the spread of agriculture, which caused large-scale deforestation affecting biodiversity and watersheds. Sacred groves may have lost some of their prominence nowadays, but they are still relevant in Indian rural landscapes inhabited by traditional communities. The recent rise of interest in this tradition has encouraged scientific study that, despite the pan-Indian distribution of groves, has focused on India's northeast, the Western Ghats and the east coast, either for their global/regional importance or their unique ecosystems. Most studies focused on flora, mainly angiosperms, and the faunal studies concentrated on vertebrates, while lower life forms were grossly neglected. Studies on ecosystem functioning are few, although observations are available. Most studies attributed watershed protection values to sacred groves but hardly examined hydrological processes or water yield in comparison with other land use types. Grove studies need to diversify from a stereotyped path and move towards creating credible scientific foundations for conservation. Documentation should continue in unexplored areas, but more work is needed on basic ecological functions and ecosystem dynamics to strengthen planning for scientifically sound sacred grove management.
Abstract:
We investigate the parameterized complexity of the following edge coloring problem motivated by the problem of channel assignment in wireless networks. For an integer q >= 2 and a graph G, the goal is to find a coloring of the edges of G with the maximum number of colors such that every vertex of the graph sees at most q colors. This problem is NP-hard for q >= 2, and has been well-studied from the point of view of approximation. Our main focus is the case q = 2, which is already theoretically intricate and practically relevant. We give fixed-parameter tractable algorithms for both the standard and the dual parameterization, and for the latter problem, the result is based on a linear vertex kernel.
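To make the objective concrete (an illustrative sketch, not the paper's algorithm): a feasible solution is an edge coloring in which every vertex is incident to at most q distinct colors, and the quantity to maximize is the total number of colors used.

```python
from collections import defaultdict

def colors_used(coloring):
    """Total number of distinct colors in an edge coloring {(u, v): color}."""
    return len(set(coloring.values()))

def feasible(coloring, q):
    """Every vertex may see at most q distinct colors on its incident edges."""
    seen = defaultdict(set)
    for (u, v), c in coloring.items():
        seen[u].add(c)
        seen[v].add(c)
    return all(len(cs) <= q for cs in seen.values())

# Star K_{1,3}: with q = 2 the center can see at most 2 colors,
# so only 2 colors are achievable even though there are 3 edges.
coloring = {("c", "x"): 1, ("c", "y"): 2, ("c", "z"): 2}
print(feasible(coloring, 2), colors_used(coloring))  # True 2
```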
Abstract:
The correlation clustering problem is a fundamental problem in both theory and practice, and it involves identifying clusters of objects in a data set based on their similarity. A traditional modeling of this question as a graph-theoretic problem associates vertices with data points and indicates similarity by adjacency. Clusters then correspond to cliques in the graph. The resulting optimization problem, Cluster Editing (and several variants), is very well studied algorithmically. In many situations, however, translating clusters to cliques can be somewhat restrictive. A more flexible notion would be that of a structure where the vertices are mutually ``not too far apart'', without necessarily being adjacent. One such generalization is realized by structures called s-clubs, which are graphs of diameter at most s. In this work, we study the question of finding a set of at most k edges whose removal leaves us with a graph whose components are s-clubs. Recently, it has been shown that unless the Exponential Time Hypothesis (ETH) fails, Cluster Editing (whose components are 1-clubs) does not admit a sub-exponential time algorithm [STACS, 2013]. That is, there is no algorithm solving the problem in time $2^{o(k)} n^{O(1)}$. Surprisingly, however, they show that when the number of cliques in the output graph is restricted to d, the problem can be solved in time $O(2^{O(\sqrt{dk})} + m + n)$. We show that this sub-exponential time algorithm for a fixed number of cliques is rather an exception than a rule. Our first result shows that, assuming the ETH, there is no algorithm solving the s-Club Cluster Edge Deletion problem in time $2^{o(k)} n^{O(1)}$. We show, further, that even the problem of deleting edges to obtain a graph with d s-clubs cannot be solved in time $2^{o(k)} n^{O(1)}$ for any fixed s, d >= 2. This is in stark contrast to the situation established for cliques, where sub-exponential algorithms are known.
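For intuition about the objects involved (a minimal sketch of our own, not from the paper): after a set of edges is deleted, one must check that every connected component of the remaining graph is an s-club, i.e., has diameter at most s; a BFS from each vertex of a component suffices.

```python
from collections import deque

def eccentricity(adj, source):
    """Longest shortest-path distance from `source` within its component (BFS)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist.values()), set(dist)

def components_are_s_clubs(adj, s):
    """True if every connected component has diameter at most s."""
    unseen = set(adj)
    while unseen:
        v = next(iter(unseen))
        _, comp = eccentricity(adj, v)
        if any(eccentricity(adj, u)[0] > s for u in comp):
            return False
        unseen -= comp
    return True

# A path on 4 vertices has diameter 3: it is a 3-club but not a 2-club.
adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(components_are_s_clubs(adj, 2), components_are_s_clubs(adj, 3))  # False True
```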
Abstract:
A method is presented for determining the ultimate bearing capacity of a circular footing reinforced with a horizontal circular sheet of reinforcement placed over granular and cohesive-frictional soils. It was assumed that the reinforcement sheet could bear axial tension but not bending moment. The analysis was performed based on the lower-bound theorem of limit analysis in combination with finite elements and linear optimization. The present research is an extension of recent work on strip foundations reinforced with different layers of reinforcement. To incorporate the effect of the reinforcement, the efficiency factors $\eta_\gamma$ and $\eta_c$, which need to be multiplied by the bearing capacity factors $N_\gamma$ and $N_c$, were established. Results were obtained for different values of the soil internal friction angle ($\phi$). The optimal positions of the reinforcement, which would lead to a maximum improvement in the bearing capacity, were also determined. The variation of the axial tensile force in the reinforcement sheet at different radial distances from the center was also studied. The results of the analysis were compared with those available in the literature. (C) 2014 American Society of Civil Engineers.
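As a hedged illustration of how such efficiency factors enter a design calculation (the abstract does not give the exact expression used in the paper): for an unreinforced surface footing one common form of the ultimate bearing capacity is $q_u = c N_c + \tfrac{1}{2}\gamma B N_\gamma$ (cohesion $c$, soil unit weight $\gamma$, footing width or diameter $B$), and the reinforced capacity would then be estimated as $q_{u,\mathrm{reinforced}} = \eta_c\, c\, N_c + \eta_\gamma\, \tfrac{1}{2}\gamma B N_\gamma$, with $\eta_c$ and $\eta_\gamma$ quantifying the improvement contributed by the reinforcement sheet.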
Abstract:
Mathematics is beautiful and precise and often necessary to understand complex biological phenomena. And yet biologists cannot always hope to fully understand the mathematical foundations of the theory they are using or testing. How then should biologists behave when mathematicians themselves are in dispute? Using the ongoing controversy over Hamilton's rule as an example, I argue that biologists should be free to treat mathematical theory with a healthy dose of agnosticism. In doing so, biologists should equip themselves with a disclaimer that publicly admits that they cannot entirely attest to the veracity of the mathematics underlying the theory they are using or testing. The disclaimer will only help if it is accompanied by three responsibilities: stay bipartisan in a dispute among mathematicians, stay vigilant and help expose dissent among mathematicians, and make the biology larger than the mathematics. I must emphasize that my goal here is not to take sides in the ongoing dispute over the mathematical validity of Hamilton's rule; indeed, my goal is to argue that we should refrain from taking sides.
Abstract:
Health monitoring is an integral part of laboratory animal quality standards. However, current or past prevalence data as well as regulatory requirements dictate the frequency, type and extent of health monitoring. In an effort to understand the prevalence of rodent pathogens in India, a preliminary sero-epidemiological study was carried out. Serum samples obtained from 26 public and private animal facilities were analyzed by sandwich ELISA for the presence of antibodies against minute virus of mice (MVM), ectromelia virus (ECTV), lymphocytic choriomeningitis virus (LCMV), mouse hepatitis virus (MHV), Sendai virus (SeV), and Mycoplasma pulmonis in mice, and against SeV, rat parvovirus (RPV), Kilham rat virus (KRV) and sialodacryoadenitis virus (SDAV) in rats. It was observed that MHV was the most prevalent agent, followed by Mycoplasma pulmonis and MVM in mice, while SDAV followed by RPV were prevalent in rats. On the other hand, none of the samples were positive for ECTV in mice, or for SeV or KRV in rats. Multiple infections were common in both mice and rats. The incidence of MHV and Mycoplasma pulmonis was higher in facilities maintained by public organizations than in vivaria of private organizations, although the difference was not statistically significant. On the other hand, the prevalence of rodent pathogens was significantly higher in the northern part of India than in the south. These studies form the groundwork for detailed sero-prevalence studies, which should further lay the foundations for country-specific guidelines for the health monitoring of laboratory animals.
Abstract:
The problem of scaling up data integration, such that new sources can be quickly utilized as they are discovered, remains elusive: global schemas for integrated data are difficult to develop and expand, and schema and record matching techniques are limited by the fact that data and metadata are often under-specified and must be disambiguated by data experts. One promising approach is to avoid using a global schema, and instead to develop keyword search-based data integration, where the system lazily discovers associations enabling it to join together matches to keywords, and returns ranked results. The user is expected to understand the data domain and provide feedback about answers' quality. The system generalizes such feedback to learn how to correctly integrate data. A major open challenge is that under this model, the user only sees and offers feedback on a few ``top'' results: this result set must be carefully selected to include answers of high relevance and answers that are highly informative when feedback is given on them. Existing systems merely focus on predicting relevance, by composing the scores of various schema and record matching algorithms. In this paper, we show how to predict the uncertainty associated with a query result's score, as well as how informative feedback is on a given result. We build upon these foundations to develop an active learning approach to keyword search-based data integration, and we validate the effectiveness of our solution over real data from several very different domains.
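Purely as an illustration of the idea of trading off predicted relevance against uncertainty when choosing which results to show for feedback (not the paper's actual model; the names and weights below are hypothetical):

```python
def selection_score(relevance, uncertainty, explore_weight=1.0):
    """Rank results by predicted relevance plus a bonus for uncertain answers,
    since feedback on uncertain answers tends to be the most informative."""
    return relevance + explore_weight * uncertainty

results = [
    {"id": "r1", "relevance": 0.90, "uncertainty": 0.05},
    {"id": "r2", "relevance": 0.60, "uncertainty": 0.40},
    {"id": "r3", "relevance": 0.55, "uncertainty": 0.10},
]
top = sorted(results, key=lambda r: selection_score(r["relevance"], r["uncertainty"]), reverse=True)
print([r["id"] for r in top])  # ['r2', 'r1', 'r3']
```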
Abstract:
This paper presents a lower bound limit analysis approach for solving an axisymmetric stability problem by using the Drucker-Prager (D-P) yield cone in conjunction with finite elements and nonlinear optimization. In principal stress space, the tip of the yield cone has been smoothened by applying a hyperbolic approximation. The nonlinear optimization has been performed by employing an interior point method based on the logarithmic barrier function. A new proposal has also been given to simulate the D-P yield cone with the Mohr-Coulomb hexagonal yield pyramid. For the sake of illustration, bearing capacity factors $N_c$, $N_q$ and $N_\gamma$ have been computed, as a function of $\phi$, both for smooth and rough circular foundations. The results obtained from the analysis compare quite well with the solutions reported in the literature.
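For context, and only as an illustrative reconstruction (the abstract does not state the exact expressions used): the Drucker-Prager criterion can be written as $f = \sqrt{J_2} + \alpha I_1 - k = 0$, whose apex on the hydrostatic axis ($J_2 = 0$) is non-smooth; a hyperbolic approximation replaces $\sqrt{J_2}$ by $\sqrt{J_2 + a^2}$ for a small smoothing parameter $a$, giving $f = \sqrt{J_2 + a^2} + \alpha I_1 - k = 0$, which is differentiable everywhere and recovers the original cone as $a \to 0$, so that gradient-based nonlinear optimization needs no special treatment of the tip.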