43 results for problems with child neglect reporting
at Indian Institute of Science - Bangalore - India
Abstract:
The accelerated rate of increase in atmospheric CO2 concentration in recent years has revived the idea of stabilizing the global climate through geoengineering schemes. The majority of the proposed geoengineering schemes attempt to reduce the amount of solar radiation absorbed by our planet. Climate modelling studies of these so-called 'sunshade geoengineering schemes' show that global warming from increasing concentrations of CO2 can be mitigated by intentionally manipulating the amount of sunlight absorbed by the climate system. These studies also suggest that the residual changes could be large on regional scales, so that climate change may not be mitigated on a local basis. More recent modelling studies have shown that these schemes could lead to a slow-down in the global hydrological cycle. Other problems, such as changes in the terrestrial carbon cycle and ocean acidification, remain unsolved by sunshade geoengineering schemes. In this article, I review the proposed geoengineering schemes and results from climate models, and discuss why geoengineering is not the best option for dealing with climate change.
Abstract:
Aircraft pursuit-evasion encounters in a plane with variable speeds are analysed as a differential game. An engagement-dependent coordinate system confers open-loop optimality on the game. Each aircraft's optimal motion can be represented by extremal trajectory maps which are independent of role, adversary and capture radius. These maps are used in two different ways to construct the feedback solution. Some examples are given to illustrate these features. The paper draws on earlier results and surveys several existing papers on the subject.
Abstract:
In linear elastic fracture mechanics (LEFM), Irwin's crack closure integral (CCI) is one of the significant concepts for the estimation of strain energy release rates (SERR) G, in individual as well as mixed-mode configurations. For effective utilization of this concept in conjunction with the finite element method (FEM), Rybicki and Kanninen [Engng Fracture Mech. 9, 931-938 (1977)] proposed simple and direct estimations of the CCI in terms of nodal forces and displacements in the elements forming the crack tip, obtained from a single finite element analysis instead of the conventional two-configuration analysis. These modified CCI (MCCI) expressions are basically element dependent, and a systematic derivation of them using element stress and displacement distributions is required. In the present work, a general procedure is given for the derivation of MCCI expressions in 3D problems with cracks. Further, a concept of sub-area integration is proposed which facilitates evaluation of the SERR at a large number of points along the crack front without refining the finite element mesh. Numerical data are presented for two standard problems: a thick centre-cracked tension specimen and a semi-elliptical surface crack in a thick slab. Estimates of the stress intensity factor based on MCCI expressions corresponding to eight-noded brick elements are obtained and compared with available results in the literature.
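The Rybicki-Kanninen idea can be sketched in its simplest two-dimensional, mode-I form (the paper's 3D MCCI expressions for brick elements are more involved); the function and all numerical values below are illustrative, not taken from the paper:

```python
def vcct_mode1_serr(F_y, delta_v, da, thickness=1.0):
    """Crack closure estimate of the mode-I SERR from ONE finite element
    analysis (Rybicki-Kanninen, 2D four-noded elements):
        G_I = F_y * delta_v / (2 * da * t)
    F_y:     nodal force at the crack tip, normal to the crack plane
    delta_v: relative opening displacement of the node pair just behind the tip
    da:      element length along the crack front (the virtual crack extension)
    """
    return F_y * delta_v / (2.0 * da * thickness)

# Illustrative numbers only (consistent units assumed):
G_I = vcct_mode1_serr(F_y=120.0, delta_v=2.0e-4, da=0.5e-3)
```

The appeal of the method is visible here: both inputs come from the same mesh and the same single analysis, so no second (closed-crack) configuration needs to be solved.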
Abstract:
We consider the two-parameter Sturm–Liouville system $$ -y_1''+q_1y_1=(\lambda r_{11}+\mu r_{12})y_1\quad\text{on }[0,1], $$ with the boundary conditions $$ \frac{y_1'(0)}{y_1(0)}=\cot\alpha_1\quad\text{and}\quad\frac{y_1'(1)}{y_1(1)}=\frac{a_1\lambda+b_1}{c_1\lambda+d_1}, $$ and $$ -y_2''+q_2y_2=(\lambda r_{21}+\mu r_{22})y_2\quad\text{on }[0,1], $$ with the boundary conditions $$ \frac{y_2'(0)}{y_2(0)} =\cot\alpha_2\quad\text{and}\quad\frac{y_2'(1)}{y_2(1)}=\frac{a_2\mu+b_2}{c_2\mu+d_2}, $$ subject to the uniform-left-definite and uniform-ellipticity conditions; where $q_{i}$ and $r_{ij}$ are continuous real valued functions on $[0,1]$, the angle $\alpha_{i}$ is in $[0,\pi)$ and $a_{i}$, $b_{i}$, $c_{i}$, $d_{i}$ are real numbers with $\delta_{i}=a_{i}d_{i}-b_{i}c_{i}>0$ and $c_{i}\neq0$ for $i,j=1,2$. Results are given on asymptotics, oscillation of eigenfunctions and location of eigenvalues.
Abstract:
We study a system of ordinary differential equations linked by parameters and subject to boundary conditions depending on parameters. We assume certain definiteness conditions on the coefficient functions and on the boundary conditions that yield, in the corresponding abstract setting, a right-definite case. We give results on location of the eigenvalues and oscillation of the eigenfunctions.
Abstract:
We develop a quadratic $C^0$ interior penalty method for linear fourth order boundary value problems with essential and natural boundary conditions of the Cahn-Hilliard type. Both a priori and a posteriori error estimates are derived. The performance of the method is illustrated by numerical experiments.
Abstract:
This paper presents a singular edge-based smoothed finite element method (sES-FEM) for mechanics problems with singular stress fields of arbitrary order. The sES-FEM uses a basic mesh of three-noded linear triangular (T3) elements and a special layer of five-noded singular triangular elements (sT5) connected to the singular point of the stress field. The sT5 element has an additional node on each of the two edges connected to the singular point. This allows a simple and efficient enrichment of the displacement field near the singular point with the desired terms, while satisfying the partition-of-unity property. The stiffness matrix of the discretized system is then obtained using the assumed displacement values (not the derivatives) over smoothing domains associated with the edges of elements. An adaptive procedure for the sES-FEM is proposed to enhance the quality of the solution with a minimized number of nodes. Several numerical examples are provided to validate the reliability of the present sES-FEM.
Abstract:
Transductive SVM (TSVM) is a well-known semi-supervised large margin learning method for binary text classification. In this paper we extend this method to multi-class and hierarchical classification problems. We point out that the determination of labels of unlabeled examples with fixed classifier weights is a linear programming problem. We devise an efficient technique for solving it. The method is applicable to general loss functions. We demonstrate the value of the new method using large margin loss on a number of multi-class and hierarchical classification datasets. For maxent loss we show empirically that our method is better than expectation regularization/constraint and posterior regularization methods, and competitive with the version of the entropy regularization method which uses label constraints.
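The key observation above can be made concrete: with classifier weights fixed, choosing labels for the unlabeled examples under a class-balance constraint is a linear program, and for equal class counts it reduces to an assignment problem. A minimal sketch with hypothetical scores (using a generic `scipy` solver, not the authors' specialized technique):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n, k = 6, 3                           # unlabeled examples, classes
scores = rng.normal(size=(n, k))      # fixed-classifier decision values (made up)
loss = np.maximum(0.0, 1.0 - scores)  # hinge-style loss if example i takes class j

# Balance constraint: each class receives exactly n // k examples.
# Give every class n // k "slots" and solve the resulting assignment
# problem exactly; the LP relaxation of this transportation problem
# has integral optimal vertices, so no rounding is needed.
slots_per_class = n // k
cost = np.repeat(loss, slots_per_class, axis=1)   # shape (n, n)
rows, cols = linear_sum_assignment(cost)
labels = cols[np.argsort(rows)] // slots_per_class
```

In an actual TSVM loop this label-assignment step would alternate with re-training the classifier on the newly labeled examples.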
Abstract:
This paper presents a simple technique for reducing the computational effort in solving any geotechnical stability problem using upper-bound finite element limit analysis and linear optimization. In the proposed method, the problem domain is discretized into a number of different regions, in each of which a particular order (number of sides) of the polygon is chosen to linearize the Mohr-Coulomb yield criterion. A higher-order polygon needs to be selected only in those regions where the plastic strain rates are higher. The computational effort required to solve the problem with this implementation is reduced considerably. Using the proposed method, the bearing capacity has been computed for smooth and rough strip footings, and the results are found to be quite satisfactory.
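The linearization step itself can be sketched for a circular yield surface (a stand-in for the Mohr-Coulomb circle in a stress plane); a p-sided tangent polygon replaces the quadratic constraint with p linear ones, which is what makes linear programming applicable. All names below are illustrative:

```python
import numpy as np

def polygon_constraints(p, R=1.0):
    """Linearize the circular constraint x^2 + y^2 <= R^2 with p half-planes
    (tangent planes with unit outward normals), as done when linearizing a
    Mohr-Coulomb yield circle for linear-programming limit analysis."""
    theta = 2.0 * np.pi * np.arange(p) / p
    A = np.column_stack([np.cos(theta), np.sin(theta)])  # unit normals
    b = np.full(p, R)                                    # A @ x <= b
    return A, b

# Every point on the circle satisfies all p constraints; the gap between
# polygon and circle shrinks as p grows, so a high p is only worth paying
# for where plastic strain rates are large.
A, b = polygon_constraints(12)
x = np.array([np.cos(0.3), np.sin(0.3)])    # a point on the unit circle
```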
Abstract:
The Exact Cover problem takes a universe U of n elements, a family F of m subsets of U and a positive integer k, and decides whether there exists a subfamily (set cover) F' of size at most k such that each element is covered by exactly one set. The Unique Cover problem takes the same input and decides whether there is a subfamily F' of F such that at least k of the elements F' covers are covered uniquely (by exactly one set). Both problems are known to be NP-complete. In the parameterized setting, when parameterized by k, Exact Cover is W[1]-hard. While Unique Cover is FPT under the same parameter, it is known not to admit a polynomial kernel under standard complexity-theoretic assumptions. In this paper, we investigate these two problems under the assumption that every set satisfies a given geometric property Π. Specifically, we consider the universe to be a set of n points in a real space R^d, d being a positive integer. When d = 2 we consider the problem when Π requires all sets to be unit squares or lines. When d > 2, we consider the problem where Π requires all sets to be hyperplanes in R^d. These special versions of the problems are also known to be NP-complete. When parameterized by k, the Unique Cover problem has a polynomial-size kernel for all the above geometric versions. The Exact Cover problem turns out to be W[1]-hard for squares, but FPT for lines and hyperplanes. Further, we also consider the Unique Set Cover problem, which takes the same input and decides whether there is a set cover which covers at least k elements uniquely. To the best of our knowledge, this is a new problem, and we show that it is NP-complete (even for the case of lines). In fact, the problem turns out to be W[1]-hard in the abstract setting, when parameterized by k. However, when we restrict ourselves to the lines and hyperplanes versions, we obtain FPT algorithms.
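For intuition, the Exact Cover decision is easy to state in code; a brute-force check (feasible only for toy instances, given NP-completeness) with an illustrative instance:

```python
from itertools import combinations

def exact_cover(universe, family, k):
    """Decide whether some subfamily of size at most k covers every
    element of the universe exactly once (exhaustive search)."""
    for r in range(1, k + 1):
        for sub in combinations(family, r):
            counts = {}
            for s in sub:
                for e in s:
                    counts[e] = counts.get(e, 0) + 1
            if all(counts.get(e, 0) == 1 for e in universe):
                return True
    return False

U = {1, 2, 3, 4}
F = [{1, 2}, {3, 4}, {1, 3}, {2, 4}, {4}]
exact_cover(U, F, 2)   # True: {1, 2} and {3, 4} partition U
```

The parameterized results above say that when the sets are lines or hyperplanes, this exponential search can be replaced by an algorithm whose running time is exponential only in k, not in the input size.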
Abstract:
The development of techniques for scaling up classifiers so that they can be applied to problems with large training datasets is one of the objectives of data mining. Recently, AdaBoost has become popular in the machine learning community thanks to its promising results across a variety of applications. However, training AdaBoost on large datasets is a major problem, especially when the dimensionality of the data is very high. This paper discusses the effect of high dimensionality on the training process of AdaBoost. Two preprocessing options for reducing dimensionality, namely principal component analysis and random projection, are briefly examined. Random projection subject to a probabilistic length-preserving transformation is explored further as a computationally light preprocessing step. The experimental results obtained demonstrate the effectiveness of the proposed training process for handling high-dimensional large datasets.
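The length-preserving property that makes random projection attractive is the Johnson-Lindenstrauss phenomenon, and it is cheap to demonstrate; dimensions and data below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n, d, k = 100, 2000, 200               # samples, original dim, reduced dim
X = rng.normal(size=(n, d))            # stand-in for a high-dimensional dataset

# Gaussian random projection scaled by 1/sqrt(k): with high probability it
# approximately preserves pairwise Euclidean distances, so a booster trained
# on X_low sees roughly the same geometry at a tenth of the dimensionality.
P = rng.normal(size=(d, k)) / np.sqrt(k)
X_low = X @ P

ratio = np.linalg.norm(X_low[0] - X_low[1]) / np.linalg.norm(X[0] - X[1])
# ratio concentrates around 1 as k grows
```

Unlike principal component analysis, the projection matrix is data-independent, so the preprocessing cost is a single matrix multiply with no eigendecomposition.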
Abstract:
A continuum method of analysis is presented in this paper for the problem of a smooth rigid pin in a finite composite plate subjected to uniaxial loading. The pin could be of interference, push or clearance fit. The plate is idealized to an orthotropic sheet. As the load on the plate is progressively increased, the contact along the pin-hole interface is partial above certain load levels in all three types of fit. In misfit pins (interference or clearance), such situations result in mixed boundary value problems with moving boundaries and in all of them the arc of contact and the stress and displacement fields vary nonlinearly with the applied load. In infinite domains similar problems were analysed earlier by ‘inverse formulation’ and, now, the same approach is selected for finite plates. Finite outer domains introduce analytical complexities in the satisfaction of boundary conditions. These problems are circumvented by adopting a method in which the successive integrals of boundary error functions are equated to zero. Numerical results are presented which bring out the effects of the rectangular geometry and the orthotropic property of the plate. The present solutions are the first step towards the development of special finite elements for fastener joints.
Abstract:
The “partition method” or “sub-domain method” consists of expressing the solution of a governing differential equation, partial or ordinary, in terms of functions which satisfy the boundary conditions and setting to zero the error in the differential equation integrated over each of the sub-domains into which the given domain is partitioned. In this paper, the use of this method in eigenvalue problems with particular reference to vibration of plates is investigated. The deflection of the plate is expressed in terms of polynomials satisfying the boundary conditions completely. Setting the integrated error in each of the subdomains to zero results in a set of simultaneous, linear, homogeneous, algebraic equations in the undetermined coefficients of the deflection series. The algebraic eigenvalue problem is then solved for eigenvalues and eigenvectors. Convergence is examined in a few typical cases and is found to be satisfactory. The results obtained are compared with existing results based on other methods and are found to be in very good agreement.
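The mechanics of the method can be shown on a one-dimensional analogue: for the string equation $y''+\lambda y=0$, $y(0)=y(1)=0$ (exact smallest eigenvalue $\pi^2$), expand in polynomials $x^j(1-x)$ that satisfy the boundary conditions and set the residual integrated over each subdomain to zero. This is an illustrative sketch, not the plate formulation of the paper:

```python
import numpy as np

m = 4                                   # trial functions = number of subdomains
edges = np.linspace(0.0, 1.0, m + 1)    # equal subdomains of [0, 1]

# phi_j(x) = x^j (1 - x), j = 1..m, satisfies y(0) = y(1) = 0 exactly.
def dphi(j, x):                         # phi_j'(x), for integrating phi_j''
    return j * x**(j - 1) - (j + 1) * x**j

def int_phi(j, a, b):                   # integral of phi_j over [a, b]
    F = lambda x: x**(j + 1) / (j + 1) - x**(j + 2) / (j + 2)
    return F(b) - F(a)

# Integrated residual of y'' + lam*y over subdomain i, for trial function j:
#   [phi_j'(b) - phi_j'(a)] + lam * int_phi(j, a, b) = 0
A = np.array([[dphi(j, edges[i + 1]) - dphi(j, edges[i])
               for j in range(1, m + 1)] for i in range(m)])
B = np.array([[int_phi(j, edges[i], edges[i + 1])
               for j in range(1, m + 1)] for i in range(m)])

# A c + lam * B c = 0: an algebraic eigenvalue problem in the coefficients c.
lams = np.linalg.eigvals(np.linalg.solve(B, -A))
real = lams.real[(np.abs(lams.imag) < 1e-8) & (lams.real > 0)]
lam_min = np.sort(real)[0]              # approximates pi**2
```

For the plate problem the abstract describes, the deflection series and the 2D subdomain integrals play the roles of `phi_j` and `int_phi` here, and the same simultaneous, linear, homogeneous equations arise.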
Abstract:
We consider the problem of scheduling semiconductor burn-in operations, where burn-in ovens are modelled as batch processing machines. Most studies assume that the ready times and due dates of jobs are agreeable (i.e., $r_i < r_j$ implies $d_i \le d_j$). In many real-world applications, the agreeable property assumption does not hold. Therefore, in this paper, scheduling of a single burn-in oven with non-agreeable release times and due dates, non-identical job sizes and non-identical processing times is formulated as a non-linear (0-1) integer programming optimisation problem. The objective measure of the problem is minimising the maximum completion time (makespan) of all jobs. Due to computational intractability, we propose four variants of a two-phase greedy heuristic algorithm. Computational experiments indicate that two of the four proposed algorithms have excellent average performance and are also capable of solving large-scale real-life problems with relatively low computational effort on a Pentium IV computer.
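The batching core of such a heuristic can be sketched with one simple greedy phase (hypothetical, not one of the paper's four variants): jobs are first-fit packed into batches subject to the oven capacity, and a batch occupies the oven for as long as its longest job.

```python
def greedy_batches(jobs, capacity):
    """Greedy batching for a single batch processing machine.
    jobs: list of (size, processing_time); a batch's total size must not
    exceed capacity, and its processing time is the max over its jobs.
    First-fit in non-increasing processing-time order, then the makespan
    is the sum of batch processing times (one oven, batches run back to back)."""
    batches = []  # each batch: [total_size, max_proc_time, job_list]
    for size, proc in sorted(jobs, key=lambda j: -j[1]):
        for b in batches:
            if b[0] + size <= capacity:      # fits: add job to this batch
                b[0] += size
                b[1] = max(b[1], proc)
                b[2].append((size, proc))
                break
        else:                                # fits nowhere: open a new batch
            batches.append([size, proc, [(size, proc)]])
    makespan = sum(b[1] for b in batches)
    return batches, makespan

jobs = [(4, 10), (3, 7), (5, 9), (2, 4)]     # illustrative (size, time) pairs
batches, cmax = greedy_batches(jobs, capacity=8)
```

Grouping long jobs together is what keeps the makespan down: a short job placed in a batch with a long one is processed "for free". Release times and due dates, which the paper's formulation also handles, are ignored in this sketch.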