952 results for Non-Archimedean Real Closed Fields
Abstract:
A unit cube in k dimensions (k-cube) is defined as the Cartesian product R_1 × R_2 × … × R_k, where each R_i (1 ≤ i ≤ k) is a closed interval of the form [a_i, a_i + 1] on the real line. A graph G on n nodes is said to be representable as the intersection of k-cubes (cube representation in k dimensions) if each vertex of G can be mapped to a k-cube such that two vertices are adjacent in G if and only if their corresponding k-cubes have a non-empty intersection. The cubicity of G, denoted cub(G), is the minimum k for which G can be represented as the intersection of k-cubes. An interesting aspect of cubicity is that many problems known to be NP-complete for general graphs have polynomial-time deterministic algorithms or good approximation ratios on graphs of low cubicity. In most of these algorithms, computing a low-dimensional cube representation of the given graph is usually the first step. We give an O(bw · n) algorithm to compute the cube representation of a general graph G in bw + 1 dimensions given a bandwidth ordering of the vertices of G, where bw is the bandwidth of G. As a consequence, we get O(Δ) upper bounds on the cubicity of many well-known graph classes, such as AT-free graphs, circular-arc graphs and cocomparability graphs, which have O(Δ) bandwidth. Thus we have: (1) cub(G) ≤ 3Δ − 1 if G is an AT-free graph; (2) cub(G) ≤ 2Δ + 1 if G is a circular-arc graph; (3) cub(G) ≤ 2Δ if G is a cocomparability graph. Also, for these graph classes there are constant-factor approximation algorithms for bandwidth computation that generate orderings of vertices with O(Δ) width. We can thus generate the cube representation of such graphs in O(Δ) dimensions in polynomial time.
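To make the definition concrete, here is a minimal Python sketch (the helper names are ours, not the paper's) that checks whether an assignment of unit k-cubes, each encoded by the left endpoints a_i of its k intervals, is a valid cube representation of a graph:

```python
from itertools import combinations

def cubes_intersect(a, b):
    # Unit cubes with left endpoints a and b intersect iff the intervals
    # [a_i, a_i + 1] and [b_i, b_i + 1] overlap in every dimension,
    # i.e. |a_i - b_i| <= 1 for all i.
    return all(abs(ai - bi) <= 1 for ai, bi in zip(a, b))

def is_cube_representation(adj, rep):
    # adj: dict vertex -> set of neighbours; rep: dict vertex -> tuple of
    # left endpoints of its k-cube.  Valid iff two vertices are adjacent
    # exactly when their k-cubes have a non-empty intersection.
    return all((v in adj[u]) == cubes_intersect(rep[u], rep[v])
               for u, v in combinations(rep, 2))

# A path on 3 vertices has cubicity 1: represent it with unit intervals.
adj = {0: {1}, 1: {0, 2}, 2: {1}}
rep = {0: (0.0,), 1: (1.0,), 2: (2.0,)}
print(is_cube_representation(adj, rep))  # True
```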
Abstract:
Support Vector Machines (SVMs) are hyperplane classifiers defined in a kernel-induced feature space. The data-size-dependent training time complexity of SVMs usually prohibits their use in applications involving more than a few thousand data points. In this paper we propose a novel kernel-based incremental data clustering approach and its use for scaling non-linear Support Vector Machines to handle large data sets. The clustering method introduced can find cluster abstractions of the training data in a kernel-induced feature space. These cluster abstractions are then used for selective-sampling-based training of Support Vector Machines to reduce the training time without compromising the generalization performance. Experiments done with real-world datasets show that this approach gives good generalization performance at reasonable computational expense.
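The abstract does not spell out the clustering procedure, so the following Python sketch is only a simplified stand-in: it forms cluster abstractions with plain k-means from scikit-learn (the paper instead clusters incrementally in the kernel-induced feature space) and trains a non-linear SVM on a few sampled points per cluster:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def selective_sample_fit(X, y, n_clusters=50, per_cluster=20, seed=0):
    # Cluster the training data, keep a handful of points per cluster
    # (a crude 'cluster abstraction'), and fit a non-linear SVM on the
    # reduced set to cut the training time on large data sets.
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=n_clusters, random_state=seed).fit_predict(X)
    keep = []
    for c in range(n_clusters):
        idx = np.flatnonzero(labels == c)
        keep.extend(rng.choice(idx, size=min(per_cluster, idx.size),
                               replace=False))
    keep = np.asarray(keep)
    return SVC(kernel="rbf").fit(X[keep], y[keep])
```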
Abstract:
Space-time block codes (STBCs) obtained from non-square complex orthogonal designs are bandwidth efficient compared to those from square real/complex orthogonal designs for colocated coherent MIMO systems, and have other applications in (i) non-coherent MIMO systems with non-differential detection, (ii) space-time-frequency codes for MIMO-OFDM systems and (iii) distributed space-time coding for relay channels. Liang (IEEE Trans. Inform. Theory, 2003) has constructed maximal-rate non-square designs for any number of antennas, with rate (a + 1)/(2a) when the number of transmit antennas is 2a − 1 or 2a. However, these designs have large delays, and for a large number of antennas this rate is close to 1/2. Tarokh et al. (IEEE Trans. Inform. Theory, 1999) have constructed rate-1/2 non-square CODs from the rate-1 real orthogonal designs for any number of antennas; the decoding delay of these codes is smaller than that of Liang's codes when the number of transmit antennas exceeds 5. In this paper, we construct a class of rate-1/2 codes for an arbitrary number of antennas whose decoding delay is reduced by 50% compared with the rate-1/2 codes of Tarokh et al. It is also shown that even though scaling the variables helps to lower the delay, it cannot be used to increase the rate.
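The rate behaviour quoted above is immediate from Liang's formula: for 2a − 1 or 2a transmit antennas,

```latex
\[
  \text{rate} = \frac{a+1}{2a} = \frac{1}{2} + \frac{1}{2a}
  \;\longrightarrow\; \frac{1}{2} \qquad (a \to \infty),
\]
```

so the maximal rate exceeds 1/2 only by a margin that vanishes as the number of antennas grows.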
Abstract:
In this paper, we consider the machining condition optimization models presented in earlier studies. Finding the optimal combination of machining conditions within the constraints is a difficult task, and earlier studies used standard optimization methods for it. However, the non-linear nature of the objective function and the constraints that need to be satisfied make it difficult to use standard optimization methods for the solution. In this paper, we present a real-coded genetic algorithm (RCGA) to find the optimal combination of machining conditions. We discuss in detail various issues related to the RCGA, such as solution representation, crossover operators, and the repair algorithm. We also present the results obtained for these models using the RCGA and discuss the advantages of using it for these problems. From the results obtained, we conclude that the RCGA is reliable and accurate for solving machining condition optimization models.
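As a rough illustration of the ingredients listed above (real-vector solution representation, crossover, and a repair step), here is a generic real-coded GA skeleton in Python; the BLX-α blend crossover and clipping repair are our illustrative choices, not necessarily the paper's operators:

```python
import numpy as np

def rcga(objective, bounds, pop_size=50, generations=200,
         alpha=0.5, mutation_rate=0.1, seed=0):
    # Minimise `objective` over the box `bounds` (list of (lo, hi) pairs)
    # with tournament selection, BLX-alpha blend crossover, Gaussian
    # mutation, and a repair step that clips offspring back into the box.
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    for _ in range(generations):
        fit = np.apply_along_axis(objective, 1, pop)
        def pick():
            i, j = rng.integers(pop_size, size=2)
            return pop[i] if fit[i] < fit[j] else pop[j]
        children = []
        while len(children) < pop_size:
            p, q = pick(), pick()
            # BLX-alpha: sample each gene from the parents' interval,
            # stretched by a factor alpha on both sides.
            span = np.abs(p - q)
            child = rng.uniform(np.minimum(p, q) - alpha * span,
                                np.maximum(p, q) + alpha * span)
            mutate = rng.random(len(bounds)) < mutation_rate
            child[mutate] += rng.normal(0.0, 0.1 * (hi - lo))[mutate]
            children.append(np.clip(child, lo, hi))  # repair
        pop = np.array(children)
    return pop[np.argmin(np.apply_along_axis(objective, 1, pop))]

# Toy usage: minimise a quadratic over [0, 10]^2; optimum is (3, 7).
print(rcga(lambda x: (x[0] - 3) ** 2 + (x[1] - 7) ** 2, [(0, 10), (0, 10)]))
```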
Abstract:
This thesis consists of four research papers and an introduction providing some background. The structure in the universe is generally considered to originate from quantum fluctuations in the very early universe. The standard lore of cosmology states that the primordial perturbations are almost scale-invariant, adiabatic, and Gaussian. A snapshot of the structure from the time when the universe became transparent can be seen in the cosmic microwave background (CMB). For a long time, mainly the power spectrum of the CMB temperature fluctuations has been used to obtain observational constraints, especially on deviations from scale-invariance and pure adiabaticity. Non-Gaussian perturbations provide a novel and very promising way to test theoretical predictions. They probe beyond the power spectrum, or two-point correlator, since non-Gaussianity involves higher-order statistics. The thesis concentrates on the non-Gaussian perturbations arising in several situations involving two scalar fields, namely hybrid inflation and various forms of preheating. First we go through some basic concepts -- such as cosmological inflation, reheating and preheating, and the role of scalar fields during inflation -- which are necessary for the understanding of the research papers. We also review the standard linear cosmological perturbation theory. The second-order perturbation theory formalism for two scalar fields is developed. We explain what is meant by non-Gaussian perturbations, and discuss some difficulties in parametrisation and observation. In particular, we concentrate on the nonlinearity parameter. The prospects of observing non-Gaussianity are briefly discussed. We apply the formalism and calculate the evolution of the second-order curvature perturbation during hybrid inflation. We estimate the amount of non-Gaussianity in the model and find that there is a possibility for an observational effect. The non-Gaussianity arising in preheating is also studied. We find that the level produced by the simplest model of instant preheating is insignificant, whereas standard preheating with parametric resonance as well as tachyonic preheating are prone to easily saturate and even exceed the observational limits. We also mention other approaches to the study of primordial non-Gaussianities, which differ from the perturbation theory method chosen in the thesis work.
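For reference, the nonlinearity parameter mentioned above is conventionally defined through the standard local parametrisation of the Bardeen potential (conventions for the curvature perturbation differ by a factor of 3/5):

```latex
\[
  \Phi(\mathbf{x}) \;=\; \Phi_{\mathrm{G}}(\mathbf{x})
  \;+\; f_{\mathrm{NL}} \left( \Phi_{\mathrm{G}}^{2}(\mathbf{x})
        - \langle \Phi_{\mathrm{G}}^{2} \rangle \right),
\]
```

where Φ_G is a Gaussian random field.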
Abstract:
A Linear Processing Complex Orthogonal Design (LPCOD) is a p × n matrix E (p ≥ n) in k complex indeterminates x_1, x_2, …, x_k such that (i) the entries of E are complex linear combinations of 0, ±x_i (i = 1, …, k) and their conjugates, and (ii) E^H E = D, where E^H is the Hermitian (conjugate transpose) of E and D is a diagonal matrix whose (i, i)-th diagonal element has the form l_1^(i)|x_1|^2 + l_2^(i)|x_2|^2 + … + l_k^(i)|x_k|^2, where the l_j^(i) (i = 1, 2, …, n; j = 1, 2, …, k) are strictly positive real numbers, and the condition l_1^(i) = l_2^(i) = … = l_k^(i), called the equal-weights condition, holds for all values of i. For square designs it is known that whenever an LPCOD exists without the equal-weights condition satisfied, there exists another LPCOD with identical parameters satisfying l_1^(i) = l_2^(i) = … = l_k^(i) = 1. This implies that the maximum possible rate for square LPCODs without the equal-weights condition is the same as that of square LPCODs with the equal-weights condition. In this paper, this result is extended to a subclass of non-square LPCODs: we identify a set of sufficient conditions such that whenever a non-square (p > n) LPCOD satisfies them but not the equal-weights condition, there exists another LPCOD with the same parameters n, k and p, in the same complex indeterminates, with l_1^(i) = l_2^(i) = … = l_k^(i) = 1.
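A familiar concrete instance (ours, not from the abstract) is the 2 × 2 Alamouti design, a square COD that satisfies the equal-weights condition with all weights equal to 1:

```latex
\[
  E \;=\; \begin{pmatrix} x_1 & x_2 \\ -x_2^{*} & x_1^{*} \end{pmatrix},
  \qquad
  E^{H} E \;=\; \left( |x_1|^{2} + |x_2|^{2} \right) I_2 .
\]
```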
Abstract:
Background: This multicentre, open-label, randomized, controlled phase II study evaluated cilengitide in combination with cetuximab and platinum-based chemotherapy, compared with cetuximab and chemotherapy alone, as first-line treatment of patients with advanced non-small-cell lung cancer (NSCLC). Patients and methods: Patients were randomized 1:1:1 to receive cetuximab plus platinum-based chemotherapy alone (control), or combined with cilengitide 2000 mg 1×/week i.v. (CIL-once) or 2×/week i.v. (CIL-twice). A protocol amendment limited enrolment to patients with epidermal growth factor receptor (EGFR) histoscore ≥200 and closed the CIL-twice arm due to practical feasibility issues. The primary end point was progression-free survival (PFS; independent read); secondary end points included overall survival (OS), safety, and biomarker analyses. A comparison between the CIL-once and control arms is reported, both for the total cohorts and for patients with EGFR histoscore ≥200. Results: There were 85 patients in the CIL-once group and 84 in the control group. The PFS (independent read) was 6.2 versus 5.0 months for CIL-once versus control [hazard ratio (HR) 0.72; P = 0.085]; for patients with EGFR histoscore ≥200, PFS was 6.8 versus 5.6 months, respectively (HR 0.57; P = 0.0446). Median OS was 13.6 months for CIL-once versus 9.7 months for control (HR 0.81; P = 0.265). In patients with EGFR ≥200, OS was 13.2 versus 11.8 months, respectively (HR 0.95; P = 0.855). No major differences in adverse events between CIL-once and control were reported; nausea (59% versus 56%, respectively) and neutropenia (54% versus 46%, respectively) were the most frequent. There was no increased incidence of thromboembolic events or haemorrhage in cilengitide-treated patients. αvβ3 and αvβ5 expression was neither a predictive nor a prognostic indicator. Conclusions: The addition of cilengitide to cetuximab/chemotherapy indicated potential clinical activity, with a trend for PFS difference in the independent-read analysis. However, the observed inconsistencies across end points suggest additional investigations are required to substantiate a potential role of other integrin inhibitors in NSCLC treatment.
Abstract:
Cosmological inflation is the dominant paradigm in explaining the origin of structure in the universe. According to the inflationary scenario, there has been a period of nearly exponential expansion in the very early universe, long before nucleosynthesis. Inflation is commonly considered a consequence of some scalar field or fields whose energy density starts to dominate the universe. The inflationary expansion converts the quantum fluctuations of the fields into classical perturbations on superhorizon scales, and these primordial perturbations are the seeds of the structure in the universe. Moreover, inflation also naturally explains the high degree of homogeneity and spatial flatness of the early universe. The real challenge of inflationary cosmology lies in trying to establish a connection between the fields driving inflation and theories of particle physics. In this thesis we concentrate on inflationary models at scales well below the Planck scale. The low scale allows us to seek candidates for the inflationary matter within extensions of the Standard Model, but typically also implies fine-tuning problems. We discuss a low-scale model where inflation is driven by a flat direction of the Minimal Supersymmetric Standard Model. The relation between the potential along the flat direction and the underlying supergravity model is studied. The low inflationary scale requires an extremely flat potential, but we find that in this particular model the associated fine-tuning problems can be solved in a rather natural fashion in a class of supergravity models. For this class of models, the flatness is a consequence of the structure of the supergravity model and is insensitive to the vacuum expectation values of the fields that break supersymmetry. Another low-scale model considered in the thesis is the curvaton scenario, where the primordial perturbations originate from quantum fluctuations of a curvaton field, which is different from the fields driving inflation. The curvaton gives a negligible contribution to the total energy density during inflation, but its perturbations become significant in the post-inflationary epoch. The separation between the fields driving inflation and the fields giving rise to primordial perturbations opens up new possibilities to lower the inflationary scale without introducing fine-tuning problems. The curvaton model typically gives rise to a relatively large level of non-Gaussian features in the statistics of primordial perturbations. We find that the level of non-Gaussian effects is heavily dependent on the form of the curvaton potential. Future observations that provide more accurate information about the non-Gaussian statistics can therefore place constraining bounds on the curvaton interactions.
Abstract:
In this thesis we examine multi-field inflationary models of the early universe. Since non-Gaussianities may allow for the possibility to discriminate between models of inflation, we compute deviations from a Gaussian spectrum of primordial perturbations by extending the delta-N formalism. We use N-flation as a concrete model; our findings show that these models are generically indistinguishable as long as the slow-roll approximation is still valid. Besides computing non-Gaussianities, we also investigate preheating after multi-field inflation. Within the framework of N-flation, we find that preheating via parametric resonance is suppressed, an indication that it is the old theory of preheating that is applicable. In addition to studying non-Gaussianities and preheating in multi-field inflationary models, we study magnetogenesis in the early universe. To this aim, we propose a mechanism to generate primordial magnetic fields via rotating cosmic string loops. Magnetic fields in the micro-Gauss range have been observed in galaxies and clusters, but their origin has remained elusive. We consider a network of strings and find that rotating cosmic string loops, which are continuously produced in such networks, are viable candidates for magnetogenesis with relevant strength and length scales, provided we use a high string tension and an efficient dynamo.
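For context, the delta-N formalism referred to above writes the curvature perturbation ζ as the perturbation of the number of e-folds N in the field fluctuations; to second order (sign and normalisation conventions vary between references) one has

```latex
\[
  \zeta \;=\; \delta N
  \;=\; \sum_{a} N_{,a}\, \delta\varphi^{a}
  \;+\; \frac{1}{2} \sum_{a,b} N_{,ab}\, \delta\varphi^{a}\, \delta\varphi^{b}
  \;+\; \cdots ,
  \qquad
  \frac{6}{5}\, f_{\mathrm{NL}}
  \;=\; \frac{\sum_{a,b} N_{,a}\, N_{,b}\, N_{,ab}}
             {\bigl( \sum_{c} N_{,c}\, N_{,c} \bigr)^{2}} ,
\]
```

where N_{,a} denotes the derivative of N with respect to the field φ^a.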
Abstract:
In this paper, a new approach to the study of non-linear, non-autonomous systems is presented. The method outlined is based on the idea of solving the governing differential equations of order n by a process of successive reduction of their order. This is achieved by the use of “differential transformation functions”. The value of the technique in the study of problems arising in non-linear mechanics and the like is illustrated by means of suitable examples drawn from different fields, such as vibrations and rigid-body dynamics.
Abstract:
The flow generated by the rotation of a sphere in an infinitely extending fluid has recently been studied by Goldshtik. The corresponding problem for non-Newtonian Reiner-Rivlin fluids has been studied by Datta. Bhatnagar and Rajeswari have studied the secondary flow between two concentric spheres rotating about an axis in non-Newtonian fluids. This last investigation was further generalised by Rajeswari to include the effects of small radial suction or injection. In Part A of the present investigation, we study the secondary flow generated by the slow rotation of a single sphere in a non-Newtonian fluid obeying the Rivlin-Ericksen constitutive equation. In Part B, we study the effects of small suction or injection applied in an arbitrary direction at the surface of the sphere. In the absence of suction or injection, the secondary flow for small values of the visco-elastic parameter is similar to that of Newtonian fluids with the inertia terms included in the Oseen approximation. If this parameter exceeds K_c = 18R/219, where R is the Reynolds number, the flow field breaks into two domains, in one of which the streamlines form closed loops. For still higher values of this parameter, a complete reversal of the sense of the flow takes place. When suction or injection is included, the breaking of the flow persists under a certain condition investigated in this paper. When this condition is violated, the breaking of the flow is obliterated.
Abstract:
It is shown that a sufficient condition for the asymptotic stability-in-the-large of an autonomous system containing a linear part with transfer function G(jω) and a non-linearity belonging to a class of power-law non-linearities with slope restriction [0, K], in cascade in a negative feedback loop, is Re{Z(jω)[G(jω) + 1/K]} ≥ 0 for all ω, where the multiplier is given by Z(jω) = 1 + αjω + Y(jω) − Y(−jω), with α real, y(t) = 0 for t < 0, and ∫₀^∞ |y(t)| dt < 1/(2c₂), c₂ being a constant associated with the class of non-linearity. Any allowable multiplier can be converted to the above form, and this form leads to fewer restrictions on the parameters in many cases. Criteria for the cases of odd monotonic non-linearities and of linear gains are obtained as limiting cases of the criterion developed. A striking feature of the present result is that in the linear case it reduces to the necessary and sufficient conditions corresponding to the Nyquist criterion. An inequality of the type |R(T) − R(−T)| ≤ 2c₂R(0), where R(T) is the input-output cross-correlation function of the non-linearity, is used in deriving the results.
Abstract:
In a search for new phenomena in a signature suppressed in the standard model of elementary particles (SM), we compare the inclusive production of events containing a lepton, a photon, significant transverse momentum imbalance (MET), and a jet identified as containing a b-quark, to SM predictions. The search uses data produced in proton-antiproton collisions at 1.96 TeV corresponding to 1.9 fb⁻¹ of integrated luminosity taken with the CDF detector at the Fermilab Tevatron. We find 28 lepton+photon+MET+b events versus an expectation of 31.0 +4.1/−3.5 events. If we further require events to contain at least three jets and large total transverse energy, simulations predict that the largest SM source is top-quark pair production with an additional radiated photon, ttbar+photon. In the data we observe 16 ttbar+photon candidate events versus an expectation from SM sources of 11.2 +2.3/−2.1. Assuming the difference between the observed number and the predicted non-top-quark total is due to SM top-quark production, we estimate the ttbar+photon cross section to be 0.15 ± 0.08 pb.
Abstract:
This study considers the scheduling problem observed in the burn-in operation of semiconductor final testing, where jobs are associated with release times, due dates, processing times, sizes, and non-agreeable release times and due dates. The burn-in oven is modeled as a batch-processing machine which can process a batch of several jobs as long as the total size of the jobs does not exceed the machine capacity; the processing time of a batch is equal to the longest processing time among all the jobs in the batch. Due to the importance of on-time delivery in semiconductor manufacturing, the objective of this problem is to minimize total weighted tardiness. We formulate the scheduling problem as an integer linear programming model and empirically show its computational intractability. Due to this intractability, we propose a few simple greedy heuristic algorithms and a meta-heuristic, simulated annealing (SA). A series of computational experiments is conducted to evaluate the performance of the proposed heuristics, in comparison with exact solutions on various small problem instances and with estimated optimal solutions on various real-life large-size problem instances. The computational results show that the SA algorithm, with an initial solution obtained using our proposed greedy heuristic, consistently finds a robust solution in a reasonable amount of computation time.
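The batching rules stated above translate directly into code; this minimal Python sketch (field names are ours) checks batch capacity and scores a given batch sequence by total weighted tardiness:

```python
def batch_time(batch):
    # A batch's processing time is the longest processing time among
    # its jobs (the burn-in oven rule).
    return max(job["p"] for job in batch)

def total_weighted_tardiness(batches, capacity):
    # Each job is a dict with release r, due date d, processing time p,
    # size s, and weight w.  A batch is feasible only if its jobs' total
    # size fits in the machine; it starts once the machine is free and
    # all of its jobs have been released.
    t, cost = 0.0, 0.0
    for batch in batches:
        assert sum(job["s"] for job in batch) <= capacity, "capacity exceeded"
        start = max(t, max(job["r"] for job in batch))
        t = start + batch_time(batch)
        cost += sum(job["w"] * max(0.0, t - job["d"]) for job in batch)
    return cost

# Toy usage: two batches on a machine of capacity 10.
jobs = [dict(r=0, d=5, p=4, s=6, w=1.0), dict(r=0, d=6, p=3, s=4, w=2.0),
        dict(r=2, d=8, p=5, s=8, w=1.0)]
print(total_weighted_tardiness([jobs[:2], jobs[2:]], capacity=10))  # 1.0
```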
Abstract:
Here the design and operation of a novel transmission electron microscope (TEM) triboprobe instrument with real-time vision control for advanced in situ electron microscopy are demonstrated. The NanoLAB triboprobe incorporates a new high-stiffness coarse slider design for increased stability and positioning performance. This is linked with an advanced software control system that introduces new, flexible in situ functional testing modes together with an automated vision-based feedback system. This advance in instrumentation design unlocks the possibility of performing a range of new dynamic nanoscale materials tests, including novel friction and fatigue experiments inside the electron microscope.