999 results for H-line graphs
Abstract:
Caveolae have been linked to diverse cellular functions and to many disease states. In this study we have used zebrafish to examine the role of caveolin-1 and caveolae during early embryonic development. During development, caveolin-1 expression is apparent in a number of tissues including Kupffer's vesicle, the tailbud, intersomite boundaries, the heart, branchial arches, pronephric ducts and periderm. Particularly strong expression is observed in the sensory organs of the lateral line, the neuromasts, and in the notochord, where it overlaps with expression of caveolin-3. Morpholino-mediated downregulation of Cav1α caused a dramatic inhibition of neuromast formation. Detailed ultrastructural analysis, including electron tomography of the notochord, revealed that the central region of the notochord has the highest density of caveolae of any embryonic tissue, comparable to the highest density observed in any vertebrate tissue. In addition, Cav1α downregulation caused disruption of the notochord, an effect that was further enhanced by Cav3 knockdown. These results indicate an essential role for caveolin and caveolae in this vital structural and signalling component of the embryo.
Abstract:
With technology scaling, vulnerability to soft errors in random logic is increasing. There is a need for on-line error detection and protection of logic gates even at sea level. The error checker is the key element of an on-line detection mechanism. We compare three different checkers for error detection from the point of view of area, power and false error detection rates. We find that the double sampling checker (used in Razor) is the simplest and the most area- and power-efficient, but suffers from a very high false detection rate of 1.15 times the actual error rate. We also find that the alternative approaches, triple sampling and the integrate-and-sample (I&S) method, can be designed to have zero false detection rates, but at increased area, power and implementation complexity. The triple sampling method has about 1.74 times the area and twice the power compared with the double sampling method, and also needs a complex clock generation scheme. The I&S method needs about 16% more power, with 0.58 times the area of double sampling, but comes with more stringent implementation constraints as it requires detection of small voltage swings.
Abstract:
The Hadwiger number η(G) of a graph G is the largest integer n for which the complete graph K_n on n vertices is a minor of G. Hadwiger conjectured that every graph G satisfies η(G) ≥ χ(G), where χ(G) is the chromatic number of G. In this paper, we study the Hadwiger number of the Cartesian product G □ H of graphs. As the main result of this paper, we prove that η(G₁ □ G₂) ≥ h√l (1 − o(1)) for any two graphs G₁ and G₂ with η(G₁) = h and η(G₂) = l. We show that the above lower bound is asymptotically best possible when h ≥ l. This asymptotically settles a question of Z. Miller (1978). As consequences of our main result, we show the following: 1. Let G be a connected graph and let G = G₁ □ G₂ □ … □ G_k be its (unique) prime factorization. Then G satisfies Hadwiger's conjecture if k ≥ 2 log log χ(G) + c′, where c′ is a constant. This improves the 2 log χ(G) + 3 bound of [2]. 2. Let G₁ and G₂ be two graphs such that χ(G₁) ≥ χ(G₂) ≥ c·log^1.5 χ(G₁), where c is a constant. Then G₁ □ G₂ satisfies Hadwiger's conjecture. 3. Hadwiger's conjecture is true for G^d (the Cartesian product of G taken d times) for every graph G and every d ≥ 2. This settles a question of Chandran and Sivadasan [2], who had shown that Hadwiger's conjecture is true for G^d if d ≥ 3.
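In cleaner notation, the main bound reads as in the first display below. The second display is a rough back-of-the-envelope reading (not the authors' proof) of why such a bound already covers the case d = 2 of consequence 3; it assumes Sabidussi's product formula χ(G □ H) = max{χ(G), χ(H)} and the Kostochka–Thomason bound χ(G) ≤ c·η(G)√(log η(G)).

\[
\eta(G_1 \,\square\, G_2) \;\ge\; h\sqrt{l}\,\bigl(1 - o(1)\bigr),
\qquad \eta(G_1) = h,\ \eta(G_2) = l .
\]
\[
\chi(G \,\square\, G) \;=\; \chi(G) \;\le\; c\,\eta(G)\sqrt{\log \eta(G)}
\;\le\; \eta(G)^{3/2}\bigl(1 - o(1)\bigr) \;\le\; \eta(G \,\square\, G)
\quad \text{for sufficiently large } \eta(G),
\]

so Hadwiger's conjecture holds for G □ G once η(G) is large enough; small values of η(G) need a separate direct argument.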
Abstract:
In this paper we consider the problems of computing a minimum co-cycle basis and a minimum weakly fundamental co-cycle basis of a directed graph G. A co-cycle in G corresponds to a vertex partition (S, V ∖ S), and a {−1, 0, 1} edge incidence vector is associated with each co-cycle. The vector space over ℚ generated by these vectors is the co-cycle space of G; alternatively, the co-cycle space is the orthogonal complement of the cycle space of G. The minimum co-cycle basis problem asks for a set of co-cycles that spans the co-cycle space of G and whose sum of weights is minimum. Weakly fundamental co-cycle bases are a special class of co-cycle bases; they form a natural superclass of strictly fundamental co-cycle bases, and it is known that computing a minimum-weight strictly fundamental co-cycle basis is NP-hard. We show that the co-cycle basis corresponding to the cuts of a Gomory–Hu tree of the underlying undirected graph of G is a minimum co-cycle basis of G, and that it is also weakly fundamental.
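A minimal sketch of the Gomory–Hu construction described above, assuming networkx and an undirected graph with positive weights stored under a 'weight' attribute; the {−1, 0, +1} signing used for directed graphs is omitted, and the example graph is arbitrary.

    # Sketch only: extract the n-1 cuts (S, V\S) induced by a Gomory-Hu tree.
    # Per the result above, these cuts give a minimum, weakly fundamental
    # co-cycle basis of the weighted graph.
    import networkx as nx

    def cocycle_basis_from_gomory_hu(G):
        """Return the n-1 vertex bipartitions induced by a Gomory-Hu tree of G."""
        T = nx.gomory_hu_tree(G, capacity='weight')
        cuts = []
        for u, v in list(T.edges()):
            T.remove_edge(u, v)
            S = set(nx.node_connected_component(T, u))   # one side of the cut
            cuts.append((S, set(G) - S))
            T.add_edge(u, v)
        return cuts

    G = nx.petersen_graph()
    nx.set_edge_attributes(G, 1, 'weight')
    print(len(cocycle_basis_from_gomory_hu(G)))          # n - 1 = 9 co-cycles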
Abstract:
The problem of automatic melody line identification in a MIDI file plays an important role in taking query-by-humming (QBH) systems to the next level. We present a novel algorithm to identify the melody line in a polyphonic MIDI file. A note-pruning and track/channel-ranking method is used to identify the melody line. We use results from musicology to derive simple heuristics for the note-pruning stage; this improves the robustness of the algorithm by discarding "spurious" notes. A ranking based on the melodic information in each track/channel enables us to choose the melody line accurately. Our algorithm makes no assumptions about performer-specific MIDI parameters, is simple, and achieves an accuracy of 97% in identifying the melody line correctly. The algorithm is currently used in a QBH system built in our lab.
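A toy sketch of such a prune-then-rank pipeline, assuming the pretty_midi library; the pruning rule (drop notes shorter than 50 ms), the salience score, and the file name 'example.mid' are illustrative stand-ins, not the musicology-derived heuristics of the paper.

    # Illustrative only: heuristics and constants are stand-ins, not the paper's.
    import pretty_midi

    def pick_melody_instrument(path, min_duration=0.05):
        midi = pretty_midi.PrettyMIDI(path)
        best, best_score = None, float('-inf')
        for inst in midi.instruments:
            if inst.is_drum:
                continue
            # note pruning: discard very short ("spurious") notes
            notes = [n for n in inst.notes if n.end - n.start >= min_duration]
            if not notes:
                continue
            mean_pitch = sum(n.pitch for n in notes) / len(notes)
            coverage = sum(n.end - n.start for n in notes)   # total sounding time
            score = mean_pitch + 0.1 * coverage              # crude melodic-salience ranking
            if score > best_score:
                best, best_score = inst, score
        return best

    melody = pick_melody_instrument('example.mid')           # hypothetical input file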
Abstract:
We consider the problem of computing an approximate minimum cycle basis of an undirected edge-weighted graph G with m edges and n vertices; the extension to directed graphs is also discussed. In this problem, a {0, 1} incidence vector is associated with each cycle and the vector space over F₂ generated by these vectors is the cycle space of G. A set of cycles is called a cycle basis of G if it forms a basis for its cycle space. A cycle basis where the sum of the weights of the cycles is minimum is called a minimum cycle basis of G. Cycle bases of low weight are useful in a number of contexts, e.g. the analysis of electrical networks, structural engineering, chemistry, and surface reconstruction. We present two new algorithms to compute an approximate minimum cycle basis. For any integer k ≥ 1, we give (2k − 1)-approximation algorithms with expected running time O(kmn^(1+2/k) + mn^((1+1/k)(ω−1))) and deterministic running time O(n^(3+2/k)), respectively. Here ω is the best exponent of matrix multiplication; it is presently known that ω < 2.376. Both algorithms are o(m^ω) for dense graphs. This is the first time that any algorithm which computes sparse cycle bases with a guarantee drops below the Θ(m^ω) bound. We also present a 2-approximation algorithm with O(m^ω √(n log n)) expected running time, a linear-time 2-approximation algorithm for planar graphs, and an O(n^3)-time 2.42-approximation algorithm for the complete Euclidean graph in the plane.
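As a minimal point of reference for the object being approximated, the sketch below (assuming networkx) builds a fundamental cycle basis from a spanning tree of a small weighted graph and reports its total weight; it is a naive baseline, not the (2k − 1)-approximation or 2-approximation algorithms of the paper.

    # Baseline only: a spanning-tree (fundamental) cycle basis and its weight.
    import networkx as nx

    def basis_weight(G, cycles):
        """Total weight of a cycle basis given as ordered node lists."""
        total = 0
        for cyc in cycles:
            total += sum(G[u][v]['weight'] for u, v in zip(cyc, cyc[1:] + cyc[:1]))
        return total

    G = nx.cycle_graph(6)        # a 6-cycle ...
    G.add_edge(0, 3)             # ... plus a chord: m - n + 1 = 2 basis cycles
    nx.set_edge_attributes(G, 1, 'weight')

    basis = nx.cycle_basis(G)    # fundamental basis w.r.t. a spanning tree
    print(len(basis), basis_weight(G, basis))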
Abstract:
Background The irreversible ErbB family blocker afatinib and the reversible EGFR tyrosine kinase inhibitor gefitinib are approved for first-line treatment of EGFR mutation-positive non-small-cell lung cancer (NSCLC). We aimed to compare the efficacy and safety of afatinib and gefitinib in this setting. Methods This multicentre, international, open-label, exploratory, randomised controlled phase 2B trial (LUX-Lung 7) was done at 64 centres in 13 countries. Treatment-naive patients with stage IIIB or IV NSCLC and a common EGFR mutation (exon 19 deletion or Leu858Arg) were randomly assigned (1:1) to receive afatinib (40 mg per day) or gefitinib (250 mg per day) until disease progression, or beyond if deemed beneficial by the investigator. Randomisation, stratified by EGFR mutation type and status of brain metastases, was done centrally using a validated number-generating system implemented via an interactive voice or web-based response system with a block size of four. Clinicians and patients were not masked to treatment allocation; independent review of tumour response was done in a blinded manner. Coprimary endpoints were progression-free survival by independent central review, time-to-treatment failure, and overall survival. Efficacy analyses were done in the intention-to-treat population and safety analyses were done in patients who received at least one dose of study drug. This ongoing study is registered with ClinicalTrials.gov, number NCT01466660. Findings Between Dec 13, 2011, and Aug 8, 2013, 319 patients were randomly assigned (160 to afatinib and 159 to gefitinib). Median follow-up was 27·3 months (IQR 15·3–33·9). Progression-free survival (median 11·0 months [95% CI 10·6–12·9] with afatinib vs 10·9 months [9·1–11·5] with gefitinib; hazard ratio [HR] 0·73 [95% CI 0·57–0·95], p=0·017) and time-to-treatment failure (median 13·7 months [95% CI 11·9–15·0] with afatinib vs 11·5 months [10·1–13·1] with gefitinib; HR 0·73 [95% CI 0·58–0·92], p=0·0073) were significantly longer with afatinib than with gefitinib. Overall survival data are not mature. The most common treatment-related grade 3 or 4 adverse events were diarrhoea (20 [13%] of 160 patients given afatinib vs two [1%] of 159 given gefitinib), rash or acne (15 [9%] patients given afatinib vs five [3%] of those given gefitinib), and liver enzyme elevations (no patients given afatinib vs 14 [9%] of those given gefitinib). Serious treatment-related adverse events occurred in 17 (11%) patients in the afatinib group and seven (4%) in the gefitinib group. Ten (6%) patients in each group discontinued treatment due to drug-related adverse events. Fifteen (9%) fatal adverse events occurred in the afatinib group and ten (6%) in the gefitinib group. All but one of these deaths were considered unrelated to treatment; one patient in the gefitinib group died from drug-related hepatic and renal failure. Interpretation Afatinib significantly improved outcomes in treatment-naive patients with EGFR-mutated NSCLC compared with gefitinib, with a manageable tolerability profile. These data are potentially important for clinical decision making in this patient population.
Abstract:
Introduction Metastatic spread to the brain is common in patients with non-small cell lung cancer (NSCLC), but these patients are generally excluded from prospective clinical trials. Two phase III studies, LUX-Lung 3 (afatinib versus cisplatin plus pemetrexed in patients with metastatic lung adenocarcinoma with EGFR mutations) and LUX-Lung 6 (a randomized, open-label study of BIBW 2992 [afatinib] versus chemotherapy as first-line treatment for patients with stage IIIB or IV adenocarcinoma of the lung harbouring an activating EGFR mutation), investigated first-line afatinib versus platinum-based chemotherapy in epidermal growth factor receptor gene (EGFR) mutation-positive patients with NSCLC and included patients with brain metastases; the prespecified subgroup analyses in these patients are assessed in this article. Methods For both LUX-Lung 3 and LUX-Lung 6, prespecified subgroup analyses of progression-free survival (PFS), overall survival, and objective response rate were undertaken in patients with asymptomatic brain metastases at baseline (n = 35 and n = 46, respectively). Post hoc analyses of clinical outcomes were undertaken in the combined data set (n = 81). Results In both studies, there was a trend toward improved PFS with afatinib versus chemotherapy in patients with brain metastases (LUX-Lung 3: 11.1 versus 5.4 months, hazard ratio [HR] = 0.54, p = 0.1378; LUX-Lung 6: 8.2 versus 4.7 months, HR = 0.47, p = 0.1060). The magnitude of PFS improvement with afatinib was similar to that observed in patients without brain metastases. In the combined analysis, PFS was significantly improved with afatinib versus chemotherapy in patients with brain metastases (8.2 versus 5.4 months; HR = 0.50; p = 0.0297). Afatinib also significantly improved the objective response rate versus chemotherapy in patients with brain metastases. Safety findings were consistent with previous reports. Conclusions These findings lend support to the clinical activity of afatinib in EGFR mutation-positive patients with NSCLC and asymptomatic brain metastases.
Abstract:
Background: This multicentre, open-label, randomized, controlled phase II study evaluated cilengitide in combination with cetuximab and platinum-based chemotherapy, compared with cetuximab and chemotherapy alone, as first-line treatment of patients with advanced non-small-cell lung cancer (NSCLC). Patients and methods: Patients were randomized 1:1:1 to receive cetuximab plus platinum-based chemotherapy alone (control), or combined with cilengitide 2000 mg 1×/week i.v. (CIL-once) or 2×/week i.v. (CIL-twice). A protocol amendment limited enrolment to patients with epidermal growth factor receptor (EGFR) histoscore ≥200 and closed the CIL-twice arm owing to practical feasibility issues. The primary end point was progression-free survival (PFS; independent read); secondary end points included overall survival (OS), safety, and biomarker analyses. A comparison between the CIL-once and control arms is reported, both for the total cohorts and for patients with EGFR histoscore ≥200. Results: There were 85 patients in the CIL-once group and 84 in the control group. PFS (independent read) was 6.2 versus 5.0 months for CIL-once versus control [hazard ratio (HR) 0.72; P = 0.085]; for patients with EGFR histoscore ≥200, PFS was 6.8 versus 5.6 months, respectively (HR 0.57; P = 0.0446). Median OS was 13.6 months for CIL-once versus 9.7 months for control (HR 0.81; P = 0.265). In patients with EGFR histoscore ≥200, OS was 13.2 versus 11.8 months, respectively (HR 0.95; P = 0.855). No major differences in adverse events between CIL-once and control were reported; nausea (59% versus 56%, respectively) and neutropenia (54% versus 46%, respectively) were the most frequent. There was no increased incidence of thromboembolic events or haemorrhage in cilengitide-treated patients. αvβ3 and αvβ5 expression was neither a predictive nor a prognostic indicator. Conclusions: The addition of cilengitide to cetuximab/chemotherapy indicated potential clinical activity, with a trend toward a PFS difference in the independent-read analysis. However, the observed inconsistencies across end points suggest that additional investigations are required to substantiate a potential role of other integrin inhibitors in NSCLC treatment.
Abstract:
New stars form in dense interstellar clouds of gas and dust called molecular clouds. The actual sites where the process of star formation takes place are the dense clumps and cores deeply embedded in molecular clouds. The details of the star formation process are complex and not completely understood. Thus, determining the physical and chemical properties of molecular cloud cores is necessary for a better understanding of how stars are formed. Some of the main features of the origin of low-mass stars, like the Sun, are already relatively well-known, though many details of the process are still under debate. The mechanism through which high-mass stars form, on the other hand, is poorly understood. Although it is likely that the formation of high-mass stars shares many properties similar to those of low-mass stars, the very first steps of the evolutionary sequence are unclear. Observational studies of star formation are carried out particularly at infrared, submillimetre, millimetre, and radio wavelengths. Much of our knowledge about the early stages of star formation in our Milky Way galaxy is obtained through molecular spectral line and dust continuum observations. The continuum emission of cold dust is one of the best tracers of the column density of molecular hydrogen, the main constituent of molecular clouds. Consequently, dust continuum observations provide a powerful tool to map large portions across molecular clouds, and to identify the dense star-forming sites within them. Molecular line observations, on the other hand, provide information on the gas kinematics and temperature. Together, these two observational tools provide an efficient way to study the dense interstellar gas and the associated dust that form new stars. The properties of highly obscured young stars can be further examined through radio continuum observations at centimetre wavelengths. For example, radio continuum emission carries useful information on conditions in the protostar+disk interaction region where protostellar jets are launched. In this PhD thesis, we study the physical and chemical properties of dense clumps and cores in both low- and high-mass star-forming regions. The sources are mainly studied in a statistical sense, but also in more detail. In this way, we are able to examine the general characteristics of the early stages of star formation, cloud properties on large scales (such as fragmentation), and some of the initial conditions of the collapse process that leads to the formation of a star. The studies presented in this thesis are mainly based on molecular line and dust continuum observations. These are combined with archival observations at infrared wavelengths in order to study the protostellar content of the cloud cores. In addition, centimetre radio continuum emission from young stellar objects (YSOs; i.e., protostars and pre-main sequence stars) is studied in this thesis to determine their evolutionary stages. 
The main results of this thesis are as follows: i) filamentary and sheet-like molecular cloud structures, such as infrared dark clouds (IRDCs), are likely to be caused by supersonic turbulence, but their fragmentation at the scale of cores could be due to gravo-thermal instability; ii) the core evolution in the Orion B9 star-forming region appears to be dynamic, and the role played by slow ambipolar diffusion in the formation and collapse of the cores may not be significant; iii) the study of the R CrA star-forming region suggests that the centimetre radio emission properties of a YSO are likely to change with its evolutionary stage; iv) the IRDC G304.74+01.32 contains candidate high-mass starless cores, which may represent the very first steps of high-mass star and star cluster formation; v) SiO outflow signatures are seen in several high-mass star-forming regions, suggesting that high-mass stars form in a similar way to their low-mass counterparts, i.e., via disk accretion. The results presented in this thesis provide constraints on the initial conditions and early stages of both low- and high-mass star formation. In particular, this thesis presents several observational results on the early stages of clustered star formation, which is the dominant mode of star formation in our Galaxy.
Abstract:
In the title compound, C30H24Cl2N2O3, the two quinoline ring systems are almost planar [maximum deviations = 0.029 (2) and 0.018 (3) Å] and the dihedral angle between them is 4.17 (8)°. The dihedral angle between the phenyl ring and its attached quinoline ring is 69.06 (13)°. The packing is stabilized by C—H⋯O, C—H⋯N, weak π–π stacking [centroid–centroid distances = 3.7985 (16) and 3.7662 (17) Å] and C—H⋯π interactions.
Abstract:
Evaluation and design of shore protection works against tsunamis assumes considerable importance in view of the impact of the tsunami of 26 December 2004 on India and other countries in Asia. The absence of proper guidelines made matters worse and contributed to the magnitude of the damage that occurred. Surveys of the damage indicated that scour resulting from high flow velocities is one of the prime causes of damage to simple structures. Sea walls were, in some cases, helpful in minimizing the damage. The objective of this paper is to suggest that designing shoreline protection systems for the expected wave heights, and using flexible systems such as geocells, is likely to give better protection. The protection systems can be designed to withstand the wave forces corresponding to different probabilities of incidence. A design approach for a geocell protection system is suggested and illustrated with reference to wave-height data for the east coast of India.
Abstract:
Transmission line switching during system restoration is one of the major causes of transient overvoltages. Though detailed electromagnetic transient studies are carried out extensively for the planning and design of transmission systems, such studies are not common in the day-to-day operation of power systems. However, it is important for the operator to ensure during restoration of supply that peak overvoltages resulting from switching operations are well within safe limits. This paper presents a support vector machine approach to classify line energization cases as safe or unsafe based upon the peak overvoltage at the receiving end of the line. The operator can define the threshold voltage used to assign a data pattern to either class. To illustrate the proposed approach, the power system used for the switching transient peak overvoltage tests is a 400 kV equivalent system of the Indian southern grid.
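A toy sketch of such a safe/unsafe classifier, assuming scikit-learn and NumPy; the feature names, the 2.0 p.u. threshold, and the synthetic relationship between features and peak overvoltage are invented for illustration and are not the EMT simulation data of the 400 kV study system.

    # Illustrative only: features, threshold and data are synthetic assumptions.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    # hypothetical per-energization features: closing angle (deg), line length (km),
    # trapped charge (p.u.) -> simulated peak receiving-end overvoltage (p.u.)
    X = rng.uniform([0, 50, 0.0], [90, 400, 1.0], size=(500, 3))
    peak = 1.2 + 0.01 * X[:, 0] * X[:, 2] + 0.001 * X[:, 1] + rng.normal(0, 0.1, 500)
    threshold = 2.0                      # operator-defined limit in p.u.
    y = (peak > threshold).astype(int)   # 1 = unsafe, 0 = safe

    clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=10.0))
    clf.fit(X[:400], y[:400])
    print('held-out accuracy:', clf.score(X[400:], y[400:]))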
Abstract:
Background: Using array comparative genomic hybridization (aCGH), a large number of deleted genomic regions have been identified in human cancers. However, subsequent efforts to identify target genes selected for inactivation in these regions have often been challenging. Methods: Here we integrated genome-wide copy number data with gene expression data and nonsense-mediated mRNA decay rates in breast cancer cell lines to prioritize gene candidates that are likely to be tumour suppressor genes inactivated by bi-allelic genetic events. The candidates were sequenced to identify potential mutations. Results: This integrated genomic approach led to the identification of RIC8A at 11p15 as a putative candidate target gene of the genomic deletion in the ZR-75-1 breast cancer cell line. We identified a truncating mutation in this cell line, leading to loss of expression and rapid decay of the transcript. We screened 127 breast cancers for RIC8A mutations, but did not find any pathogenic mutations, and no promoter hypermethylation was detected in these tumours either. However, analysis of gene expression data from breast tumours identified a small group of aggressive tumours that displayed low levels of RIC8A transcripts. qRT-PCR analysis of 38 breast tumours showed a strong association between low RIC8A expression and the presence of TP53 mutations (P = 0.006). Conclusion: We demonstrate a data integration strategy leading to the identification of RIC8A as a gene undergoing a classical double-hit genetic inactivation in a breast cancer cell line, as well as in vivo evidence of loss of RIC8A expression in a subgroup of aggressive TP53-mutant breast cancers.
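A minimal sketch of this kind of data integration, assuming pandas and hypothetical per-gene tables (copy number log-ratios, expression z-scores, and fold changes after NMD inhibition); the column names and cut-offs are illustrative, not those used in the study.

    # Illustrative only: data frames, column names and thresholds are hypothetical.
    import pandas as pd

    def candidate_tumour_suppressors(copy_number, expression, nmd,
                                     cn_loss=-0.3, expr_low=-1.0, nmd_up=2.0):
        """Rank genes that are deleted, lowly expressed and stabilised by NMD inhibition."""
        merged = (copy_number.merge(expression, on='gene')
                             .merge(nmd, on='gene'))
        mask = ((merged['log2_ratio'] <= cn_loss) &       # genomic loss
                (merged['expr_z'] <= expr_low) &          # reduced expression
                (merged['nmd_fold_change'] >= nmd_up))    # transcript rescued by NMD block
        return merged[mask].sort_values('log2_ratio')

    # usage with hypothetical per-gene tables:
    # hits = candidate_tumour_suppressors(cn_df, expr_df, nmd_df)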
Abstract:
The boxicity of a graph G, denoted box(G), is the least integer d such that G is the intersection graph of a family of d-dimensional (axis-parallel) boxes. The cubicity, denoted cub(G), is the least d such that G is the intersection graph of a family of d-dimensional unit cubes. An independent set of three vertices is an asteroidal triple if any two are joined by a path avoiding the neighbourhood of the third. A graph is asteroidal triple free (AT-free) if it has no asteroidal triple. The claw number ψ(G) is the number of edges in the largest star that is an induced subgraph of G. For an AT-free graph G with chromatic number χ(G) and claw number ψ(G), we show that box(G) ≤ χ(G) and that this bound is sharp. We also show that cub(G) ≤ box(G)(⌈log₂ ψ(G)⌉ + 2) ≤ χ(G)(⌈log₂ ψ(G)⌉ + 2). If G is an AT-free graph having girth at least 5, then box(G) ≤ 2, and therefore cub(G) ≤ 2⌈log₂ ψ(G)⌉ + 4.
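A worked instance of these bounds (the numbers are illustrative, not taken from the paper): for an AT-free graph G with χ(G) = 4 and claw number ψ(G) = 5,

\[
\operatorname{box}(G) \le \chi(G) = 4,
\qquad
\operatorname{cub}(G) \le \chi(G)\bigl(\lceil \log_2 \psi(G) \rceil + 2\bigr) = 4\,(3 + 2) = 20,
\]

and if, in addition, G has girth at least 5, then box(G) ≤ 2 and cub(G) ≤ 2⌈log₂ 5⌉ + 4 = 10.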