999 results for uniform linear hypothesis
Abstract:
Two polymeric azido-bridged complexes [Ni2L2(N3)3]n(ClO4)n (1) and [Cu(bpds)2(N3)]n(ClO4)n(H2O)2.5n (2) [L = Schiff base obtained from the condensation of pyridine-2-aldehyde with N,N,2,2-tetramethyl-1,3-propanediamine; bpds = 4,4'-bipyridyl disulfide] have been synthesized and their crystal structures determined. Complex 1, C26H42ClN15Ni2O4, crystallizes in the triclinic system, space group P1, with a = 8.089(13), b = 9.392(14), c = 12.267(18) Å, α = 107.28(1), β = 95.95(1), γ = 96.92(1)° and Z = 2; complex 2, C20H21ClCuN7O6.5S4, crystallizes in the orthorhombic system, space group Pnna, with a = 10.839(14), b = 13.208(17), c = 19.75(2) Å and Z = 4. The crystal structure of 1 consists of 1D polymers of Ni(L) units, alternately connected by single and double bridging μ-(1,3-N3) ligands, with isolated perchlorate anions. Variable-temperature magnetic susceptibility data of the complex have been measured, and the fitting of the magnetic data was carried out by applying the Borrás-Almenar formula for such alternating one-dimensional S = 1 systems, based on the Hamiltonian H = -J Σ(S2i·S2i-1 + α·S2i·S2i+1). The best-fit parameters obtained are J = -106.7 +/- 2 cm(-1), α = 0.82 +/- 0.02 and g = 2.21 +/- 0.02. Complex 2 is a 2D network of (4,4) topology with the nodes occupied by Cu(II) ions and the edges formed by single azide and double bpds connectors. The perchlorate anions are located between pairs of bpds ligands. The magnetic data have been fitted by treating the complex as a pseudo-one-dimensional system, with all copper(II) atoms linked by μ-(1,3-azido) bridging ligands at axial positions (long Cu...N3 distances), since the coupling through the long bpds is almost nil. The best-fit parameters obtained with this model are J = -1.21 +/- 0.2 cm(-1) and g = 2.14 +/- 0.02. (c) Wiley-VCH Verlag GmbH & Co. KGaA, 69451 Weinheim, Germany, 2005.
Abstract:
A new linear trinuclear nickel(II) complex, [Ni3(salme)2(OCn)4] (Hsalme = 2-[(3-methylamino-propylimino)-methyl]-phenol; OCn = cinnamate), showing weak ferromagnetic coupling (J = 1.8(1) cm(-1)) through phenoxo bridges and a novel tridentate bridging mode (1κ²O,O':2κO') of the cinnamate ligand, has been synthesized and structurally characterized by X-ray crystallography. (c) 2009 Elsevier B.V. All rights reserved.
Abstract:
Three new linear trinuclear nickel(II) complexes, [Ni3(salpen)2(OAc)2(H2O)2]·4H2O (1) (OAc = acetate, CH3COO-), [Ni3(salpen)2(OBz)2] (2) (OBz = benzoate, PhCOO-) and [Ni3(salpen)2(OCn)2(CH3CN)2] (4) (OCn = cinnamate, PhCH=CHCOO-), where H2salpen is the tetradentate ligand N,N'-bis(salicylidene)-1,3-pentanediamine, have been synthesized and characterized structurally and magnetically. The choice of solvent for growing single crystals was guided by inspecting the morphology of the initially obtained solids by SEM. The magnetic properties of a closely related complex, [Ni3(salpen)2(OPh)2(EtOH)] (3) (OPh = phenyl acetate, PhCH2COO-), whose structure and solution properties were reported recently, have also been studied here. The structural analyses reveal that both phenoxo and carboxylate bridges are present in all the complexes and that the three Ni(II) atoms remain in a linear disposition. Although the Schiff base ligand and the syn-syn bidentate bridging mode of the carboxylate group remain the same in complexes 1-4, the change of the alkyl/aryl group of the carboxylates brings about systematic variations between six- and five-coordination in the geometry of the terminal Ni(II) centres of the trinuclear units. The steric demand as well as the hydrophobic nature of the alkyl/aryl group of the carboxylate is found to play a crucial role in tuning the geometry. Variable-temperature (2-300 K) magnetic susceptibility measurements show that complexes 1-4 are antiferromagnetically coupled (J = -3.2(1), -4.6(1), -3.2(1) and -2.8(1) cm(-1) in 1-4, respectively). Calculations of the zero-field splitting parameter indicate that the D values for complexes 1-4 are large (D = +9.1(2), +14.2(2), +9.8(2) and +8.6(1) cm(-1) for 1-4, respectively).
The high D values of +14.2(2) and +9.8(2) cm(-1) for complexes 2 and 3, respectively, are consistent with the pentacoordinate geometry of the two terminal nickel(II) ions in 2 and of one terminal nickel(II) ion in 3. (c) 2009 Elsevier Ltd. All rights reserved.
Abstract:
Stable isotope labeling combined with MS is a powerful method for measuring relative protein abundances, for instance by differential metabolic labeling of some or all amino acids with N-14 and N-15 in cell culture or hydroponic media. These and most other types of quantitative proteomics experiments using high-throughput technologies, such as LC-MS/MS, generate large amounts of raw MS data. These data need to be processed efficiently and automatically, from the mass spectrometer through to statistically evaluated protein identifications and abundance ratios. This paper describes in detail an approach to the automated analysis of uniformly N-14/N-15-labeled proteins using MASCOT peptide identification in conjunction with the trans-proteomic pipeline (TPP) and a few scripts to integrate the analysis workflow. Two large proteomic datasets from uniformly labeled Arabidopsis thaliana were used to illustrate the analysis pipeline. The pipeline can be fully automated and uses only common or freely available software.
Abstract:
Ulcerative colitis (UC) is characterized by impairment of the epithelial barrier and the formation of ulcer-type lesions, which result in local leaks and generalized alterations of mucosal tight junctions. Ultimately, this results in increased basal permeability. Although disruption of the epithelial barrier in the gut is a hallmark of inflammatory bowel disease and intestinal infections, it remains unclear whether barrier breakdown is an initiating event of UC or rather a consequence of an underlying inflammation, evidenced by increased production of proinflammatory cytokines. UC is less common in smokers, suggesting that the nicotine in cigarettes may ameliorate disease severity. The mechanism behind this therapeutic effect is still not fully understood, and indeed it remains unclear whether nicotine is the true protective agent in cigarettes. Nicotine is metabolized in the body into a variety of metabolites and can also be degraded to form various breakdown products. It is possible that these metabolites or degradation products are the true protective or curative agents. A greater understanding of the pharmacodynamics and kinetics of nicotine in relation to the immune system, and enhanced knowledge of gut permeability defects in UC, are required to establish the exact protective nature of nicotine and its metabolites in UC. This review suggests possible hypotheses for the protective mechanism of nicotine in UC, highlighting the relationship between gut permeability and inflammation, and indicates where in the pathogenesis of the disease nicotine may mediate its effect.
Abstract:
A new primary model based on a thermodynamically consistent first-order kinetic approach was constructed to describe non-log-linear inactivation kinetics of pressure-treated bacteria. The model assumes a first-order process in which the specific inactivation rate changes inversely with the square root of time. The model gave reasonable fits to experimental data over six to seven orders of magnitude. It was also tested on 138 published data sets and provided good fits in about 70% of cases, in which the shape of the curve followed the typical convex upward form. In the remainder of the published examples, curves contained additional shoulder regions or extended tail regions. Curves with shoulders could be accommodated by including an additional time-delay parameter, and curves with tails could be accommodated by omitting points in the tail beyond the point at which survival levels remained more or less constant. The model parameters varied regularly with pressure, which may reflect a genuine mechanistic basis for the model. This property also allowed the calculation of (a) parameters analogous to the decimal reduction time D and to z, the temperature increase needed to change the D value by a factor of 10 in thermal processing, and hence the processing conditions needed to attain a desired level of inactivation; and (b) the apparent thermodynamic volumes of activation associated with the lethal events. The hypothesis that inactivation rates change as a function of the square root of time would be consistent with a diffusion-limited process.
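The core assumption above — a first-order rate that falls off as 1/sqrt(t) — integrates to a survival curve that is linear in sqrt(t): log10(N/N0) = -b*sqrt(t). A minimal sketch of fitting that law (the survival data and the parameter name `b` are hypothetical, not from the paper):

```python
import numpy as np

# Hypothetical pressure-treatment survival data: time t (min) and
# log10(N/N0). The model dN/dt = -(b / (2*sqrt(t))) * N integrates to
# log10(N/N0) = -b * sqrt(t), i.e. a line through the origin in sqrt(t).
t = np.array([0.0, 1.0, 4.0, 9.0, 16.0, 25.0])
y = np.array([0.0, -1.1, -2.0, -2.9, -4.1, -5.0])

s = np.sqrt(t)
# Least-squares slope for the no-intercept fit y = -b * s
b = -(s @ y) / (s @ s)
print(round(b, 3))
```

A convex-upward curve on a log-survivor vs. time plot becomes a straight line when replotted against sqrt(t), which is how such a model is usually checked against data.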
Abstract:
The potential of clarification questions (CQs) to act as a form of corrective input for young children's grammatical errors was examined. Corrective responses were operationalized as those occasions when child speech shifted from erroneous to correct (E -> C) contingent on a clarification question. It was predicted that E -> C sequences would prevail over shifts in the opposite direction (C -> E), as can occur in the case of non-error-contingent CQs. This prediction was tested via a standard intervention paradigm, whereby every 60 s a sequence of two clarification requests (either specific or general) was introduced into conversation with a total of 45 2- and 4-year-old children. For 10 categories of grammatical structure, E -> C sequences predominated over their C -> E counterparts, with levels of E -> C shifts increasing after two clarification questions. Children were also more reluctant to repeat erroneous forms than their correct counterparts following the intervention of CQs. The findings provide support for Saxton's prompt hypothesis, which predicts that error-contingent CQs bear the potential to cue recall of previously acquired grammatical forms.
Abstract:
Problematic trace-antecedent relations between deep and surface structure have been a dominant theme in the study of sentence comprehension in agrammatism. We challenge this view and propose that the comprehension deficits in agrammatism for declarative sentences and wh-questions stem from impaired processing at logical form. We present new data from wh-questions and declarative sentences and advance a new hypothesis, which we call the set partition hypothesis. We argue that elements that signal set-partition operations influence sentence comprehension while trace-antecedent relations remain intact. (c) 2007 Elsevier Ltd. All rights reserved.
Abstract:
In this paper, new robust nonlinear model construction algorithms for a large class of linear-in-the-parameters models are introduced to enhance model robustness, including three algorithms that combine A-optimality, D-optimality or the PRESS statistic (Predicted REsidual Sum of Squares), respectively, with the regularised orthogonal least squares algorithm. A common characteristic of these algorithms is that the inherent computational efficiency associated with the orthogonalisation scheme in orthogonal least squares or regularised orthogonal least squares has been retained, so that the new algorithms are computationally efficient. A numerical example is included to demonstrate the effectiveness of the algorithms. Copyright (c) 2003 IFAC.
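For a linear-in-the-parameters model, the PRESS statistic mentioned above can be computed without refitting the model n times: each leave-one-out residual equals the ordinary residual divided by (1 - h_ii), where h_ii is the i-th leverage from the hat matrix. A sketch on synthetic data (the design matrix and coefficients are illustrative, not the paper's model terms):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                # linear-in-the-parameters terms
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta                            # ordinary residuals
h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)  # leverages h_ii

# PRESS = sum of squared leave-one-out residuals, in one pass
press = np.sum((e / (1.0 - h)) ** 2)
```

This closed form is what makes PRESS cheap enough to use as a term selection criterion inside an orthogonal-least-squares loop, rather than a separate cross-validation stage.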
Abstract:
This paper presents a parallel Two-Pass Hexagonal (TPA) algorithm, constituted by the Linear Hashtable Motion Estimation Algorithm (LHMEA) and Hexagonal Search (HEXBS), for motion estimation. In the TPA, motion vectors (MVs) are generated by the first-pass LHMEA and used as predictors for the second-pass HEXBS motion estimation, which searches only a small number of macroblocks (MBs). We introduce the hashtable into video processing and complete a parallel implementation. We propose and evaluate parallel implementations of the LHMEA of the TPA on clusters of workstations for real-time video compression, discussing how parallel video coding on load-balanced multiprocessor systems can help, especially for motion estimation. The effect of load balancing on performance is discussed. The performance of the algorithm is evaluated using standard video sequences, and the results are compared to current algorithms.
Abstract:
This paper presents a parallel Linear Hashtable Motion Estimation Algorithm (LHMEA). Most parallel video compression algorithms focus on the Group of Pictures (GOP) level. Based on the LHMEA we proposed earlier [1][2], we developed a parallel motion estimation algorithm that exploits parallelism within a frame. We divide each reference frame into equally sized regions, which are processed in parallel to increase the encoding speed significantly. The theoretical and measured speed-ups of the parallel LHMEA as a function of the number of PCs in the cluster are compared and discussed. Motion vectors (MVs) are generated by the first-pass LHMEA and used as predictors for the second-pass Hexagonal Search (HEXBS) motion estimation, which searches only a small number of macroblocks (MBs). We evaluated a distributed parallel implementation of the LHMEA of the TPA for real-time video compression.
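The frame-partitioning idea can be sketched as follows: split the rows of blocks into equal regions and run block matching for each region concurrently. This toy version uses a plain full-search SAD matcher rather than the LHMEA/HEXBS passes; the block size, search range and region count are illustrative:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

BLOCK, SEARCH = 8, 4  # illustrative block size and search range

def sad_search(ref, cur, by, bx):
    """Full-search SAD block matching for the block at (by, bx) in cur."""
    block = cur[by:by + BLOCK, bx:bx + BLOCK]
    best, best_mv = np.inf, (0, 0)
    for dy in range(-SEARCH, SEARCH + 1):
        for dx in range(-SEARCH, SEARCH + 1):
            y, x = by + dy, bx + dx
            if 0 <= y <= ref.shape[0] - BLOCK and 0 <= x <= ref.shape[1] - BLOCK:
                sad = np.abs(ref[y:y + BLOCK, x:x + BLOCK] - block).sum()
                if sad < best:
                    best, best_mv = sad, (dy, dx)
    return best_mv

def region_mvs(ref, cur, rows):
    """Motion vectors for every block whose top row is in `rows`."""
    return [sad_search(ref, cur, by, bx)
            for by in rows
            for bx in range(0, cur.shape[1], BLOCK)]

def parallel_me(ref, cur, n_regions=4):
    """Partition block rows into n_regions and match them concurrently."""
    rows = list(range(0, cur.shape[0], BLOCK))
    chunks = [rows[i::n_regions] for i in range(n_regions)]
    with ThreadPoolExecutor(n_regions) as ex:
        results = ex.map(lambda c: region_mvs(ref, cur, c), chunks)
    return [mv for part in results for mv in part]
```

Because the regions share only read-only frame data, no synchronization is needed during the search; in the paper's cluster setting the same split would be distributed across PCs rather than threads.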
Abstract:
Dense deployments of wireless local area networks (WLANs) are fast becoming a permanent feature of all developed cities around the world. While this increases capacity and coverage, the problem of increased interference, which is exacerbated by the limited number of channels available, can severely degrade the performance of WLANs if an effective channel assignment scheme is not employed. In an earlier work, an asynchronous, distributed and dynamic channel assignment scheme was proposed that (1) is simple to implement, (2) does not require any knowledge of the throughput function, and (3) allows asynchronous channel switching by each access point (AP). In this paper, we present an extensive performance evaluation of this scheme when it is deployed in the more practical non-uniform and dynamic topology scenarios. Specifically, we investigate its effectiveness (1) when APs are deployed in a non-uniform fashion, resulting in some APs suffering from higher levels of interference than others, and (2) when APs are effectively switched 'on/off' due to the availability or lack of traffic at different times, which creates a dynamically changing network topology. Simulation results based on actual WLAN topologies show that robust performance gains over other channel assignment schemes can still be achieved even in these realistic scenarios.
Abstract:
This paper is directed at advanced parallel quasi-Monte Carlo (QMC) methods for realistic image synthesis. We propose and consider a new QMC approach for solving the rendering equation with uniform separation. First, we apply the symmetry property to uniformly separate the hemispherical integration domain into 24 equal sub-domains of solid angles, subtended by orthogonal spherical triangles with fixed vertices and computable parameters. The uniform separation allows us to apply a parallel sampling scheme for numerical integration. Finally, we apply the stratified QMC integration method for solving the rendering equation. The superiority of our QMC approach is demonstrated.
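The stratified-QMC idea — partition the domain into equal sub-domains, place a low-discrepancy point set in each, and sum the per-stratum estimates (each of which could be computed in parallel) — can be sketched in 1D. The 24 strata mirror the paper's 24 sub-domains, but the integrand and the van der Corput sequence are illustrative stand-ins for the rendering-equation kernel and sampling scheme:

```python
import numpy as np

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput low-discrepancy sequence."""
    pts = []
    for i in range(n):
        q, denom, x = i, 1.0, 0.0
        while q:
            q, r = divmod(q, base)
            denom *= base
            x += r / denom
        pts.append(x)
    return np.array(pts)

def stratified_qmc(f, n_strata=24, per_stratum=64):
    """Estimate the integral of f over [0, 1] with QMC points per stratum."""
    u = van_der_corput(per_stratum)
    total = 0.0
    for k in range(n_strata):            # equal sub-domains [k/n, (k+1)/n)
        a = k / n_strata
        x = a + u / n_strata             # map QMC points into the stratum
        total += f(x).mean() / n_strata  # per-stratum estimate (parallelizable)
    return total

est = stratified_qmc(lambda x: np.cos(x))  # exact value is sin(1)
```

Since each stratum's estimate depends only on its own sample points, the per-stratum loop maps directly onto the parallel sampling scheme the abstract describes.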