Abstract:
BACKGROUND Approximately 50% of patients with stage 3 Chronic Kidney Disease are 25-hydroxyvitamin D insufficient, and this prevalence increases with falling glomerular filtration rate. Vitamin D is now recognised as having pleiotropic roles beyond bone and mineral homeostasis, with the vitamin D receptor and metabolising machinery identified in multiple tissues. Worryingly, recent observational data have highlighted an association between hypovitaminosis D and increased cardiovascular mortality, possibly mediated via vitamin D effects on insulin resistance and inflammation. The main hypothesis of this study is that oral vitamin D supplementation will ameliorate insulin resistance in patients with Chronic Kidney Disease stage 3 when compared to placebo. Secondary hypotheses will test whether this is associated with decreased inflammation and bone/adipocyte-endocrine dysregulation. METHODS/DESIGN This study is a single-centre, double-blinded, randomised, placebo-controlled trial. Inclusion criteria include: estimated glomerular filtration rate 30–59 ml/min/1.73 m²; age ≥18 years on entry to the study; and serum 25-hydroxyvitamin D level <75 nmol/L. Patients will be randomised 1:1 to receive either oral cholecalciferol 2000 IU/day or placebo for 6 months. The primary outcome will be an improvement in insulin sensitivity, measured by hyperinsulinaemic euglycaemic clamp. Secondary outcome measures will include serum parathyroid hormone, cytokines (interleukin-1β, interleukin-6, tumour necrosis factor-α), adiponectin (total and high molecular weight), osteocalcin (carboxylated and under-carboxylated), peripheral blood mononuclear cell Nuclear Factor kappa-B p65 binding activity, brachial artery reactivity, aortic pulse wave velocity and waveform analysis, and indirect calorimetry. All outcome measures will be performed at baseline and at the end of the study.
DISCUSSION To date, no randomised controlled trial has been performed in pre-dialysis CKD patients to study the relationship between vitamin D status, supplementation, insulin resistance and markers of adverse cardiovascular risk. We remain hopeful that cholecalciferol may be a safe intervention, with health benefits beyond those related to bone-mineral homeostasis. TRIAL REGISTRATION Australian and New Zealand Clinical Trials Registry ACTRN12609000246280.
Abstract:
The choice of ethanol (C2H5OH) as a carbon source in the chemical vapor deposition (CVD) of graphene on copper foils is an attractive alternative to the commonly used hydrocarbons, such as methane (CH4) [1]. Ethanol, a safe, low-cost and easy-to-handle liquid precursor, offers fast and efficient growth kinetics, with the synthesis of fully-formed graphene films in just a few seconds [2]. In previous studies of graphene growth from ethanol, various research groups explored temperature ranges lower than the 1000 °C usually reported for methane-assisted CVD. In particular, the 650–850 °C and 900 °C ranges were investigated, for 5 and 30 min growth times respectively [3, 4]. Recently, our group reported the growth of highly-crystalline, few-layer graphene by ethanol-CVD in hydrogen flow (1–100 sccm) at high temperatures (1000–1070 °C) using growth times typical of CH4-assisted synthesis (10–30 min) [5]. Synthesis times between 20 and 60 s under the same conditions were also explored. In such fast growth we demonstrated that fully-formed graphene films can be grown by exposing copper foils to a low partial pressure of ethanol (up to 2 Pa) in just 20 s [6], and we proposed that the rapid growth is related to an increase in the efficiency of the Cu catalyst due to the weak oxidizing nature of ethanol. Thus, the use of this liquid precursor in small concentrations, together with a reduced growth time and very low pressure, leads to highly efficient graphene synthesis. In this way, complete coverage of the copper catalyst surface with high spatial uniformity can be obtained in considerably less time than when using methane.
Abstract:
We have evaluated techniques of estimating animal density through direct counts using line transects during 1988–92 in the tropical deciduous forests of Mudumalai Sanctuary in southern India for four species of large herbivorous mammals, namely, chital (Axis axis), sambar (Cervus unicolor), Asian elephant (Elephas maximus) and gaur (Bos gaurus). Density estimates derived from the Fourier series and half-normal models consistently had the lowest coefficient of variation. These two models also generated similar mean density estimates. For the Fourier series estimator, appropriate cut-off widths for analysing line transect data for the four species are suggested. Grouping data into various distance classes did not produce any appreciable differences in estimates of mean density or their variances, although model fit is generally better when data are placed in fewer groups. The sampling effort needed to achieve a desired precision (coefficient of variation) in the density estimate is derived. A sampling effort of 800 km of transects yielded a 10% coefficient of variation on the density estimate for chital; for the other species a higher effort was needed to achieve this level of precision. There was no statistically significant relationship between the detectability of a group and its size for any species. Density estimates along roads were generally significantly different from those in the interior of the forest, indicating that road-side counts may not be appropriate for most species.
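The half-normal line-transect estimator mentioned above can be sketched in a few lines: with a half-normal detection function g(x) = exp(-x²/2σ²), the maximum-likelihood estimate of σ² is the mean squared perpendicular distance, the effective strip half-width is σ√(π/2), and density is n / (2·L·w). The function name and interface below are illustrative assumptions, not the authors' analysis code:

```python
import math

def halfnormal_density(distances, transect_length):
    """Line-transect density from perpendicular sighting distances,
    assuming a half-normal detection function g(x) = exp(-x^2 / (2 sigma^2))."""
    n = len(distances)
    sigma2 = sum(x * x for x in distances) / n   # MLE of sigma^2
    esw = math.sqrt(sigma2 * math.pi / 2.0)      # effective strip half-width
    return n / (2.0 * transect_length * esw)     # animals per unit area
```

Distances and transect length must share the same unit; a full analysis (e.g. in program DISTANCE) would additionally handle group sizes and grouped distance classes.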
Abstract:
This paper describes an algorithm for "direct numerical integration" of initial value Differential-Algebraic Inequalities (DAI) in a time-stepping fashion, using a sequential quadratic programming (SQP) solver for detecting and satisfying active path constraints at each time step. The activation of a path constraint generally increases the condition number of the Jacobian of the active discretized differential-algebraic equation (DAE), and this difficulty is addressed by a regularization property of the alpha method. The algorithm is locally stable when index-1 and index-2 path constraints and bounds are active. Subject to available regularization, it is seen to be stable for active index-3 path constraints in the numerical examples. For high-index active path constraints, the algorithm uses a user-selectable parameter to perturb the smaller singular values of the Jacobian, with a view to reducing the condition number so that the simulation can proceed. The algorithm can be used as a relatively cheap estimation tool for trajectory and control planning and in the context of model predictive control solutions. It can also be used to generate initial guesses for the optimization variables of inequality path-constrained dynamic optimization problems. The method is illustrated with examples from space vehicle trajectory and robot path planning.
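The singular-value perturbation described above can be illustrated with a short sketch (the interface and the specific floor rule are assumptions, not the paper's implementation): singular values below a user-selectable fraction of the largest one are lifted to that floor, capping the condition number at roughly 1/eps.

```python
import numpy as np

def regularize_jacobian(J, eps):
    """Lift singular values below eps * sigma_max to that floor,
    bounding the condition number of the returned matrix by ~1/eps."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    s_reg = np.maximum(s, eps * s[0])   # s is sorted in descending order
    return U @ np.diag(s_reg) @ Vt
```

Larger eps makes the linear algebra better conditioned but perturbs the system more, which is the trade-off the user-selectable parameter controls.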
Abstract:
Industrial ecology is an important field of sustainability science. It can be applied to study environmental problems in a policy-relevant manner. Industrial ecology uses an ecosystem analogy: it aims at closing the loop of materials and substances while at the same time reducing resource consumption and environmental emissions. Emissions from human activities are related to human interference in material cycles. Carbon (C), nitrogen (N) and phosphorus (P) are essential elements for all living organisms, but in excess they have negative environmental impacts, such as climate change (CO2, CH4, N2O), acidification (NOx) and eutrophication (N, P). Several indirect macro-level drivers affect emissions change. Population and affluence (GDP/capita) often act as upward drivers of emissions. Technology, as emissions per service used, and consumption, as economic intensity of use, may act as drivers resulting in a reduction in emissions. In addition, the development of country-specific emissions is affected by international trade. The aim of this study was to analyse changes in emissions as affected by macro-level drivers in different European case studies. ImPACT decomposition analysis (IPAT identity) was applied as the method in papers I–III. The macro-level perspective was applied to evaluate CO2 emission reduction targets (paper II) and the sharing of greenhouse gas emission reduction targets (paper IV) in the European Union (EU27) up to the year 2020. Data for the study were mainly gathered from official statistics. In all cases, the results were discussed from an environmental policy perspective. The development of nitrogen oxide (NOx) emissions was analysed in the Finnish energy sector over a long time period, 1950–2003 (paper I). Finnish NOx emissions began to decrease in the 1980s as progress in technology, in terms of NOx/energy, curbed the impact of growth in affluence and population.
Carbon dioxide (CO2) emissions related to energy use during 1993–2004 (paper II) were analysed by country and region within the European Union. Considering energy-based CO2 emissions in the European Union, dematerialization and decarbonisation did occur, but not sufficiently to offset population growth and the rapidly increasing affluence during 1993–2004. The development of the nitrogen and phosphorus load from aquaculture in relation to salmonid consumption in Finland during 1980–2007 was examined, including international trade in the analysis (paper III). A regional environmental issue, eutrophication of the Baltic Sea, and a marginal yet locally important source of nutrients was used as a case. Nutrient emissions from Finnish aquaculture decreased from the 1990s onwards: although population, affluence and salmonid consumption steadily increased, aquaculture technology improved and the relative share of imported salmonids increased. According to the sustainability challenge in industrial ecology, the environmental impact of growing population size and affluence should be compensated by improvements in technology (emissions per service used) and by dematerialisation. In the studied cases, the emission intensity of energy production could be lowered for NOx by cleaning the exhaust gases. Reorganization of the structure of energy production, as well as technological innovations, will be essential in lowering the emissions of both CO2 and NOx. Regarding the intensity of energy use, making the combustion of fuels more efficient and reducing energy use are essential. In reducing nutrient emissions from Finnish aquaculture to the Baltic Sea (paper III) through technology, limits set by the biological and physical properties of cultured fish, among other factors, will eventually be faced. Regarding consumption, salmonids are preferred to many other protein sources. Regarding trade, increasing the proportion of imports will outsource the impacts.
Besides improving technology and dematerialization, other viewpoints may also be needed. Reducing the total amount of nutrients cycling in energy systems, and eventually contributing to NOx emissions, needs to be emphasized. Considering aquaculture emissions, nutrient cycles can be partly closed by using local fish as feed to replace imported feed. In particular, the reduction of CO2 emissions in the future is a very challenging task when considering the necessary rates of dematerialisation and decarbonisation (paper II). Climate change mitigation may have to focus on greenhouse gases other than CO2 and on the potential role of biomass as a carbon sink, among other measures. The global population is growing and scaling up the environmental impact. Population issues and growing affluence must be considered when discussing emission reductions. Climate policy has only very recently had an influence on emissions, and strong actions are now called for in climate change mitigation. Environmental policies in general must cover all the regions related to production and impacts in order to avoid the outsourcing of emissions and leakage effects. The macro-level drivers affecting changes in emissions can be identified with the ImPACT framework. Statistics for generally known macro-indicators are currently relatively well available for different countries, and the method is transparent. In the papers included in this study, a similar method was successfully applied to different types of case studies. Using transparent macro-level figures and a simple top-down approach is also appropriate for evaluating and setting international emission reduction targets, as demonstrated in papers II and IV. The projected rates of population and affluence growth are especially worth consideration in setting targets. However, sensitivities in the calculations must be carefully acknowledged. In the basic form of the ImPACT model, the economic intensity of consumption and the emission intensity of use are included.
To examine consumption, and also international trade, in more detail, imports were included in paper III. This example demonstrates well how the outsourcing of production influences domestic emissions. Country-specific production-based emissions have often been used in similar decomposition analyses. Nevertheless, trade-related issues must not be ignored.
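The ImPACT identity underlying the papers above writes emissions as the product Im = P × A × C × T (population, affluence as GDP per capita, economic intensity of consumption as service per GDP, and emission intensity as emissions per service). A minimal sketch of the multiplicative decomposition between two years (names and interface are illustrative, not the papers' code):

```python
def impact_decomposition(base, target):
    """Change ratio of each ImPACT driver between two years; their
    product equals the overall change ratio of emissions (Im = P*A*C*T)."""
    drivers = ("P", "A", "C", "T")
    ratios = {k: target[k] / base[k] for k in drivers}
    emissions_ratio = 1.0
    for r in ratios.values():
        emissions_ratio *= r
    return ratios, emissions_ratio
```

For example, a 10% rise in population and a 20% rise in affluence are offset only if the combined C·T improvement reaches a factor of 1/(1.1 × 1.2), which is the sense in which technology and consumption intensity must compensate for P and A.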
Abstract:
Quantification of pyridoxal-5′-phosphate (PLP) in biological samples is challenging due to the presence of endogenous PLP in the matrices used for the preparation of calibrators and quality control samples (QCs). Hence, we have developed an LC-MS/MS method for accurate and precise measurement of PLP concentrations in samples (20 µL) of human whole blood that addresses this issue by using a surrogate matrix and minimizing the matrix effect. We used a surrogate matrix comprising 2% bovine serum albumin (BSA) in phosphate-buffered saline (PBS) for making calibrators and QCs, and the concentrations were adjusted to include the endogenous PLP concentration in the surrogate matrix according to the method of standard addition. PLP was separated from the other components of the sample matrix by protein precipitation with trichloroacetic acid 10% w/v. After centrifugation, supernatants were injected directly into the LC-MS/MS system. Calibration curves were linear and recovery was >92%. QCs were accurate, precise, and stable for four freeze-thaw cycles and following storage at room temperature for 17 h or at −80 °C for 3 months. There was no significant matrix effect across 9 different individual human blood samples. Our novel LC-MS/MS method satisfied all of the criteria specified in the 2012 EMEA guideline on bioanalytical method validation.
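The method of standard addition referred to above can be sketched numerically: known amounts of analyte are spiked into the matrix, the instrument response is regressed linearly on the added concentration, and the endogenous concentration is read off as the magnitude of the x-intercept (b/m). This is a generic illustration of the calculation, not the validated method itself:

```python
def standard_addition_endogenous(added, response):
    """Least-squares fit response = m*added + b; the endogenous
    concentration equals b/m (magnitude of the x-intercept)."""
    n = len(added)
    sx, sy = sum(added), sum(response)
    sxx = sum(x * x for x in added)
    sxy = sum(x * y for x, y in zip(added, response))
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return b / m
```

This works because the unspiked sample already contains the endogenous analyte, so the fitted line crosses zero response at a negative added concentration whose magnitude is the endogenous level.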
Abstract:
The main obstacle to the application of high-quality diamond-like carbon (DLC) coatings has been the lack of adhesion to the substrate as the coating thickness is increased. The aim of this study was to improve the filtered pulsed arc discharge (FPAD) method, with which it is possible to achieve the high DLC coating thicknesses necessary for practical applications. The energy of the carbon ions was measured with an optoelectronic time-of-flight method. An in situ cathode polishing system used for stabilizing the process yield and the carbon ion energies is presented; simultaneously, the quality of the coatings can be controlled. To optimise the quality of the deposition process, a simple, fast and inexpensive method using silicon wafers as test substrates was developed. This method was used for evaluating the suitability of a simplified arc-discharge set-up for the deposition of the adhesion layer of DLC coatings. A whole new group of materials discovered by our research group, the diamond-like carbon polymer hybrid (DLC-p-h) coatings, is also presented. The parent polymers used in these novel coatings were polydimethylsiloxane (PDMS) and polytetrafluoroethylene (PTFE). The energy of the plasma ions was found to increase when the anode-cathode distance and the arc voltage were increased. A constant deposition rate for continuous coating runs was obtained with the in situ cathode polishing system. The novel DLC-p-h coatings were found to be water- and oil-repellent and harder than any polymer. The lowest sliding angle ever measured on a solid surface, 0.15 ± 0.03°, was measured on a DLC-PDMS-h coating. In the FPAD system, carbon ions can be accelerated to the high energies (≈1 keV) necessary for optimal adhesion (the substrate breaks in the adhesion and quality test) of ultra-thick (up to 200 µm) DLC coatings by increasing the anode-cathode distance and using high voltages (up to 4 kV).
Excellent adhesion can also be obtained with the simplified arc-discharge device. To maintain a high process yield (5 µm/h over a surface area of 150 cm²) and to stabilize the carbon ion energies and the high quality (sp3 fraction up to 85%) of the resulting coating, an in situ cathode polishing system must be used. The DLC-PDMS-h coating is a superior candidate coating material for anti-soiling applications where hardness is also required.
Abstract:
Sequence motifs occurring in a particular order in proteins or DNA have proved to be of biological interest. In this paper, a new method to locate the occurrences of up to five user-defined motifs in a specified order in large protein and nucleotide sequence databases is proposed. It has been designed using the concept of quantifiers in regular expressions, with linked lists for data storage. Applications of this method include the extraction of relevant consensus regions from biological sequences. This might be useful in the clustering of protein families as well as in studying the correlation between the positions of motifs and their functional sites in DNA sequences.
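The core idea of ordered-motif search with regular-expression quantifiers can be illustrated in a few lines of Python. This is a sketch under the assumption of literal motifs separated by arbitrary gaps; the paper's linked-list implementation is not reproduced:

```python
import re

def find_ordered_motifs(sequence, motifs):
    """Return the span of the first region containing the given motifs
    in the specified order, with arbitrary gaps, or None if absent."""
    # The lazy quantifier .*? keeps the matched region as short as possible.
    pattern = ".*?".join(re.escape(m) for m in motifs)
    match = re.search(pattern, sequence)
    return match.span() if match else None
```

For example, searching "AAGCTTTTGGATCCAA" for ["GCT", "GGA"] finds a match, while the reversed order ["GGA", "GCT"] does not, which is exactly the order sensitivity the method exploits.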
Abstract:
A new form of a multi-step transversal linearization (MTL) method is developed and numerically explored in this study for a numeric-analytical integration of non-linear dynamical systems under deterministic excitations. As with other transversal linearization methods, the present version also requires that the linearized solution manifold transversally intersects the non-linear solution manifold at a chosen set of points or cross-section in the state space. However, a major point of departure of the present method is that it has the flexibility of treating non-linear damping and stiffness terms of the original system as damping and stiffness terms in the transversally linearized system, even though these linearized terms become explicit functions of time. From this perspective, the present development is closely related to the popular practice of tangent-space linearization adopted in finite element (FE) based solutions of non-linear problems in structural dynamics. The only difference is that the MTL method would require construction of transversal system matrices in lieu of the tangent system matrices needed within an FE framework. The resulting time-varying linearized system matrix is then treated as a Lie element using Magnus’ characterization [W. Magnus, On the exponential solution of differential equations for a linear operator, Commun. Pure Appl. Math., VII (1954) 649–673] and the associated fundamental solution matrix (FSM) is obtained through repeated Lie-bracket operations (or nested commutators). An advantage of this approach is that the underlying exponential transformation could preserve certain intrinsic structural properties of the solution of the non-linear problem. 
Yet another advantage of the transversal linearization lies in the non-unique representation of the linearized vector field – an aspect that has been specifically exploited in this study to enhance the spectral stability of the proposed family of methods and thus contain the temporal propagation of local errors. A simple analysis of the formal orders of accuracy is provided within a finite dimensional framework. Only a limited numerical exploration of the method is presently provided for a couple of popularly known non-linear oscillators, viz. a hardening Duffing oscillator, which has a non-linear stiffness term, and the van der Pol oscillator, which is self-excited and has a non-linear damping term.
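The Magnus-based construction of the fundamental solution matrix over a step can be illustrated with a standard fourth-order Magnus integrator: sample the time-varying matrix at two Gauss nodes and include a single nested commutator (Lie bracket). This is a generic sketch of the cited technique, not the authors' MTL implementation, and the Taylor-series exponential is adequate only for small step sizes:

```python
import numpy as np

def expm_taylor(M, terms=20):
    """Plain truncated-Taylor matrix exponential; fine for small-norm steps."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def magnus4_step(A, t, h):
    """One fourth-order Magnus step for x' = A(t) x: two Gauss nodes
    plus one commutator of the sampled matrices."""
    c = np.sqrt(3.0) / 6.0
    A1 = A(t + (0.5 - c) * h)
    A2 = A(t + (0.5 + c) * h)
    omega = 0.5 * h * (A1 + A2) + (np.sqrt(3.0) * h**2 / 12.0) * (A2 @ A1 - A1 @ A2)
    return expm_taylor(omega)   # fundamental solution matrix over the step
```

The exponential map sends omega into the group, which is the structure-preservation property the text attributes to the exponential transformation: for instance, a skew-symmetric A(t) yields an orthogonal fundamental solution matrix.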
Abstract:
BaZr0.8Y0.2O3−δ (BZY)-NiO composite powders with different BZY-NiO weight ratios were prepared by a combustion method as anodes for proton-conducting solid oxide fuel cells (SOFCs). After heating to 1100 °C for 6 h, the composite powders consisted of a well-dispersed mixture of two phases, BZY and NiO. Chemical stability tests showed that the BZY-NiO anodic powders had good stability against CO2, whereas comparative tests under the same conditions showed degradation for BaCe0.7Zr0.1Y0.2O3−δ-NiO, which is at present the most used anode material for proton-conducting SOFCs. Area specific resistance (ASR) measurements for BZY-NiO anodes showed that their electrochemical performance depended on the BZY-NiO weight ratio. The best performance was obtained for the anode containing 50 wt% BZY and 50 wt% NiO, which showed the smallest ASR values over the whole testing temperature range (0.37 Ω cm² at 600 °C). The 50 wt% BZY-50 wt% NiO anode prepared by combustion also showed performance superior to that of the BZY-NiO anode conventionally made by a mechanical mixing route, as well as to that of Pt.
Abstract:
The concept of the domain integral used extensively for the J integral has been applied in this work to the formulation of the J2 integral for a linear elastic bimaterial body containing a crack at the interface and subjected to thermal loading. It is shown that, in the presence of thermal stresses, the Jk domain integral over a closed path which does not enclose singularities is a function of temperature and body force. A method is proposed to compute the stress intensity factors for a bimaterial interface crack subjected to thermal loading by combining this domain integral with the Jk integral. The proposed method is validated by solving standard problems with known solutions.
Abstract:
Scalable video coding (SVC) is an emerging standard built on the success of the advanced video coding standard (H.264/AVC) by the Joint Video Team (JVT). Motion-compensated temporal filtering (MCTF) and closed-loop hierarchical B pictures (CHBP) are two important coding methods proposed during the initial stages of standardization. Either coding method, MCTF or CHBP, may perform better depending on the noise content and characteristics of the sequence. This work identifies further characteristics of sequences for which the performance of MCTF is superior to that of CHBP and presents a method to adaptively select either MCTF or CHBP at the GOP level. This method, referred to as "Adaptive Decomposition", is shown to provide better R-D performance than using MCTF or CHBP alone. Further, this method is extended to non-scalable coders.
Abstract:
The need for paying with mobile devices has spurred the development of payment systems for mobile electronic commerce. In this paper we consider the detection of two important abuses in electronic payment systems: fraud, which is an intentional deception accomplished to secure an unfair gain, and intrusion, which is any set of actions that attempts to compromise the integrity, confidentiality or availability of a resource. Most of the available fraud and intrusion detection systems for e-payments are specific to the systems in which they have been incorporated. This paper proposes a generic model, called the Activity-Event-Symptoms (AES) model, for detecting fraud and intrusion attacks that appear during the payment process in a mobile commerce environment. The AES model is designed to identify the symptoms of fraud and intrusion by observing the various events/transactions occurring during a mobile commerce activity. Symptom identification is followed by computing suspicion factors for event attributes, and the certainty factor for a fraud or intrusion is generated from these suspicion factors. We have tested the proposed system by conducting various case studies on an in-house mobile commerce environment over a wired and wireless network test bed.
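The abstract does not specify how suspicion factors are combined into a certainty factor; one common choice for accumulating independent positive evidence is the MYCIN-style rule CF ← CF + s·(1 − CF). The sketch below is purely illustrative under that assumption and is not the AES model's actual rule:

```python
def combine_certainty(suspicion_factors):
    """Fold suspicion factors (each in [0, 1]) into a single certainty
    factor using the MYCIN combination rule for positive evidence."""
    cf = 0.0
    for s in suspicion_factors:
        cf = cf + s * (1.0 - cf)   # each new symptom raises cf toward 1
    return cf
```

A useful property of this rule is that the result never exceeds 1 and is insensitive to the order in which symptoms are observed.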
Abstract:
Non-standard finite difference methods (NSFDM) introduced by Mickens [Non-standard Finite Difference Models of Differential Equations, World Scientific, Singapore, 1994] are interesting alternatives to the traditional finite difference and finite volume methods. When applied to linear hyperbolic conservation laws, these methods reproduce exact solutions. In this paper, the NSFDM is first extended to hyperbolic systems of conservation laws by a novel utilization of the decoupled equations using characteristic variables. In the second part of the paper, the NSFDM is studied for its efficacy in application to nonlinear scalar hyperbolic conservation laws. The original NSFDMs introduced by Mickens (1994) were not in conservation form, which is an important feature for capturing discontinuities at the right locations. Mickens [Construction and analysis of a non-standard finite difference scheme for the Burgers–Fisher equations, Journal of Sound and Vibration 257 (4) (2002) 791–797] recently introduced an NSFDM in conservative form. This method captures shock waves exactly, without any numerical dissipation. In this paper, this algorithm is tested for the case of expansion waves with sonic points and is found to generate unphysical expansion shocks. As a remedy to this defect, we use the strategy of composite schemes [R. Liska, B. Wendroff, Composite schemes for conservation laws, SIAM Journal on Numerical Analysis 35 (6) (1998) 2250–2271], in which the accurate NSFDM is used as the basic scheme and a localized relaxation NSFDM is used as the supporting scheme, which acts like a filter. Relaxation schemes introduced by Jin and Xin [The relaxation schemes for systems of conservation laws in arbitrary space dimensions, Communications on Pure and Applied Mathematics 48 (1995) 235–276] are based on relaxation systems, which replace the nonlinear hyperbolic conservation laws by a semi-linear system with a stiff relaxation term.
The relaxation parameter (λ) is chosen locally on the three-point stencil of the grid, which makes the proposed method more efficient. This composite scheme overcomes the problem of unphysical expansion shocks and captures shock waves with better accuracy than the upwind relaxation scheme, as demonstrated by the test cases, together with comparisons with popular numerical methods such as the Roe scheme and ENO schemes.
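The exactness property for linear hyperbolic equations mentioned above can be seen in the simplest setting: for u_t + a u_x = 0 on a periodic grid, the upwind update with the non-standard step choice Δt = Δx/a (unit CFL) degenerates to a pure shift by one cell, i.e. the exact solution. This minimal illustration is not the paper's scheme, only the property NSFD constructions aim to generalize:

```python
def upwind_step(u, cfl):
    """One periodic upwind step u_j <- u_j - cfl*(u_j - u_{j-1});
    at cfl == 1 this is an exact shift of the profile by one cell."""
    # u[j - 1] with j == 0 wraps to u[-1], giving periodic boundaries.
    return [u[j] - cfl * (u[j] - u[j - 1]) for j in range(len(u))]
```

For cfl < 1 the same update smears the profile (numerical dissipation), which is why the non-standard time-step choice is what makes the scheme exact rather than merely stable.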
Abstract:
In our earlier work [1], we employed MVDR (minimum variance distortionless response) based spectral estimation instead of the modified linear prediction method [2] for pitch modification. Here, we use the Bauer method of MVDR spectral factorization, leading to a causal inverse filter rather than the noncausal filter setup obtained with MVDR spectral estimation [1]. This is then employed to obtain the source (or residual) signal from pitch-synchronous speech frames. The residual signal is resampled using the DCT/IDCT according to the target pitch-scale factor. Finally, forward filters realized from the above factorization are used to obtain the pitch-modified speech. The modified speech was evaluated subjectively by 10 listeners and the mean opinion scores (MOS) are tabulated. Further, the modified bark spectral distortion measure was also computed for objective evaluation of performance. We find that the proposed algorithm performs better than time-domain pitch-synchronous overlap-add [3] and the modified-LP method [2]. A good MOS score is achieved with the proposed algorithm, compared to [1], with a causal inverse and forward filter setup.
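DCT-based resampling of the residual can be sketched as follows: take the DCT-II of a pitch-synchronous frame, truncate or zero-pad the coefficients to the target length, and invert with a DCT-III at the new length. The naive O(n²) transform below illustrates the general idea only; it is not the authors' implementation, and a real system would use a fast DCT:

```python
import numpy as np

def dct_resample(frame, new_len):
    """Resample a residual frame by truncating/zero-padding its DCT-II
    coefficients and inverting with a DCT-III at the target length."""
    frame = np.asarray(frame, dtype=float)
    n = len(frame)
    k = np.arange(n)
    # forward DCT-II: X_k = sum_j x_j * cos(pi*k*(2j+1) / (2n))
    coef = np.cos(np.pi * np.outer(k, 2 * np.arange(n) + 1) / (2 * n)) @ frame
    out = np.zeros(new_len)
    out[: min(n, new_len)] = coef[: min(n, new_len)]   # truncate or zero-pad
    # inverse (DCT-III) evaluated at the new length
    j = np.arange(new_len)
    basis = np.cos(np.pi * np.outer(2 * j + 1, np.arange(new_len)) / (2 * new_len))
    return (out[0] / 2.0 + basis[:, 1:] @ out[1:]) * (2.0 / n)
```

Shortening the frame raises the local pitch and lengthening lowers it, since the spectral envelope carried by the low-order coefficients is preserved while the frame duration changes.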