985 results for Closed-chamber IRGA method


Relevance:

40.00%

Publisher:

Abstract:

The tremendous application potential of nanosized materials stands in sharp contrast to a growing number of critical reports of their potential toxicity. Applications of in vitro methods to assess nanoparticles are severely limited by the difficulty of exposing cells of the respiratory tract directly to airborne engineered nanoparticles. We present a completely new approach that exposes lung cells to particles generated in situ by flame spray synthesis. Cerium oxide nanoparticles were produced in a single run and simultaneously deposited onto the surface of cultured lung cells inside a glovebox. Separately collected samples were used to measure the hydrodynamic particle size distribution, shape, and agglomerate morphology. Cell viability was not impaired by the conditions of the glovebox exposure. The tightness of the lung cell monolayer, the mean total lamellar body volume, and the generation of oxidative DNA damage revealed a dose-dependent cellular response to the airborne engineered nanoparticles. The direct combination of production and exposure allows particle toxicity to be studied in a simple and reproducible way under environmental conditions.

Relevance:

40.00%

Publisher:

Abstract:

Background Evaluation of anterior chamber depth (ACD) can potentially identify those patients at risk of angle-closure glaucoma. We aimed to: compare van Herick’s limbal chamber depth (LCDvh) grades with LCDorb grades calculated from the Orbscan anterior chamber angle values; determine Smith’s technique ACD and compare it to Orbscan ACD; and calculate a constant for Smith’s technique using Orbscan ACD. Methods Eighty participants free from eye disease underwent LCDvh grading, Smith’s technique ACD, and Orbscan anterior chamber angle and ACD measurement. Results LCDvh overestimated grades by a mean of 0.25 (coefficient of repeatability [CR] 1.59) compared to LCDorb. Smith’s technique (constants 1.40 and 1.31) overestimated ACD by a mean of 0.33 mm (CR 0.82) and 0.12 mm (CR 0.79) respectively, compared to Orbscan. Using linear regression, we determined a constant of 1.22 for Smith’s slit-length method. Conclusions Smith’s technique (constant 1.31) provided an ACD closer to that found with Orbscan than a constant of 1.40 or LCDvh did. Our findings also suggest that Smith’s technique would produce values closer to those obtained with Orbscan by using a constant of 1.22.
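The constant-fitting step described above can be sketched numerically. Smith’s slit-length technique models ACD as a constant times the measured slit-beam length, so a constant can be recovered from paired Orbscan ACDs by least-squares regression through the origin. This is only an illustration of the regression idea, not the study’s actual calculation; the toy data below are invented, chosen so the fitted constant lands near the 1.22 reported in the abstract.

```python
# Hedged sketch: Smith's slit-length technique models anterior chamber
# depth (ACD) as constant * slit-beam length, so a constant can be fitted
# to paired Orbscan ACDs by least squares through the origin.
# All numbers below are invented for illustration.
slit_lengths = [2.4, 2.6, 2.2, 2.8, 2.5]        # slit-beam lengths, mm
orbscan_acd = [2.93, 3.17, 2.68, 3.42, 3.05]    # Orbscan ACDs, mm

def fit_constant(slits, acds):
    """Least-squares slope through the origin: minimises sum (acd - c*slit)^2."""
    return sum(a * s for a, s in zip(acds, slits)) / sum(s * s for s in slits)

c = fit_constant(slit_lengths, orbscan_acd)
predicted_acd = [c * s for s in slit_lengths]   # Smith-style ACD estimates
```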

Relevance:

30.00%

Publisher:

Abstract:

Improving the performance of an incident detection system is essential to minimize the effects of incidents. This paper proposes a new incident detection method based on an in-car terminal consisting of a GPS module, a GSM module and a control module, as well as optional parts such as airbag sensors and a mobile phone positioning system (MPPS) module. When a driver or vehicle discovers a freeway incident and initiates an alarm report, the incident location, determined by GPS, MPPS or both, is automatically sent to a transport management center (TMC); the TMC then confirms the accident with closed-circuit television (CCTV) or other approaches. In this method, detection rate (DR), time to detect (TTD) and false alarm rate (FAR) are the key performance targets. Finally, feasible means such as a management mode, an education mode and suitable accident-confirming approaches are put forward to improve these targets.
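The three performance targets named in this abstract can be made concrete with one common set of definitions (formulations of FAR in particular vary between studies, so treat these as an assumption; all counts and times below are hypothetical):

```python
# Hedged illustration of the performance targets from the abstract,
# using one common set of definitions. All numbers are hypothetical.
def detection_rate(detected_incidents, total_incidents):
    """DR: fraction of actual incidents that were detected."""
    return detected_incidents / total_incidents

def false_alarm_rate(false_alarms, total_alarms):
    """FAR: fraction of raised alarms that were false."""
    return false_alarms / total_alarms

def mean_time_to_detect(detection_times, occurrence_times):
    """TTD: mean delay between incident occurrence and its detection."""
    delays = [d - o for d, o in zip(detection_times, occurrence_times)]
    return sum(delays) / len(delays)

dr = detection_rate(18, 20)                           # 0.9
far = false_alarm_rate(3, 21)                         # 1/7
ttd = mean_time_to_detect([65.0, 80.0], [0.0, 10.0])  # 67.5 seconds
```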

Relevance:

30.00%

Publisher:

Abstract:

A statistical modeling method to accurately determine combustion chamber resonance is proposed and demonstrated. This method utilises Markov-chain Monte Carlo (MCMC) via the Metropolis-Hastings (MH) algorithm to yield a probability density function for the combustion chamber frequency and to find the best estimate of the resonant frequency along with its uncertainty. The accurate determination of combustion chamber resonance is then used to investigate various engine phenomena, with appropriate uncertainty, for a range of engine cycles. It is shown that, when operating on various ethanol/diesel fuel combinations, a 20% substitution yields the least inter-cycle variability in combustion chamber resonance.
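The MH-based estimation idea can be sketched as follows: sample a posterior over the resonant frequency given per-cycle frequency observations, then report the posterior mean and spread. The observed values, noise level and proposal width below are invented for illustration, not data from the paper.

```python
import random
import math

# Hedged sketch of Metropolis-Hastings sampling of a resonant-frequency
# posterior. Observations, noise level and proposal width are invented.
random.seed(1)
observed = [4.71, 4.68, 4.73, 4.70, 4.69]   # per-cycle estimates, kHz (made up)
sigma = 0.05                                 # assumed measurement noise, kHz

def log_likelihood(f):
    # Gaussian likelihood with an implicit flat prior over plausible frequencies
    return -sum((x - f) ** 2 for x in observed) / (2.0 * sigma ** 2)

f_current, samples = 4.5, []
for _ in range(20000):
    f_prop = f_current + random.gauss(0.0, 0.02)    # symmetric random-walk proposal
    accept_p = math.exp(min(0.0, log_likelihood(f_prop) - log_likelihood(f_current)))
    if random.random() < accept_p:
        f_current = f_prop                           # accept the move
    samples.append(f_current)

burn_in = samples[5000:]                     # discard early, unconverged samples
mean_f = sum(burn_in) / len(burn_in)         # best estimate of resonant frequency
spread = (sum((s - mean_f) ** 2 for s in burn_in) / len(burn_in)) ** 0.5  # uncertainty
```

The histogram of `burn_in` approximates the probability density function of the chamber frequency that the abstract refers to.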

Relevance:

30.00%

Publisher:

Abstract:

In the study of traffic safety, expected crash frequencies across sites are generally estimated via the negative binomial model, assuming time-invariant safety. Since the time-invariant safety assumption may be invalid, Hauer (1997) proposed a modified empirical Bayes (EB) method. Despite the modification, no attempts have been made to examine the generalisable form of the marginal distribution resulting from the modified EB framework. Because the hyper-parameters needed to apply the modified EB method are not readily available, an assessment is lacking of how accurately the modified EB method estimates safety in the presence of time-variant safety and regression-to-the-mean (RTM) effects. This study derives the closed-form marginal distribution, and reveals that the marginal distribution in the modified EB method is equivalent to the negative multinomial (NM) distribution, which is essentially the same as the likelihood function used in the random effects Poisson model. As a result, this study shows that the gamma posterior distribution from the multivariate Poisson-gamma mixture can be estimated using the NM model or the random effects Poisson model. This study also shows that the estimation errors from the modified EB method are systematically smaller than those from the comparison group method, by simultaneously accounting for the RTM and time-variant safety effects. Hence, the modified EB method via the NM model is a generalisable method for estimating safety in the presence of time-variant safety and RTM effects.
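For orientation, the standard EB safety estimate that the modified method builds on can be sketched as a weighted average of the negative binomial model prediction and the observed crash count. This toy version uses a single period and one common parameterisation of the weight; the paper’s modification for time-variant safety is not reproduced, and all numbers are invented.

```python
# Hedged sketch of the standard (time-invariant) empirical Bayes estimator
# used with negative binomial crash models. The weight parameterisation
# below is a common convention, assumed here for illustration.
def eb_estimate(predicted, observed, dispersion):
    """w*predicted + (1-w)*observed, with w = 1 / (1 + predicted/dispersion)."""
    w = 1.0 / (1.0 + predicted / dispersion)
    return w * predicted + (1.0 - w) * observed

est = eb_estimate(predicted=4.0, observed=9, dispersion=2.0)
# the observed count is shrunk toward the model prediction,
# counteracting regression-to-the-mean
```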

Relevance:

30.00%

Publisher:

Abstract:

In many applications, e.g., bioinformatics, web access traces, and system utilisation logs, the data are naturally in the form of sequences. There is great interest in analysing such sequential data to find its inherent characteristics and the relationships within it. Sequential association rule mining is one method for analysing this data. Conventional sequential association rule mining very often generates a huge number of association rules, many of which are redundant, so it is desirable to eliminate those unnecessary rules. Because of the complexity and temporally ordered nature of sequential data, current research on sequential association rule mining is limited. Although several sequential association rule prediction models using either sequence constraints or temporal constraints have been proposed, none of them considers the redundancy problem in rule mining. The main contribution of this research is a non-redundant association rule mining method based on closed frequent sequences and minimal sequential generators. We also define non-redundant sequential rules as sequential rules with minimal antecedents but maximal consequents. A new algorithm called CSGM (closed sequential and generator mining) for generating closed sequences and minimal sequential generators is also introduced. Experiments compare the performance of generating non-redundant sequential rules against full sequential rules, and evaluate CSGM against other closed sequential pattern mining or generator mining algorithms. We also use the generated non-redundant sequential rules for query expansion, in order to improve recommendations for infrequently purchased products.
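The notion of a closed frequent sequence that CSGM is built on can be illustrated by brute force: a frequent sequence is closed if no longer frequent sequence has the same support. The toy database below is invented, and this enumeration only demonstrates the definition; CSGM-style miners prune the search space far more aggressively.

```python
from itertools import combinations

# Hedged toy illustration of closed frequent sequences on an invented
# database. Brute-force enumeration; not the CSGM algorithm itself.
db = [["a", "b", "c"], ["a", "b"], ["a", "c"], ["a", "b", "c"]]
min_sup = 2

def is_subseq(sub, seq):
    """True if `sub` occurs in `seq` in order (not necessarily contiguously)."""
    it = iter(seq)
    return all(item in it for item in sub)

# enumerate every subsequence of every database sequence as a candidate
candidates = set()
for seq in db:
    for r in range(1, len(seq) + 1):
        for idx in combinations(range(len(seq)), r):
            candidates.add(tuple(seq[i] for i in idx))

support = {c: sum(is_subseq(c, s) for s in db) for c in candidates}
frequent = {c: n for c, n in support.items() if n >= min_sup}
closed = {c: n for c, n in frequent.items()
          if not any(len(d) > len(c) and m == n and is_subseq(c, d)
                     for d, m in frequent.items())}
# e.g. ("b",) is frequent but not closed: ("a", "b") has the same support
```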

Relevance:

30.00%

Publisher:

Abstract:

Client owners usually need an estimate or forecast of their likely building costs in advance of detailed design in order to confirm the financial feasibility of their projects. Because of their timing in the project life cycle, these early stage forecasts are characterized by the minimal amount of information available concerning the new (target) project to the point that often only its size and type are known. One approach is to use the mean contract sum of a sample, or base group, of previous projects of a similar type and size to the project for which the estimate is needed. Bernoulli’s law of large numbers implies that this base group should be as large as possible. However, increasing the size of the base group inevitably involves including projects that are less and less similar to the target project. Deciding on the optimal number of base group projects is known as the homogeneity or pooling problem. A method of solving the homogeneity problem is described involving the use of closed form equations to compare three different sampling arrangements of previous projects for their simulated forecasting ability by a cross-validation method, where a series of targets are extracted, with replacement, from the groups and compared with the mean value of the projects in the base groups. The procedure is then demonstrated with 450 Hong Kong projects (with different project types: Residential, Commercial centre, Car parking, Social community centre, School, Office, Hotel, Industrial, University and Hospital) clustered into base groups according to their type and size.
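The cross-validation idea described above can be sketched in simplified form, here as leave-one-out without replacement: each project is treated in turn as the target and forecast by the mean contract sum of the remaining projects in its base group. The project types are categories from the abstract, but the contract sums are invented.

```python
# Hedged sketch of group-mean forecasting with leave-one-out
# cross-validation. Contract sums are hypothetical, in arbitrary units.
base_groups = {
    "Residential": [210.0, 195.0, 240.0, 225.0],
    "Office": [310.0, 290.0, 330.0],
}

def forecast_errors(groups):
    """Absolute error of the group-mean forecast for each held-out target."""
    errors = []
    for projects in groups.values():
        for i, target in enumerate(projects):
            rest = projects[:i] + projects[i + 1:]
            forecast = sum(rest) / len(rest)     # base-group mean forecast
            errors.append(abs(target - forecast))
    return errors

errors = forecast_errors(base_groups)
mae = sum(errors) / len(errors)  # simulated forecasting ability of this grouping
```

Comparing `mae` across alternative groupings of the same projects is one way to address the pooling problem: the grouping with the lowest error wins.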

Relevance:

30.00%

Publisher:

Abstract:

Purpose Endotracheal suctioning causes significant lung derecruitment. Closed suction (CS) minimizes lung volume loss during suction, and therefore, volumes are presumed to recover more quickly postsuctioning. Conflicting evidence exists regarding this. We examined the effects of open suction (OS) and CS on lung volume loss during suctioning, and recovery of end-expiratory lung volume (EELV) up to 30 minutes postsuction. Material and Methods Randomized crossover study examining 20 patients postcardiac surgery. CS and OS were performed in random order, 30 minutes apart. Lung impedance was measured during suction, and end-expiratory lung impedance was measured at baseline and postsuctioning using electrical impedance tomography. Oximetry, partial pressure of oxygen in the alveoli/fraction of inspired oxygen ratio and compliance were collected. Results Reductions in lung impedance during suctioning were less for CS than for OS (mean difference, −905 impedance units; 95% confidence interval [CI], −1234 to −587; P < .001). However, at all points postsuctioning, EELV recovered more slowly after CS than after OS. There were no statistically significant differences in the other respiratory parameters. Conclusions Closed suctioning minimized lung volume loss during suctioning but, counterintuitively, resulted in slower recovery of EELV postsuction compared with OS. Therefore, the use of CS cannot be assumed to be protective of lung volumes postsuctioning. Consideration should be given to restoring EELV after either suction method via a recruitment maneuver.

Relevance:

30.00%

Publisher:

Abstract:

With the overwhelming increase in the amount of text on the web, it is almost impossible for people to keep abreast of up-to-date information. Text mining is a process by which interesting information is derived from text through the discovery of patterns and trends. Text mining algorithms are used to guarantee the quality of the extracted knowledge. However, extracting patterns with text or data mining algorithms often leads to noisy and inconsistent patterns. Thus, different challenges arise, such as how to understand these patterns, whether the model that has been used is suitable, and whether all the patterns that have been extracted are relevant. Furthermore, the research raises the question of how to assign a correct weight to the extracted knowledge. To address these issues, this paper presents a text post-processing method that uses a pattern co-occurrence matrix to find the relations between extracted patterns in order to reduce noisy patterns. The main objective of this paper is not only to reduce the number of closed sequential patterns, but also to improve the performance of pattern mining. Experimental results on the Reuters Corpus Volume 1 data collection and TREC filtering topics show that the proposed method is promising.
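The co-occurrence-matrix idea can be sketched as follows: count how often each pair of extracted patterns appears in the same document, then treat patterns that rarely co-occur with any other pattern as noise and prune them. The documents, pattern names and threshold below are invented for illustration, not the paper's actual pruning rule.

```python
# Hedged sketch of a pattern co-occurrence matrix used to prune noisy
# patterns. All documents and pattern names are invented.
docs = [
    {"p1", "p2"},
    {"p1", "p2", "p3"},
    {"p1", "p2"},
    {"p4"},          # an isolated pattern: never co-occurs with the others
]
patterns = sorted(set().union(*docs))

cooc = {p: {q: 0 for q in patterns} for p in patterns}
for doc in docs:
    for p in doc:
        for q in doc:
            if p != q:
                cooc[p][q] += 1   # symmetric co-occurrence counts

# prune patterns whose strongest co-occurrence falls below a threshold
threshold = 1
kept = [p for p in patterns if max(cooc[p].values()) >= threshold]
```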

Relevance:

30.00%

Publisher:

Abstract:

In this paper, a model-predictive control (MPC) method is detailed for the control of nonlinear systems with stability considerations. It will be assumed that the plant is described by a local input/output ARX-type model, with the control potentially included in the premise variables, which enables the control of systems that are nonlinear in both the state and control input. Additionally, for the case of set point regulation, a suboptimal controller is derived which has the dual purpose of ensuring stability and enabling finite-iteration termination of the iterative procedure used to solve the nonlinear optimization problem that is used to determine the control signal.
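The receding-horizon principle behind MPC can be illustrated with a deliberately minimal toy: for a scalar plant x_{k+1} = x_k + u_k, search a small control grid over a short horizon for the lowest quadratic cost, apply only the first input, and repeat. The paper’s ARX premise-variable modelling, nonlinear optimization and stability machinery are not reproduced here; the plant, grid and weights are invented.

```python
from itertools import product

# Hedged toy sketch of receding-horizon MPC for a scalar linear plant.
# Brute-force search over a control grid stands in for the paper's
# nonlinear optimization; all values are invented.
def mpc_step(x, horizon=3, grid=(-1.0, -0.5, 0.0, 0.5, 1.0), setpoint=0.0):
    best_u, best_cost = 0.0, float("inf")
    for seq in product(grid, repeat=horizon):
        xi, cost = x, 0.0
        for u in seq:
            xi = xi + u                                # toy plant model x' = x + u
            cost += (xi - setpoint) ** 2 + 0.1 * u ** 2
        if cost < best_cost:
            best_u, best_cost = seq[0], cost
    return best_u                                      # apply only the first input

x = 2.0
trajectory = [x]
for _ in range(5):
    x += mpc_step(x)                                   # closed-loop simulation
    trajectory.append(x)
# the state is driven to the set point and held there
```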

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a method for designing set-point regulation controllers for a class of underactuated mechanical systems in Port-Hamiltonian System (PHS) form. A new set of potential shape variables in closed loop is proposed, which can replace the set of open-loop shape variables: the configuration variables that appear in the kinetic energy. With this choice, the closed-loop potential energy contains free functions of the new variables. By expressing the regulation objective in terms of these new potential shape variables, the desired equilibrium can be assigned and there is freedom to reshape the potential energy to achieve performance whilst maintaining the PHS form in closed loop. This complements contemporary results in the literature, which preserve the open-loop shape variables. As a case study, we consider a robotic manipulator mounted on a flexible base and compensate for the motion of the base while positioning the end effector with respect to the ground reference. We compare the proposed control strategy with special cases that correspond to other energy shaping strategies previously proposed in the literature.

Relevance:

30.00%

Publisher:

Abstract:

A novel method of spontaneous generation of new adipose tissue from an existing fat flap is described. A defined volume of fat flap based on the superficial inferior epigastric vascular pedicle in the rat was elevated and inset into a hollow plastic chamber implanted subcutaneously in the groin of the rat. The chamber walls were either perforated or solid, and the chambers either contained poly(D,L-lactic-co-glycolic acid) (PLGA) sponge matrix or not. The contents were analyzed after being in situ for 6 weeks. The total volume of the flap tissue increased significantly in all groups except the control groups, where the flap was not inserted into a chamber, especially in the perforated chambers (0.08 ± 0.007 mL at baseline compared to 1.2 ± 0.08 mL in the intact ones). Volume analysis of individual component tissues within the flaps revealed that the adipocyte volume increased and was at a maximum in the chambers without PLGA, where it expanded from 0.04 ± 0.003 mL at insertion to 0.5 ± 0.08 mL (a 1250% increase) in the perforated chambers and to 0.16 ± 0.03 mL (a 400% increase) in the intact chambers. Addition of PLGA scaffolds resulted in less fat growth. Histomorphometric analysis documented an increased number of adipocytes rather than simple hypertrophy. The new tissue was highly vascularized, and no fat necrosis or atypical changes were observed.

Relevance:

30.00%

Publisher:

Abstract:

This paper describes our participation in the Chinese word segmentation task of CIPS-SIGHAN 2010. We implemented an n-gram mutual information (NGMI) based segmentation algorithm with mixed features from unsupervised, supervised and dictionary-based segmentation methods. This algorithm is also combined with a simple strategy for out-of-vocabulary (OOV) word recognition. The evaluation for both open and closed training shows encouraging results for our system. The results for OOV word recognition in the closed training evaluation were, however, found to be unsatisfactory.
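The mutual-information intuition behind NGMI-style segmentation can be sketched with character bigrams: adjacent characters with high pointwise mutual information (PMI) tend to belong to the same word, so low-PMI positions are likely segmentation points. This is a toy on an invented Latin-letter corpus, not the system’s actual n-gram formulation.

```python
import math

# Hedged sketch of pointwise mutual information over character bigrams.
# The corpus is invented; a real segmenter would use Chinese text and
# higher-order n-grams.
corpus = "abab abab abcd"
pairs = [corpus[i:i + 2] for i in range(len(corpus) - 1)
         if " " not in corpus[i:i + 2]]          # bigrams not crossing spaces
chars = [ch for ch in corpus if ch != " "]

def pmi(bigram):
    """log P(xy) / (P(x) P(y)) over the toy corpus."""
    p_xy = pairs.count(bigram) / len(pairs)
    if p_xy == 0.0:
        return float("-inf")
    p_x = chars.count(bigram[0]) / len(chars)
    p_y = chars.count(bigram[1]) / len(chars)
    return math.log(p_xy / (p_x * p_y))

# "ab" always occurs together, so it scores higher than the weaker "ba":
# a segmenter would cut at the low-PMI position, not inside "ab"
```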

Relevance:

30.00%

Publisher:

Abstract:

Purpose – In structural, earthquake and aeronautical engineering and in mechanical vibration, the solution of the dynamic equations for a structure subjected to dynamic loading leads to a high-order system of differential equations. Numerical methods are usually used for integration when either the data are discrete or no analytical solution to the equations exists. Since numerical methods with greater accuracy and stability give more accurate structural responses, there is a need to improve existing methods or develop new ones. The paper aims to discuss these issues. Design/methodology/approach – In this paper, a new time integration method is proposed mathematically and numerically, and is applied to single-degree-of-freedom (SDOF) and multi-degree-of-freedom (MDOF) systems. Finally, the results are compared to existing methods such as Newmark’s method and the closed-form solution. Findings – It is concluded that, in the proposed method, the data variance of each set of structural responses (displacement, velocity, or acceleration) across time steps is less than in Newmark’s method, and that the proposed method is more accurate and stable than Newmark’s method and is capable of analyzing the structure in fewer iterations or computation cycles, hence being less time-consuming. Originality/value – A new mathematical and numerical time integration method is proposed for the computation of structural responses with higher accuracy and stability, lower data variance, and fewer computational cycles.
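As a reference point for such comparisons, the Newmark baseline can be sketched for an unforced, undamped SDOF system m·a + c·v + k·u = 0 using the constant-average-acceleration variant (β = 1/4, γ = 1/2). The parameters below are illustrative, not taken from the paper; after integrating over exactly one natural period, the result can be checked against the closed-form solution u(t) = cos(2πt).

```python
import math

# Hedged sketch: constant-average-acceleration Newmark integration of an
# undamped SDOF oscillator. Parameters are illustrative.
m, c, k = 1.0, 0.0, 4.0 * math.pi ** 2   # natural period T = 1 s
beta, gamma = 0.25, 0.5                  # average-acceleration Newmark
dt, n_steps = 0.01, 100                  # integrate over exactly one period

u, v = 1.0, 0.0                          # released from rest at u = 1
a = (-c * v - k * u) / m                 # initial acceleration from equilibrium
for _ in range(n_steps):
    # effective stiffness and effective load (zero external force assumed)
    k_eff = k + gamma * c / (beta * dt) + m / (beta * dt ** 2)
    p_eff = (m * (u / (beta * dt ** 2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
             + c * (gamma * u / (beta * dt) + (gamma / beta - 1) * v
                    + dt * (gamma / (2 * beta) - 1) * a))
    u_new = p_eff / k_eff
    v_new = (gamma / (beta * dt)) * (u_new - u) + (1 - gamma / beta) * v \
            + dt * (1 - gamma / (2 * beta)) * a
    a_new = (u_new - u) / (beta * dt ** 2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
    u, v, a = u_new, v_new, a_new

# after one period the closed-form solution returns to u = 1, v = 0;
# the small residual reflects Newmark's period-elongation error
```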

Relevance:

30.00%

Publisher:

Abstract:

Law is narration: it is narrative, narrator and the narrated. As a narrative, the law is constituted by a constellation of texts – from official sources such as statutes, treaties and cases, to private arrangements such as commercial contracts, deeds and parenting plans. All are a collection of stories: cases are narrative contests of facts and rights; statutes are recitations of the substantive and procedural bases for social, economic and political interactions; private agreements are plots for future relationships, whether personal or professional. As a narrator, law speaks in the language of modern liberalism. It describes its world in abstractions rather than in concrete experience, universal principles rather than individual subjectivity. It casts people into ‘parties’ to legal relationships; structures human interactions into ‘issues’ or ‘problems’; and tells individual stories within larger narrative arcs such as ‘the rule of law’ and ‘the interests of justice’. As the narrated, the law is a character in its own story. The scholarship of law, for example, is a type of story-telling with law as its central character. For positivists, still the dominant group in the legal genre, law is a closed system of formal rules with an “immanent rationality” and its own “structure, substantive content, procedure and tradition,” dedicated to finality of judgment. For scholars inspired by the interpretative tradition in the humanities, law is a more ambivalent character, susceptible to influences from outside its realm and masking a hidden ideological agenda under its cloak of universality and neutrality. For social scientists, law is a protagonist on a wider social stage, impacting on society, the economy and the polity in often surprising ways.