933 results for Parametric Linear System
Abstract:
This paper proposes Poisson log-linear multilevel models to investigate population variability in sleep state transition rates. Specifically, we propose a Bayesian Poisson regression model that is more flexible, more scalable to larger studies, and more easily fit than previous attempts in the literature. We further use hierarchical random effects to account for pairings of individuals and repeated measures within those individuals, since comparing diseased to non-diseased subjects while minimizing bias is of epidemiologic importance. We estimate essentially non-parametric piecewise constant hazards and smooth them, and allow for time-varying covariates and segment-of-the-night comparisons. The Bayesian Poisson regression is justified through a re-derivation of a classical algebraic likelihood equivalence between Poisson regression with a log(time) offset and survival regression assuming piecewise constant hazards. This relationship allows us to synthesize two methods currently used to analyze sleep transition phenomena: stratified multi-state proportional hazards models and log-linear models with GEE for transition counts. An example data set from the Sleep Heart Health Study is analyzed.
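For readers who want the algebra behind the claimed equivalence, a brief sketch in our notation (not the paper's): with a piecewise constant hazard \(\lambda_{ij} = \exp(x_i^\top \beta_j)\) for subject \(i\) in interval \(j\), event indicator \(d_{ij}\), and exposure time \(t_{ij}\), the survival log-likelihood is

\[ \ell_{\mathrm{surv}} = \sum_{i,j} \left( d_{ij} \log \lambda_{ij} - \lambda_{ij} t_{ij} \right), \]

while treating \(d_{ij} \sim \mathrm{Poisson}(\mu_{ij})\) with \(\log \mu_{ij} = \log t_{ij} + x_i^\top \beta_j\) gives, up to additive constants,

\[ \ell_{\mathrm{Pois}} = \sum_{i,j} \left( d_{ij} \log \mu_{ij} - \mu_{ij} \right) = \ell_{\mathrm{surv}} + \sum_{i,j} d_{ij} \log t_{ij}, \]

which differs from \(\ell_{\mathrm{surv}}\) only by a term free of \(\beta\), so both models yield the same inference for \(\beta\).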
Abstract:
BACKGROUND: The Anesthetic Conserving Device (AnaConDa) uncouples delivery of a volatile anesthetic (VA) from fresh gas flow (FGF) using a continuous infusion of liquid volatile into a modified heat-moisture exchanger capable of adsorbing VA during expiration and releasing adsorbed VA during inspiration. It combines the simplicity and responsiveness of high FGF with low agent expenditures. We performed in vitro characterization of the device before developing a population pharmacokinetic model for sevoflurane administration with the AnaConDa, and retrospectively testing its performance (internal validation). MATERIALS AND METHODS: Eighteen females and 20 males, aged 31-87, BMI 20-38, were included. The end-tidal concentrations were varied and recorded together with the VA infusion rates into the device, ventilation and demographic data. The concentration-time course of sevoflurane was described using linear differential equations, and the most suitable structural model and typical parameter values were identified. The individual pharmacokinetic parameters were obtained and tested for covariate relationships. Prediction errors were calculated. RESULTS: In vitro studies assessed the contribution of the device to the pharmacokinetic model. In vivo, the sevoflurane concentration-time courses on the patient side of the AnaConDa were adequately described with a two-compartment model. The population median absolute prediction error was 27% (interquartile range 13-45%). CONCLUSION: The predictive performance of the two-compartment model was similar to that of models accepted for TCI administration of intravenous anesthetics, supporting open-loop administration of sevoflurane with the AnaConDa. Further studies will focus on prospective testing and external validation of the model implemented in a target-controlled infusion device.
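For orientation, a generic two-compartment model written in drug amounts, in our notation (the paper's exact parameterization, including the in vitro device contribution, may differ):

\[ \dot{A}_1(t) = u(t) - (k_{10} + k_{12})\,A_1(t) + k_{21}\,A_2(t), \qquad \dot{A}_2(t) = k_{12}\,A_1(t) - k_{21}\,A_2(t), \]

where \(u(t)\) is the sevoflurane infusion rate into the device, \(C(t) = A_1(t)/V_1\) is the concentration on the patient side, and the rate constants \(k_{10}, k_{12}, k_{21}\) and central volume \(V_1\) are the typical population parameters to estimate.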
Abstract:
When a single brushless DC motor is fed by an inverter with a sensorless algorithm embedded in the switching controller, the system exhibits a linear and stable output in terms of speed and torque. With two motors modulated by the same inverter, however, the system is unstable and rendered useless for steady applications unless provided with some resistive damping on the supply lines. This project discusses and analyzes the stability of such a system through simulations and hardware demonstrations, and also presents a method to derive the values of these damping resistances.
Abstract:
Neuromorphic computing has become an emerging field with a wide range of applications. Its challenge lies in developing a brain-inspired architecture that can emulate the human brain and work in real-time applications. In this report, a flexible neural architecture is presented which consists of a 128 x 128 SRAM crossbar memory and 128 spiking neurons. A digital integrate-and-fire model is used for the neurons. All components are designed in a 45 nm technology node. The core can be configured for certain neuron parameters, axon types, and synapse states, and is fully digitally implemented. Learning for this architecture is done offline. To train the circuit, the well-known Restricted Boltzmann Machine (RBM) algorithm is used, and linear classifiers are trained on the output of the RBM. Finally, the circuit was tested on a handwritten digit recognition application. Future prospects for this architecture are also discussed.
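The offline training pipeline described (RBM features feeding linear classifiers) can be sketched in software. The following is a generic scikit-learn illustration on the bundled digits dataset; dataset, sizes, and hyperparameters are our assumptions, not the report's training code:

# Sketch: offline RBM feature learning + linear classifier for digit recognition.
# Generic scikit-learn illustration; dataset and hyperparameters are assumptions.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel intensities to [0, 1] for the Bernoulli RBM
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = Pipeline([
    # 128 hidden units, loosely mirroring the 128 spiking neurons of the core
    ("rbm", BernoulliRBM(n_components=128, learning_rate=0.06, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),  # linear classifier on RBM features
])
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))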
Abstract:
The traditional linear economic model is characterized by high consumption, high emissions, and low efficiency. Economic development under this model comes largely at the expense of the environment and requires heavy investment of natural resources; it can deliver rapid economic growth, but resource depletion and environmental pollution become increasingly serious. In the 1990s a new economic model, the circular economy, began to enter our vision. The circular economy maximizes production and minimizes the impact of economic activities on the ecological environment by organizing activities into the closed-loop feedback cycle of "resources - production - renewable resources". The circular economy is a better way to resolve the contradictions between economic development and resource shortages, and developing it has become a major strategic initiative for achieving sustainable development in countries all over the world. Evaluating circular economy development is a necessary step for regional circular economy development: a quantitative evaluation can better monitor and reveal the contradictions and problems in the development of a recycling economy. This thesis will: 1) create an evaluation model framework that covers new types of industries, and 2) evaluate the current Shanghai circular economy to analyze Shanghai's situation in circular economy development. I will then propose suggestions about the structure and development of the Shanghai circular economy.
Abstract:
Wireless sensor networks are an emerging research topic due to their vast and ever-growing applications. They are made up of small nodes whose main goal is to monitor, compute, and transmit data. The nodes basically consist of low-powered microcontrollers, wireless transceiver chips, sensors to monitor their environment, and a power source. Applications of wireless sensor networks range from basic household uses, such as health monitoring, appliance control, and security, to military applications such as intruder detection. The widespread application of wireless sensor networks has brought to light many research issues, such as battery efficiency, routing protocols made unreliable by node failures, localization issues, and security vulnerabilities. This report describes the hardware development of a fault-tolerant routing protocol for a railroad pedestrian warning system. The protocol implemented is a peer-to-peer, multi-hop, TDMA-based protocol for nodes arranged in a linear zigzag chain. Its basic operation was derived from the Wireless Architecture for Hard Real-Time Embedded Networks (WAHREN).
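As a toy illustration of TDMA scheduling on a linear chain (not the WAHREN-derived protocol itself), each node can be given a fixed transmit slot so that a message hops toward the gateway one slot at a time; the node count, frame length, and slot rule here are hypothetical:

# Toy TDMA slot schedule for a linear (zigzag) chain of sensor nodes.
# Hypothetical sketch: real protocols add guard times, retries, and fault handling.
NUM_NODES = 8          # nodes indexed 0 (farthest) .. 7 (gateway-adjacent); assumed
FRAME_SLOTS = NUM_NODES

def transmit_slot(node_id: int) -> int:
    """Node i transmits in slot i, so a packet from node 0 can be
    relayed by node 1 in the next slot, node 2 after that, etc."""
    return node_id % FRAME_SLOTS

def relay_path(src: int) -> list[tuple[int, int]]:
    """(node, slot) pairs as a packet from `src` hops toward the gateway."""
    return [(n, transmit_slot(n)) for n in range(src, NUM_NODES)]

if __name__ == "__main__":
    for node, slot in relay_path(0):
        print(f"node {node} forwards in slot {slot}")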
Abstract:
This paper treats the problem of setting the inventory level and optimizing the buffer allocation of closed-loop flow lines operating under the constant-work-in-process (CONWIP) protocol. We solve a very large but simple linear program that models an entire simulation run of a closed-loop flow line in discrete time to determine a production rate estimate of the system. This approach, introduced in Helber, Schimmelpfeng, Stolletz, and Lagershausen (2011) for open flow lines with limited buffer capacities, is extended to closed-loop CONWIP flow lines. Via this method, both the CONWIP level and the buffer allocation can be optimized simultaneously. The first part of a numerical study deals with the accuracy of the method. In the second part, we focus on the relationship between the CONWIP inventory level and the short-term profit. The accuracy of the method turns out to be best for those configurations that maximize production rate and/or short-term profit.
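To make the "linear program over an entire simulation run" idea concrete, here is a minimal sample-path LP in the spirit of, but not identical to, the Helber et al. formulation, written with PuLP; the station count, sampled capacities, buffer sizes, and the CONWIP coupling constraint are illustrative assumptions:

# Minimal sample-path LP sketch for a CONWIP loop (illustrative, not the paper's model).
# Y[m][t] = cumulative output of station m by end of period t (LP relaxation).
import random
import pulp

M, T, W = 3, 50, 4                      # stations, periods, CONWIP level (assumed)
random.seed(1)
cap = [[random.choice([0, 1, 1, 2]) for _ in range(T)] for _ in range(M)]  # sampled capacities
buf = [2] * M                           # buffer in front of each station (assumed)

prob = pulp.LpProblem("conwip_sample_path", pulp.LpMaximize)
Y = [[pulp.LpVariable(f"Y_{m}_{t}", lowBound=0) for t in range(T)] for m in range(M)]
prob += Y[M - 1][T - 1]                 # maximize total output -> production rate estimate

for m in range(M):
    for t in range(T):
        prev = Y[m][t - 1] if t > 0 else 0
        prob += Y[m][t] >= prev                      # cumulative output is nondecreasing
        prob += Y[m][t] - prev <= cap[m][t]          # sampled capacity in period t
        if m > 0:
            prob += Y[m][t] <= Y[m - 1][t]           # can't exceed upstream supply
            prob += Y[m - 1][t] - Y[m][t] <= buf[m]  # finite buffer ahead of station m
        else:
            prob += Y[0][t] <= Y[M - 1][t] + W       # CONWIP: W cards circulate

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("output over horizon:", pulp.value(prob.objective))

Sweeping W (and the buf vector) over candidate values and re-solving is the basic mechanism by which the CONWIP level and buffer allocation can be optimized jointly.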
Abstract:
The master production schedule (MPS) plays an important role in an integrated production planning system: it converts the strategic planning defined in a production plan into tactical operation execution. The MPS is also a tool for top management to exercise control over manufacturing resources, and it becomes input to downstream planning levels such as material requirements planning (MRP) and capacity requirements planning (CRP). Hence, an inappropriate decision in MPS development may lead to infeasible execution, which ultimately causes poor delivery performance. One must ensure that a proposed MPS is valid and realistic for implementation before it is released to the real manufacturing system. In practice, where the production environment is stochastic in nature, developing an MPS is no longer a simple task. Varying processing times and random events such as machine failures are just some of the underlying causes of uncertainty that can hardly be addressed at the planning stage, so that a valid and realistic MPS is difficult to realize. The MPS creation problem becomes even more sophisticated when decision makers consider multiple objectives: minimizing inventory, maximizing customer satisfaction, and maximizing resource utilization. This study proposes a methodology for MPS creation that is able to deal with these obstacles. The approach takes uncertainty into account while making trade-offs among the conflicting objectives, incorporating fuzzy multi-objective linear programming (FMOLP) and discrete event simulation (DES) for MPS development.
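A common way to operationalize FMOLP is Zimmermann's max-min approach: each fuzzy objective becomes a linear membership function between a "worst" and "best" level, and the LP maximizes the smallest membership lambda. A toy two-objective sketch with PuLP follows; the objectives, bounds, and capacity constraint are invented for illustration and are not the study's model:

# Toy Zimmermann max-min FMOLP sketch (illustrative objectives and bounds).
import pulp

prob = pulp.LpProblem("fmolp_mps_toy", pulp.LpMaximize)
x1 = pulp.LpVariable("units_product_1", lowBound=0)   # MPS quantities (hypothetical)
x2 = pulp.LpVariable("units_product_2", lowBound=0)
lam = pulp.LpVariable("lam", lowBound=0, upBound=1)
prob += lam                                            # maximize worst-case satisfaction

prob += 2 * x1 + 3 * x2 <= 120                         # capacity (assumed)

# Objective 1: service level z1 = x1 + x2, fuzzy goal between worst=20 and best=50.
z1 = x1 + x2
prob += z1 - 20 >= lam * (50 - 20)                     # membership(z1) >= lam

# Objective 2: inventory cost z2 = 4*x1 + x2, to minimize; worst=160, best=40.
z2 = 4 * x1 + x2
prob += 160 - z2 >= lam * (160 - 40)                   # membership(z2) >= lam

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("lam:", lam.value(), "x1:", x1.value(), "x2:", x2.value())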
Abstract:
We have developed an assay for single strand DNA or RNA detection which is based on the homo-DNA templated Staudinger reduction of the profluorophore rhodamine-azide. The assay is based on a three component system, consisting of a homo-DNA/DNA hybrid probe, a set of homo-DNA reporter strands, and the target DNA or RNA. We present two different formats of the assay (Omega probe and linear probe), of which the linear probe was found to perform best, with catalytic turnover of the reporter strands (TON: 8) and a match/mismatch discrimination of up to 19. The advantage of this system is that the reporting (homo-DNA) and sensing (DNA) domains are decoupled from each other, since the two pairing systems are bioorthogonal. This allows independent optimization of either domain, which may lead to higher selectivity in in vivo imaging.
Abstract:
OBJECTIVES: We sought to analyze the time course of atrial fibrillation (AF) episodes before and after circular plus linear left atrial ablation, and the percentage of patients with complete freedom from AF after ablation, using serial seven-day electrocardiograms (ECGs). BACKGROUND: Curative treatment of AF targets the pathophysiological cornerstones of AF (i.e., the initiating triggers and/or the perpetuation of AF). The pathophysiological complexity of both may not produce an "all-or-nothing" response but may instead modify the number and duration of AF episodes. METHODS: In patients with highly symptomatic AF, circular plus linear ablation lesions were placed around the left and right pulmonary veins, between the two circles, and from the left circle to the mitral annulus using an electroanatomic mapping system. Serial continuous 7-day ECGs recorded before and after catheter ablation were used for rhythm follow-up. RESULTS: In 100 patients with paroxysmal (n = 80) and persistent (n = 20) AF, the relative duration of time spent in AF decreased significantly over time (35 +/- 37% before ablation, 26 +/- 41% directly after ablation, and 10 +/- 22% after 12 months). Freedom from AF increased stepwise in patients with paroxysmal AF and measured 88% or 74% after 12 months, depending on whether the 24-h ECG or the 7-day ECG was used. Complete pulmonary vein isolation was demonstrated in <20% of the circular lesions. CONCLUSIONS: The results obtained in patients with AF treated with circular plus linear left atrial lesions strongly indicate that substrate modification is the main underlying pathophysiologic mechanism and that it results in a delayed rather than an immediate cure.
Abstract:
Localized short-echo-time ¹H-MR spectra of human brain contain contributions of many low-molecular-weight metabolites and baseline contributions of macromolecules. Two approaches to model such spectra are compared, and the data acquisition sequence, optimized for reproducibility, is presented. Modeling relies on prior knowledge constraints and linear combination of metabolite spectra. We investigated what can be gained by basis parameterization, i.e., description of basis spectra as sums of parametric lineshapes. The effects of basis composition and of adding experimentally measured macromolecular baselines were also investigated. Both fitting methods yielded quantitatively similar values, model deviations, error estimates, and reproducibility in the evaluation of 64 spectra of human gray and white matter from 40 subjects. Major advantages of parameterized basis functions are the possibilities to evaluate fitting parameters separately, to treat subgroup spectra as independent moieties, and to incorporate deviations from straightforward metabolite models. It was found that most of the 22 basis metabolites used may provide meaningful data when comparing patient cohorts. In individual spectra, sums of closely related metabolites are often more meaningful. Inclusion of a macromolecular basis component leads to relatively small but significantly different tissue content estimates for most metabolites, and provides a means to quantitate baseline contributions that may contain crucial clinical information.
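The linear-combination model underlying both fitting approaches can be written compactly in our notation: the measured spectrum is a weighted sum of metabolite basis spectra plus a baseline,

\[ S(\nu) \approx \sum_{m=1}^{22} c_m \, B_m(\nu) + \beta(\nu), \]

where \(c_m \ge 0\) are the metabolite concentrations to estimate, \(B_m(\nu)\) are the basis spectra (measured, or parameterized as sums of parametric lineshapes), and \(\beta(\nu)\) collects the macromolecular baseline; the two compared methods differ mainly in how \(B_m\) and \(\beta\) are represented.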
Abstract:
Both historical and idealized climate model experiments are performed with a variety of Earth system models of intermediate complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20th century trends in surface air temperature and carbon uptake are reasonably well simulated when compared to observed trends. Land carbon fluxes show much more variation between models than ocean carbon fluxes, and recent land fluxes appear to be slightly underestimated. It is possible that recent modelled climate trends or climate–carbon feedbacks are overestimated, resulting in too much land carbon loss, or that carbon uptake due to CO2 and/or nitrogen fertilization is underestimated. Several one-thousand-year-long, idealized 2× and 4× CO2 experiments are used to quantify standard model characteristics, including transient and equilibrium climate sensitivities, and climate–carbon feedbacks. The values from EMICs generally fall within the range given by general circulation models. Seven additional historical simulations, each including a single specified forcing, are used to assess the contributions of different climate forcings to the overall climate and carbon cycle response. The response of surface air temperature is the linear sum of the individual forcings, while the carbon cycle response shows a non-linear interaction between land-use change and CO2 forcings for some models. Finally, the preindustrial portions of the last millennium simulations are used to assess historical model carbon-climate feedbacks. Given the specified forcing, there is a tendency for the EMICs to underestimate the drop in surface air temperature and CO2 between the Medieval Climate Anomaly and the Little Ice Age estimated from palaeoclimate reconstructions. This in turn could be a result of unforced variability within the climate system, uncertainty in the reconstructions of temperature and CO2, errors in the reconstructions of forcing used to drive the models, or the incomplete representation of certain processes within the models. Given the forcing datasets used in this study, the models calculate significant land-use emissions over the pre-industrial period. This implies that land-use emissions might need to be taken into account when making estimates of climate–carbon feedbacks from palaeoclimate reconstructions.
Abstract:
A patient classification system was developed integrating a patient acuity instrument with a computerized nursing distribution method based on a linear programming model. The system was designed for real-time measurement of patient acuity (workload) and allocation of nursing personnel to optimize the utilization of resources. The acuity instrument was a prototype tool with eight categories of patients defined by patient severity and nursing intensity parameters. From this tool, the demand for nursing care was defined in patient points, with one point equal to one hour of RN time. Validity and reliability of the instrument were determined as follows: (1) content validity by a panel of expert nurses; (2) predictive validity through a paired t-test analysis of preshift and postshift categorization of patients; (3) initial reliability by a one-month pilot of the instrument in a practice setting; and (4) interrater reliability by the Kappa statistic. The nursing distribution system was a linear programming model using a branch-and-bound technique for obtaining integer solutions. The objective function was to minimize the total number of nursing personnel used by optimally assigning staff to meet the acuity needs of the units. A penalty weight was used as a coefficient of the objective function variables to define priorities for allocation of staff. The demand constraints were requirements to meet the total acuity points needed for each unit and to have a minimum number of RNs on each unit. Supply constraints were: (1) the total availability of each type of staff and the value of that staff member, determined relative to that type of staff's ability to perform the job function of an RN (e.g., value for eight hours: RN = 8 points, LVN = 6 points); (2) the number of personnel available for floating between units. The capability of the model to assign staff quantitatively and qualitatively equal to the manual method was established by a thirty-day comparison. Sensitivity testing demonstrated appropriate adjustment of the optimal solution to changes in penalty coefficients in the objective function and to acuity totals in the demand constraints. Further investigation of the model documented correct adjustment of assignments in response to staff value changes, and cost minimization by the addition of a dollar coefficient to the objective function.
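A minimal sketch of the described allocation model in PuLP follows; the staff types, penalty weights, availabilities, and unit demands are illustrative numbers, not the dissertation's actual coefficients:

# Sketch of the nursing-distribution integer LP (illustrative numbers).
import pulp

staff_types = {"RN": {"value": 8, "penalty": 1, "avail": 12},
               "LVN": {"value": 6, "penalty": 2, "avail": 8}}    # assumed
units = {"ICU": {"points": 40, "min_rn": 3},
         "Med": {"points": 30, "min_rn": 2}}                     # assumed

prob = pulp.LpProblem("nursing_distribution", pulp.LpMinimize)
n = {(s, u): pulp.LpVariable(f"n_{s}_{u}", lowBound=0, cat="Integer")
     for s in staff_types for u in units}

# Objective: minimize penalty-weighted staff usage (priorities via penalty weights).
prob += pulp.lpSum(staff_types[s]["penalty"] * n[s, u] for s in staff_types for u in units)

for u, d in units.items():
    # Demand: acuity points covered by the value of assigned staff; minimum RN coverage.
    prob += pulp.lpSum(staff_types[s]["value"] * n[s, u] for s in staff_types) >= d["points"]
    prob += n["RN", u] >= d["min_rn"]

for s, info in staff_types.items():
    prob += pulp.lpSum(n[s, u] for u in units) <= info["avail"]  # supply limit

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for (s, u), var in n.items():
    print(s, "->", u, "=", int(var.value()))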
Abstract:
Objective. Essential hypertension affects 25% of the US adult population and is a leading contributor to morbidity and mortality. Because BP is a multifactorial phenotype that resists simple genetic analysis, intermediate phenotypes within the complex network of BP regulatory systems may be more accessible to genetic dissection. The renin-angiotensin system (RAS) is known to influence intermediate and long-term blood pressure regulation through alterations in vascular tone and renal sodium and fluid resorption. This dissertation examines associations between renin (REN), angiotensinogen (AGT), angiotensin-converting enzyme (ACE), and angiotensin II type 1 receptor (AT1) gene variation and interindividual differences in plasma hormone levels, renal hemodynamics, and BP homeostasis. Methods. A total of 150 unrelated men and 150 unrelated women, between 20.0 and 49.9 years of age and free of acute or chronic illness except for a history of hypertension (11 men and 7 women, all off medications), were studied after one week on a controlled sodium diet. RAS plasma hormone levels, renal hemodynamics, and BP were determined prior to and during angiotensin II (Ang II) infusion. Individuals were genotyped by PCR for a variable number tandem repeat (VNTR) polymorphism in REN and for the following restriction fragment length polymorphisms (RFLPs): AGT M235T, ACE I/D, and AT1 A1166C. Associations between clinical measurements and allelic variation were examined using multiple linear regression models. Results. Women homozygous for the AT1 1166C allele demonstrated higher intracellular levels of sodium (p = 0.044). Men homozygous for the AGT T235 allele demonstrated a blunted decrement in renal plasma flow in response to Ang II infusion (p = 0.0002). There were no significant associations between RAS gene variation and interindividual variation in RAS plasma hormone levels or BP. Conclusions. Rather than identifying new BP-controlling genes or alleles, the study paradigm employed in this thesis (i.e., measured genes, controlled environments and interventions) may provide mechanistic insight into how candidate genes affect BP homeostasis.
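The analysis strategy (multiple linear regression of a phenotype on genotype with covariate adjustment) can be sketched generically with statsmodels; the variable names, simulated data, and additive genotype coding below are our assumptions, not the dissertation's code:

# Generic genotype-phenotype association sketch (illustrative data and coding).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "renal_plasma_flow": rng.normal(600, 80, n),  # phenotype (assumed units)
    "agt_t235_copies": rng.integers(0, 3, n),     # additive coding: 0/1/2 T alleles
    "age": rng.uniform(20, 50, n),
    "sex": rng.integers(0, 2, n),
})

# Phenotype ~ genotype + covariates, as in a measured-gene association analysis.
fit = smf.ols("renal_plasma_flow ~ agt_t235_copies + age + C(sex)", data=df).fit()
print(fit.summary().tables[1])  # coefficient, SE, and p-value for each term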
Abstract:
We address under what conditions a magma generated by partial melting at 100 km depth in the mantle wedge above a subduction zone can reach the crust in dikes before stalling. We also address under what conditions primitive basaltic magma (Mg# > 60) can be delivered from this depth to the crust. We employ linear elastic fracture mechanics with magma solidification theory and perform a parametric sensitivity analysis. All dikes are initiated at a depth of 100 km in the thermal core of the wedge, and the Moho is fixed at 35 km depth. We consider a range of melt solidus temperatures (800-1100 degrees C), viscosities (10-100 Pa s), and densities (2400-2700 kg m^(-3)). We also consider a range of host rock fracture toughness values (50-300 MPa m^(1/2)) and dike lengths (2-5 km) and two thermal structures for the mantle wedge (1260 and 1400 degrees C at 100 km depth and 760 and 900 degrees C at 35 km depth). For the given parameter space, many dikes can reach the Moho in less than a few hundred hours, well within the time constraints provided by U-series isotope disequilibria studies. Increasing the temperature in the mantle wedge, or increasing the dike length, allows additional dikes to propagate to the Moho. We conclude that some dikes with vertical lengths near their critical lengths and relatively high solidus temperatures will stall in the mantle before reaching the Moho, and these may be returned by corner flow to depths where they can melt under hydrous conditions. Thus, a chemical signature in arc lavas suggesting partial melting of slab basalts may be partly influenced by these recycled dikes. Alternatively, dikes with lengths well above their critical lengths can easily deliver primitive magmas to the crust, particularly if the mantle wedge is relatively hot. Dike transport remains a viable primary mechanism of magma ascent in convergent tectonic settings, but the potential for less rapid mechanisms making an important contribution increases as the mantle temperature at the Moho approaches the solidus temperature of the magma.
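The notion of a critical length comes from balancing buoyancy-driven stress intensity against fracture toughness. In standard dike mechanics (a Weertman-type scaling in our notation, not the paper's exact formulation), the tip stress intensity of a buoyant fluid-filled crack of length \(L\) scales as

\[ K \sim \Delta\rho \, g \, L^{3/2}, \qquad\text{so}\qquad L_c \sim \left( \frac{K_c}{\Delta\rho \, g} \right)^{2/3}, \]

where \(\Delta\rho\) is the host-rock/magma density contrast, \(g\) is gravity, and \(K_c\) is the host rock fracture toughness; dikes with \(L \gg L_c\) propagate readily, while those near \(L_c\) are prone to stalling and solidification.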