967 results for Optimal frame-level timing estimator
Abstract:
The objective of this study was to determine the optimal time interval for a repeated Chlamydia trachomatis (chlamydia) test.
Abstract:
Decompressive craniectomy (DC) due to intractably elevated intracranial pressure mandates later cranioplasty (CP). However, the optimal timing of CP remains controversial. We therefore analyzed our prospectively conducted database concerning the timing of CP and associated post-operative complications. From October 1999 to August 2011, 280 cranioplasty procedures were performed at the authors' institution. Patients were stratified into two groups according to the time from DC to cranioplasty (early, ≤2 months, and late, >2 months). Patient characteristics, timing of CP, and CP-related complications were analyzed. Overall CP was performed early in 19% and late in 81%. The overall complication rate was 16.4%. Complications after CP included epidural or subdural hematoma (6%), wound healing disturbance (5.7%), abscess (1.4%), hygroma (1.1%), cerebrospinal fluid fistula (1.1%), and other (1.1%). Patients who underwent early CP suffered significantly more often from complications compared to patients who underwent late CP (25.9% versus 14.2%; p=0.04). Patients with ventriculoperitoneal (VP) shunt had a significantly higher rate of complications after CP compared to patients without VP shunt (p=0.007). On multivariate analysis, early CP, the presence of a VP shunt, and intracerebral hemorrhage as underlying pathology for DC, were significant predictors of post-operative complications after CP. We provide detailed data on surgical timing and complications for cranioplasty after DC. The present data suggest that patients who undergo late CP might benefit from a lower complication rate. This might influence future surgical decision making regarding optimal timing of cranioplasty.
Abstract:
This paper presents a fully Bayesian approach that simultaneously combines basic event and statistically independent higher event-level failure data in fault tree quantification. Such higher-level data could correspond to train, sub-system or system failure events. The full Bayesian approach also allows the highest-level data that are usually available for existing facilities to be automatically propagated to lower levels. A simple example illustrates the proposed approach. The optimal allocation of resources for collecting additional data from a choice of different level events is also presented. The optimization is achieved using a genetic algorithm.
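As a simplified illustration of propagating basic-event uncertainty to the top event (not the paper's full Bayesian combination of multi-level data), the sketch below Monte Carlo samples hypothetical Beta posteriors for two basic events feeding an OR gate:

```python
import random

def beta_sample(a, b, rng):
    # Draw from Beta(a, b) via the ratio of two gamma variates.
    x = rng.gammavariate(a, 1.0)
    y = rng.gammavariate(b, 1.0)
    return x / (x + y)

def top_event_probability(n_samples=10000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        p1 = beta_sample(2, 50, rng)   # basic event 1 posterior (assumed)
        p2 = beta_sample(3, 40, rng)   # basic event 2 posterior (assumed)
        total += 1.0 - (1.0 - p1) * (1.0 - p2)  # OR gate, independent events
    return total / n_samples

print(round(top_event_probability(), 3))
```

The same sampling loop generalizes to AND gates (multiply the probabilities) and to deeper trees, which is the mechanism by which higher-level data can constrain lower-level posteriors.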
Abstract:
BACKGROUND: The deletion of three adjacent nucleotides in an exon may cause the lack of a single amino acid, while the protein sequence remains otherwise unchanged. Only one such in-frame deletion is known in the two RH genes, represented by the RHCE allele ceBP expressing a "very weak e antigen." STUDY DESIGN AND METHODS: Blood donor samples were recognized because of discrepant results of D phenotyping. Six samples came from Switzerland and one from Northern Germany. The molecular structures were determined by genomic DNA nucleotide sequencing of RHD. RESULTS: Two different variant D antigens were explained by RHD alleles harboring one in-frame triplet deletion each. Both single-amino-acid deletions led to partial D phenotypes with weak D antigen expression. Because of their D category V-like phenotypes, the RHD(Arg229del) allele was dubbed DVL-1 and the RHD(Lys235del) allele DVL-2. These in-frame triplet deletions are located in GAGAA or GAAGA repeats of the RHD exon 5. CONCLUSION: Partial D may be caused by a single-amino-acid deletion in RhD. The altered RhD protein segments in DVL types are adjacent to the extracellular loop 4, which constitutes one of the most immunogenic parts of the D antigen. These RhD protein segments are also altered in all DV, which may explain the similarity in phenotype. At the nucleotide level, the triplet deletions may have resulted from replication slippage. A total of nine amino acid positions in an Rhesus protein may be affected by this mechanism.
Abstract:
Early prenatal diagnosis and in utero therapy of certain fetal diseases have the potential to reduce fetal morbidity and mortality. The intrauterine transplantation of stem cells provides in some instances a therapeutic option before definitive organ failure occurs. Clinical experiences show that certain diseases, such as immune deficiencies or inborn errors of metabolism, can be successfully treated using stem cells derived from bone marrow. However, a remaining problem is the low level of engraftment that can be achieved. Efforts are made in animal models to optimise the graft and study the recipient's microenvironment to increase long-term engraftment levels. Our experiments in mice show similar early homing of allogeneic and xenogeneic stem cells and reasonable early engraftment of allogeneic murine fetal liver cells (17.1% donor cells in peripheral blood 4 weeks after transplantation), whereas xenogeneic HSC are rapidly diminished due to missing self-renewal and low differentiation capacities in the host's microenvironment. Allogeneic murine fetal liver cells have very good long-term engraftment (49.9% donor cells in peripheral blood 16 weeks after transplantation). Compared to the rodents, the sheep model has the advantage of body size and gestation comparable to the human fetus. Here, ultrasound-guided injection techniques significantly decreased fetal loss rates. In contrast to the murine in utero model, the repopulation capacities of allogeneic ovine fetal liver cells are lower (0.112% donor cells in peripheral blood 3 weeks after transplantation). The effect of MHC on engraftment levels seems to be marginal, since no differences could be observed between autologous and allogeneic transplantation (0.117% donor cells vs 0.112% donor cells in peripheral blood 1 to 2 weeks after transplantation). Further research is needed to study optimal timing and graft composition as well as immunological aspects of in utero transplantation.
Abstract:
Under a two-level hierarchical model, suppose that the distribution of the random parameter is known or can be estimated well. Data are generated via a fixed, but unobservable realization of this parameter. In this paper, we derive the smallest confidence region of the random parameter under a joint Bayesian/frequentist paradigm. On average this optimal region can be much smaller than the corresponding Bayesian highest posterior density region. The new estimation procedure is appealing when one deals with data generated under a highly parallel structure, for example, data from a trial with a large number of clinical centers involved or genome-wide gene-expression data for estimating individual gene- or center-specific parameters simultaneously. The new proposal is illustrated with a typical microarray data set and its performance is examined via a small simulation study.
Abstract:
Adaptation does not necessarily lead to traits which are optimal for the population. This is because selection is often the strongest at the individual or gene level. The evolution of selfishness can lead to a 'tragedy of the commons', where traits such as aggression or social cheating reduce population size and may lead to extinction. This suggests that species-level selection will result whenever species differ in the incentive to be selfish. We explore this idea in a simple model that combines individual-level selection with ecology in two interacting species. Our model is not influenced by kin or trait-group selection. We find that individual selection in combination with competitive exclusion greatly increases the likelihood that selfish species go extinct. A simple example of this would be a vertebrate species that invests heavily into squabbles over breeding sites, which is then excluded by a species that invests more into direct reproduction. A multispecies simulation shows that these extinctions result in communities containing species that are much less selfish. Our results suggest that species-level selection and community dynamics play an important role in regulating the intensity of conflicts in natural populations.
Abstract:
OBJECTIVE: To obtain precise information on the optimal time window for surgical antimicrobial prophylaxis. SUMMARY BACKGROUND DATA: Although perioperative antimicrobial prophylaxis is a well-established strategy for reducing the risk of surgical site infections (SSI), the optimal timing for this procedure has yet to be precisely determined. Under today's recommendations, antibiotics may be administered within the final 2 hours before skin incision, ideally as close to incision time as possible. METHODS: In this prospective observational cohort study at Basel University Hospital we analyzed the incidence of SSI by the timing of antimicrobial prophylaxis in a consecutive series of 3836 surgical procedures. Surgical wounds and resulting infections were assessed according to Centers for Disease Control and Prevention standards. Antimicrobial prophylaxis consisted of a single-shot administration of 1.5 g of cefuroxime (plus 500 mg of metronidazole in colorectal surgery). RESULTS: The overall SSI rate was 4.7% (180 of 3836). In 49% of all procedures, antimicrobial prophylaxis was administered within the final half hour. Multivariable logistic regression analyses showed a significant increase in the odds of SSI when antimicrobial prophylaxis was administered less than 30 minutes before incision (crude odds ratio = 2.01; adjusted odds ratio = 1.95; 95% confidence interval, 1.4-2.8; P < 0.001) or 120 to 60 minutes before incision (crude odds ratio = 1.75; adjusted odds ratio = 1.74; 95% confidence interval, 1.0-2.9; P = 0.035), as compared with the reference interval of 59 to 30 minutes before incision. CONCLUSIONS: When cefuroxime is used as a prophylactic antibiotic, administration 59 to 30 minutes before incision is more effective than administration during the last half hour.
Abstract:
This dissertation presents competitive control methodologies for small-scale power systems (SSPS). A SSPS is a collection of sources and loads sharing a common network that can be isolated during terrestrial disturbances. Micro-grids, naval ship electric power systems (NSEPS), aircraft power systems, and telecommunication power systems are typical examples of SSPS. The analysis and development of control systems for SSPS is complicated by the lack of a defined slack bus; in addition, a change in a load or source will influence the system's real-time parameters. The control system should therefore provide the flexibility required to ensure operation as a single aggregated system. In most SSPS, the sources and loads must be equipped with power electronic interfaces, which can be modeled as dynamic controllable quantities. The mathematical formulation of the micro-grid is carried out with the help of game theory, optimal control, and the fundamental theory of electrical power systems. The micro-grid can then be viewed as a dynamical multi-objective optimization problem with nonlinear objectives and variables. Detailed analysis was performed of optimal solutions with regard to startup transient modeling, bus selection modeling, and the level of communication within the micro-grids. In each approach, a detailed mathematical model is formed to observe the system response. A differential game theoretic approach was also used for modeling and optimization of startup transients. The startup transient controller was implemented with open-loop, PI, and feedback control methodologies, and a hardware implementation was then carried out to validate the theoretical results. The proposed game theoretic controller shows higher performance than the traditional PI controller during startup. In addition, the optimal transient surface is necessary when implementing the feedback controller for the startup transient.
Further, the experimental results are in agreement with the theoretical simulation. Bus selection and team communication were modeled with discrete and continuous game theory models. Although the players have multiple choices, the controller is capable of choosing the optimal bus, and the team communication structures optimize the players' Nash equilibrium point. All mathematical models are based on the local information of the load or source; as a result, these models are the keys to developing accurate distributed controllers.
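To illustrate the kind of game-theoretic computation involved, the following sketch runs best-response iteration to a Nash equilibrium for a two-player quadratic game; the cost functions are invented for illustration and are not the dissertation's models.

```python
# Hypothetical game: player i minimizes J_i(u_i, u_j) = u_i^2 - u_i*u_j + u_i.
# Setting dJ_i/du_i = 2*u_i - u_j + 1 = 0 gives the best response
# u_i = (u_j - 1) / 2; alternating best responses converge to the
# Nash equilibrium (a contraction with factor 1/2 per update).

def best_response_iteration(steps=50):
    u1, u2 = 0.0, 0.0
    for _ in range(steps):
        u1 = (u2 - 1.0) / 2.0   # player 1 best-responds to u2
        u2 = (u1 - 1.0) / 2.0   # player 2 best-responds to u1
    return u1, u2

u1, u2 = best_response_iteration()
# Fixed point of u = (u - 1)/2 is u = -1 for both players.
print(round(u1, 6), round(u2, 6))
```

In the dissertation's setting the decision variables would be source/load setpoints computed from local information, but the fixed-point structure of the equilibrium computation is the same.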
Abstract:
The objective of this report is to study the distributed (decentralized) three-phase optimal power flow (OPF) problem in unbalanced power distribution networks. A full three-phase representation of the distribution networks is considered to account for their highly unbalanced state. All of the distribution network's series/shunt components and load type combinations were modeled in the commercial version of the General Algebraic Modeling System (GAMS), a high-level modeling system for mathematical programming and optimization. The OPF problem was successfully implemented and solved in both a centralized and a distributed approach, where the objective is to minimize the active power losses in the entire system. The study was implemented on the IEEE-37 Node Test Feeder. A detailed discussion of all aspects of the problem, starting from the basics, is provided in this study, and full simulation results are given at the end of the report.
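The loss-minimization objective can be illustrated on a toy problem (not the report's GAMS model): split a fixed per-unit load between two sources over lines of different resistance so that resistive losses are minimized. The resistances and load values below are hypothetical.

```python
# Toy loss minimization: source 1 supplies p1, source 2 supplies 1 - p1
# (power balance for a 1.0 p.u. load); losses are approximated as I^2*R
# with per-unit current I = P / V on each line.

def total_losses(p1, r1=0.02, r2=0.05, v=1.0):
    p2 = 1.0 - p1
    return r1 * (p1 / v) ** 2 + r2 * (p2 / v) ** 2

# Grid search over the split; the analytic optimum is
# p1 = r2 / (r1 + r2) = 0.05 / 0.07 ~= 0.714.
loss, p1 = min((total_losses(p / 1000.0), p / 1000.0) for p in range(1001))
print(round(p1, 3), round(loss, 5))
```

A real OPF replaces this scalar search with a constrained nonlinear program over all bus voltages and branch flows, which is what the GAMS formulation solves.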
DESIGN AND IMPLEMENT DYNAMIC PROGRAMMING BASED DISCRETE POWER LEVEL SMART HOME SCHEDULING USING FPGA
Abstract:
With the development and capabilities of the Smart Home system, people today are entering an era in which household appliances are no longer just controlled by people but also operated by a Smart System. This results in a more efficient, convenient, comfortable, and environmentally friendly living environment. A critical part of the Smart Home system is Home Automation, which means that a Micro-Controller Unit (MCU) controls all the household appliances and schedules their operating times. This reduces electricity bills by shifting power consumption from on-peak hours to off-peak hours according to the varying hourly price. In this paper, we propose an algorithm for scheduling multi-user power consumption and implement it on an FPGA board, using the FPGA as the MCU. This algorithm for scheduling discrete power level tasks is based on dynamic programming and can find a scheduling solution close to the optimal one. We chose an FPGA as our system's controller because it has low complexity, parallel processing capability, and a large number of I/O interfaces for further development, and is programmable in both software and hardware. In conclusion, the algorithm runs quickly on the FPGA board, and the solution obtained is good enough for consumers.
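The core scheduling idea can be sketched as a small dynamic program: a task draws one of several discrete power levels in each time slot, and the recursion minimizes total cost under time-of-use prices. The price vector, power levels, and energy demand below are hypothetical, and the sketch ignores the multi-user and FPGA aspects.

```python
# Dynamic program over (time slot, accumulated energy) for one task with
# discrete power levels. dp[e] = minimum cost so far to have consumed e units.

def min_cost_schedule(prices, levels, energy_needed):
    """prices[t] = hour price of slot t; levels = allowed power levels per slot."""
    INF = float("inf")
    dp = [INF] * (energy_needed + 1)
    dp[0] = 0.0
    for price in prices:
        new = [INF] * (energy_needed + 1)
        for e in range(energy_needed + 1):
            if dp[e] == INF:
                continue
            for p in levels:                    # choose a power level this slot
                if e + p > energy_needed:
                    continue
                cost = dp[e] + price * p
                if cost < new[e + p]:
                    new[e + p] = cost
        dp = new
    return dp[energy_needed]

# Four slots with hour prices 5, 1, 3, 2; the task needs 4 energy units.
# The DP shifts consumption into the two cheapest slots.
print(min_cost_schedule([5, 1, 3, 2], [0, 1, 2], 4))
```

The state space is (slots × energy units × levels), which is exactly the kind of small, regular table that maps well onto FPGA logic.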
Abstract:
Experimental work and analysis were done to investigate engine startup robustness and emissions of a flex-fuel spark ignition (SI) direct injection (DI) engine. The vaporization and other characteristics of ethanol fuel blends present a challenge at engine startup. Strategies were investigated to reduce the enrichment requirements for the first engine startup cycle and the emissions for the second and third fired cycles at 25°C ± 1°C engine and intake air temperature. Research was conducted on a single-cylinder SIDI engine with gasoline and E85 fuels to study the effect on the first fired cycle of engine startup. Piston configurations that included a compression ratio change (11 vs 15.5) and a piston geometry change (flat-top vs bowl) were tested, along with changes in intake cam timing (95, 110, 125) and fuel pressure (0.4 MPa vs 3 MPa). The goal was to replicate the engine speed, manifold pressure, fuel pressure, and testing temperature from an engine startup trace in order to investigate the first fired cycle. Results showed the bowl piston enabled lower-equivalence-ratio engine starts with gasoline fuel, while also showing lower IMEP at the same equivalence ratio compared to the flat-top piston. With E85, the bowl piston showed reduced IMEP as compression ratio increased at the same equivalence ratio. A preference for constant intake valve timing across fuels seemed to indicate that the flat-top piston might be a good flex-fuel piston. Significant improvements were seen with the higher-CR bowl piston for high-fuel-pressure starts, but no improvement was seen with low fuel pressures. Simulation work was conducted in GT-POWER to analyze the initial three cycles of engine startup for the same set of hardware used in the experiments. A steady-state validated model was modified for startup conditions. The results allowed an understanding of the relative residual levels and IMEP at the test points in the cam phasing space.
This allowed selecting additional test points that enable the use of higher residual levels, eliminating those with a trapped mass too small to produce the IMEP required for proper engine turnover. The second phase of experimental testing, for the 2nd and 3rd startup cycles, revealed that both E10 and E85 prefer the same SOI of 240° bTDC at the second and third startup cycles for the flat-top piston and high injection pressures. The optimal cam timing for E85 at startup showed that it tolerates more residuals than E10. Higher internal residual levels drive down the equivalence ratio (Φ) requirement for both fuels up to their combustion stability limit; this is thought to be a direct benefit to vaporization due to the increased cycle-start temperature. Benefits are shown for an advanced-IMOP and retarded-EMOP strategy at engine startup. Overall, the amount of residuals preferred by an engine running E10 at startup is thought to be constant across engine speeds, which could enable easier selection of optimized cam positions across the startup speed range.
Abstract:
For a microgrid with a high penetration level of renewable energy, energy storage use becomes more integral to the system performance due to the stochastic nature of most renewable energy sources. This thesis examines the use of droop control of an energy storage source in dc microgrids in order to optimize a global cost function. The approach involves using a multidimensional surface to determine the optimal droop parameters based on load and state of charge. The optimal surface is determined using knowledge of the system architecture and can be implemented with fully decentralized source controllers. The optimal surface control of the system is presented. Derivations of a cost function along with the implementation of the optimal control are included. Results were verified using a hardware-in-the-loop system.
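As a rough illustration (not the thesis's actual cost function or optimal surface), the standard dc droop law V = V_ref − R_droop·I can be combined with a hypothetical lookup that picks the droop resistance from load and state of charge, which is the shape of a decentralized surface-based controller:

```python
# Droop control sketch for a dc storage source. pick_droop() is a made-up
# stand-in for the thesis's multidimensional optimal surface: it returns a
# stiffer droop (smaller R) when the battery is full and the load is light.

def droop_voltage(v_ref, r_droop, current):
    # Standard dc droop law: output voltage sags linearly with current.
    return v_ref - r_droop * current

def pick_droop(load_pu, soc):
    # Hypothetical surface: parameters below are illustrative only.
    base = 0.05
    return base * (1.5 - soc) * (1.0 + 0.2 * load_pu)

r = pick_droop(load_pu=0.8, soc=0.6)
print(round(droop_voltage(48.0, r, 10.0), 3))
```

Because each source evaluates its own surface from locally measurable quantities, no communication between controllers is needed, matching the fully decentralized implementation described above.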
Abstract:
OBJECTIVE: In search of an optimal compression therapy for venous leg ulcers, a systematic review and meta-analysis was performed of randomized controlled trials (RCT) comparing compression systems based on stockings (MCS) with various bandages. METHODS: RCT were retrieved from six sources and reviewed independently. The primary endpoint, completion of healing within a defined time frame, and the secondary endpoints, time to healing and pain, were entered into a meta-analysis using the tools of the Cochrane Collaboration. Additional subjective endpoints were summarized. RESULTS: Eight RCT (published 1985-2008) fulfilled the predefined criteria. Data presentation was adequate and showed moderate heterogeneity. The studies included 692 patients (21-178/study, mean age 61 years, 56% women). Analyzed were 688 ulcerated legs, present for 1 week to 9 years and sizing 1 to 210 cm². The observation period ranged from 12 to 78 weeks. Patient and ulcer characteristics were evenly distributed in three studies, favored the stocking groups in four, and favored the bandage group in one. Data on the pressure exerted by stockings and bandages were reported in seven and two studies, amounting to 31-56 and 27-49 mm Hg, respectively. The proportion of ulcers healed was greater with stockings than with bandages (62.7% vs 46.6%; P < .00001). The average time to healing (seven studies, 535 patients) was 3 weeks shorter with stockings (P = .0002). In no study did bandages perform better than MCS. Pain was assessed in three studies (219 patients), revealing an important advantage of stockings (P < .0001). Other subjective parameters and issues of nursing revealed an advantage of MCS as well. CONCLUSIONS: Leg compression with stockings is clearly better than compression with bandages, has a positive impact on pain, and is easier to use.
Abstract:
Audio-visual documents obtained from German TV news are classified according to the IPTC topic categorization scheme. To this end, standard text classification techniques are adapted to speech, video, and non-speech audio. For each of the three modalities, word analogues are generated: sequences of syllables for speech, "video words" based on low-level color features (color moments, color correlogram, and color wavelet), and "audio words" based on low-level spectral features (spectral envelope and spectral flatness) for non-speech audio. Such audio and video words provide a means to represent the different modalities in a uniform way. The frequencies of the word analogues represent the audio-visual documents: the standard bag-of-words approach. Support vector machines are used for supervised classification in a 1-vs-n setting. Classification based on speech outperforms all other single modalities. Combining speech with non-speech audio improves classification, and classification is further improved by supplementing speech and non-speech audio with video words. Optimal F-scores range between 62% and 94%, corresponding to 50%-84% above chance. The optimal combination of modalities depends on the category to be recognized. The construction of audio and video words from low-level features provides a good basis for the integration of speech, non-speech audio, and video.
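The uniform bag-of-words representation can be illustrated with a toy sketch: made-up syllable, audio-word, and video-word tokens are counted into frequency vectors, and a nearest-centroid rule stands in here for the paper's support vector machine. All tokens and labels below are invented.

```python
from collections import Counter
import math

def vectorize(tokens, vocab):
    # Bag-of-words: count each vocabulary token in the document.
    c = Counter(tokens)
    return [c[w] for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical training documents mixing syllables (syl_*), audio words (a*),
# and video words (v*) in one shared representation.
train = {
    "politics": ["v17 v3 a2 syl_po syl_li".split(), "v3 a2 syl_li syl_ti".split()],
    "sports":   ["v9 a7 syl_go syl_al".split(), "v9 v9 a7 syl_go".split()],
}
vocab = sorted({w for docs in train.values() for d in docs for w in d})
centroids = {}
for label, docs in train.items():
    vecs = [vectorize(d, vocab) for d in docs]
    centroids[label] = [sum(col) / len(vecs) for col in zip(*vecs)]

query = "a7 v9 syl_go syl_go".split()
qv = vectorize(query, vocab)
predicted = max(centroids, key=lambda lbl: cosine(qv, centroids[lbl]))
print(predicted)
```

The point of the sketch is that once every modality is tokenized, a single text-style pipeline handles all of them; swapping the centroid rule for an SVM changes only the classifier, not the representation.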