812 results for Design approach


Relevance: 30.00%

Abstract:

Percutaneous nephrolithotomy (PCNL) for the treatment of renal stones and other related renal diseases has proved its efficacy and has stood the test of time compared with open surgical methods and extracorporeal shock wave lithotripsy. However, access to the collecting system of the kidney is not easy, because the available intra-operative imaging modalities provide only a two-dimensional view of the surgical scenario. With this lack of visual information, several punctures are often necessary, which increases the risk of renal bleeding; splanchnic, vascular or pulmonary injury; or damage to the collecting system, which sometimes makes continuation of the procedure impossible. To address this problem, this paper proposes a workflow for the introduction of a stereotactic needle guidance system for PCNL procedures. An analysis of the imposed clinical requirements and an instrument guidance approach that provides the physician with more intuitive planning and visual guidance for accessing the collecting system of the kidney are presented.

Relevance: 30.00%

Abstract:

Background: Breast cancer is the most common cancer among women. Tamoxifen is the preferred drug for estrogen receptor-positive breast cancer treatment, yet many of these cancers are intrinsically resistant to tamoxifen or acquire resistance during treatment. Therefore, scientists are searching for breast cancer drugs that have different molecular targets. Methodology: Recently, a computational approach was used to successfully design peptides that are new lead compounds against breast cancer. We used replica exchange molecular dynamics to predict the structure and dynamics of active peptides, leading to the discovery of smaller bioactive peptides. Conclusions: These analogs inhibit estrogen-dependent cell growth in a mouse uterine growth assay, a test showing reliable correlation with human breast cancer inhibition. We outline the computational methods that were tried and used along with the experimental information that led to the successful completion of this research.
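The replica exchange molecular dynamics mentioned above periodically proposes swapping configurations between replicas held at different temperatures. A minimal sketch of the standard Metropolis exchange criterion follows; the energies, temperatures, and unit system are toy values for illustration, not the peptide simulations of the study:

```python
import math
import random

def exchange_accept(e_i, e_j, t_i, t_j, k_b=1.0):
    """Metropolis criterion for swapping the configurations of two replicas
    held at temperatures t_i and t_j with current energies e_i and e_j.

    The swap is accepted with probability
    min(1, exp[(beta_i - beta_j) * (e_i - e_j)]), where beta = 1/(k_b * T),
    which preserves the joint Boltzmann distribution of the replica set.
    """
    delta = (1.0 / (k_b * t_i) - 1.0 / (k_b * t_j)) * (e_i - e_j)
    return delta >= 0 or random.random() < math.exp(delta)
```

A swap that moves the lower-energy configuration to the colder replica is always accepted; the reverse move is accepted only occasionally, which is what lets trapped replicas escape local minima.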

Relevance: 30.00%

Abstract:

Switzerland implemented a risk-based monitoring of Swiss dairy products in 2002 based on a risk assessment (RA) that considered the probability of exceeding a microbiological limit value set by law. A new RA was launched in 2007 to review and further develop the previous assessment, and to make recommendations for future risk-based monitoring according to current risks. The resulting qualitative RA was designed to ascertain the risk to human health from the consumption of Swiss dairy products. The products and microbial hazards to be considered in the RA were determined based on a risk profile. The hazards included Campylobacter spp., Listeria monocytogenes, Salmonella spp., Shiga toxin-producing Escherichia coli, coagulase-positive staphylococci and Staphylococcus aureus enterotoxin. The release assessment considered the prevalence of the hazards in bulk milk samples, the influence of the process parameters on the microorganisms, and the influence of the type of dairy. The exposure assessment was linked to the production volume. An overall probability was estimated combining the probabilities of release and exposure for each combination of hazard, dairy product and type of dairy. This overall probability represents the likelihood of a product from a certain type of dairy exceeding the microbiological limit value and being passed on to the consumer. The consequences could not be fully assessed due to lack of detailed information on the number of disease cases caused by the consumption of dairy products. The results were expressed as a ranking of overall probabilities. Finally, recommendations for the design of the risk-based monitoring programme and for filling the identified data gaps were given. The aims of this work were (i) to present the qualitative RA approach for Swiss dairy products, which could be adapted to other settings and (ii) to discuss the opportunities and limitations of the qualitative method.
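The step of combining the release and exposure probabilities into an overall probability for each hazard/product/dairy-type combination can be sketched as a lookup over qualitative levels. The minimum rule, the level names, and the example combinations below are illustrative assumptions, not the expert-defined combination scheme used in the Swiss assessment:

```python
LEVELS = ["negligible", "low", "medium", "high"]

def combine(release, exposure):
    """Illustrative combination rule: the overall probability is capped by
    the weaker of the two legs (a minimum rule over ordered levels)."""
    i = min(LEVELS.index(release), LEVELS.index(exposure))
    return LEVELS[i]

def rank(combinations):
    """Rank (hazard, product, release, exposure) tuples by the combined
    overall probability, highest first, mirroring the reported ranking."""
    scored = [(h, p, combine(r, e)) for h, p, r, e in combinations]
    return sorted(scored, key=lambda t: LEVELS.index(t[2]), reverse=True)
```

In a qualitative assessment of this kind, the combination matrix itself is a modelling choice that should be documented and reviewed by the experts, which is one of the limitations of the qualitative method the authors set out to discuss.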

Relevance: 30.00%

Abstract:

Utilization of biogas can provide a source of renewable energy in both heat and power generation. Combustion of biogas in land-based gas turbines for power generation is a promising approach to reducing greenhouse gas emissions and U.S. dependence on foreign fossil fuels. Biogas is a byproduct of the decomposition of organic matter and consists primarily of CH4 together with large amounts of CO2. The focus of this research was to design a combustion device and investigate the effects of increasing levels of CO2 addition on the combustion of pure CH4 with air. Using an atmospheric-pressure, swirl-stabilized dump combustor, emissions data and flame stability limits were measured and analyzed. In particular, CO2, CO, and NOx were the combustion products of primary interest. Additionally, the occurrence of lean blowout and combustion pressure oscillations, which significantly limit the operating range of actual gas turbines, was observed. Preliminary kinetic and equilibrium modeling was performed using Cantera and CEA for the CH4/CO2/air combustion systems to analyze the effect of CO2 on adiabatic flame temperature and emission levels. The numerical and experimental results show similar dependence of emissions on equivalence ratio, CO2 addition, inlet air temperature, and combustor residence time. (C) 2014 Elsevier Ltd. All rights reserved.
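The diluting effect of CO2 on flame temperature, which drives the emissions and stability trends studied here, can be sketched with a constant-heat-capacity energy balance for stoichiometric CH4/air with added CO2. The heating value and mean product heat capacities below are round illustrative numbers, not the Cantera/CEA property data used in the study:

```python
def adiabatic_flame_temp(x_co2, t_in=298.0):
    """Rough constant-cp estimate of the adiabatic flame temperature [K] for
    CH4 + 2(O2 + 3.76 N2) + x_co2 CO2 -> (1 + x_co2) CO2 + 2 H2O + 7.52 N2.

    Assumptions (illustrative, not from the paper): LHV of CH4 ~802.3 kJ/mol
    and constant mean product heat capacities evaluated near flame
    temperatures; real calculations equilibrate enthalpy at constant H, P.
    """
    lhv = 802_300.0                                   # J per mol CH4
    cp = {"CO2": 57.0, "H2O": 44.0, "N2": 33.0}       # mean J/(mol K)
    n = {"CO2": 1.0 + x_co2, "H2O": 2.0, "N2": 7.52}  # product moles per mol CH4
    return t_in + lhv / sum(n[s] * cp[s] for s in n)
```

The diluent CO2 adds thermal mass without adding heat release, so the estimated flame temperature falls monotonically with CO2 addition, consistent with the lower NOx and narrower stability limits reported for biogas-like mixtures.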

Relevance: 30.00%

Abstract:

In recent years sphingolipids have emerged as important signaling molecules regulating fundamental cell responses such as cell death and differentiation, proliferation, and aspects of inflammation. Ceramide in particular has been a main focus of research, since it possesses pro-apoptotic capacity in many cell types. A counterpart of ceramide was found in sphingosine-1-phosphate (S1P), which is generated from ceramide by the consecutive actions of ceramidase and sphingosine kinase. S1P can potently induce cell proliferation by binding to and activating the Edg family of receptors, which have since been renamed S1P receptors. Evidently, a delicate balance between ceramide and sphingosine-1-phosphate determines whether cells undergo apoptosis or proliferate, two cell responses that are critically involved in tumor development. Shifting the balance in favor of ceramide, e.g. by inhibiting ceramidase or sphingosine kinase activities, may support the pro-apoptotic action of ceramide and thus may have beneficial effects in cancer therapy. This review will summarize novel insights into the regulation of sphingolipid formation and their potential involvement in tumor development. Finally, we will pinpoint potential new targets for tumor therapy.

Relevance: 30.00%

Abstract:

During general anesthesia, drugs are administered to provide hypnosis, analgesia, and skeletal muscle relaxation. In this paper, the main components of a newly developed controller for skeletal muscle relaxation are described. Muscle relaxation is controlled by the administration of neuromuscular blocking agents. The degree of relaxation is assessed by supramaximal train-of-four stimulation of the ulnar nerve and measurement of the electromyogram response of the adductor pollicis muscle. For closed-loop control purposes, a physiologically based pharmacokinetic and pharmacodynamic model of the neuromuscular blocking agent mivacurium is derived. The model is used to design an observer-based state feedback controller. In contrast to similar automatic systems described in the literature, this controller makes use of two different measures obtained from the train-of-four measurement to maintain the desired level of relaxation. The controller was validated in a clinical study comparing its performance to that of the anesthesiologist. The controller was able to maintain a preselected degree of muscle relaxation with excellent precision while minimizing drug administration, and it performed at least as well as the anesthesiologist.
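The observer-based state feedback structure can be sketched on a toy discrete-time linear system: the controller feeds back the observer's state estimate, and the observer is corrected by the measured output (the role played by the train-of-four response). The matrices and gains below are hypothetical and hand-tuned for illustration; they are not the mivacurium pharmacokinetic/pharmacodynamic model derived in the paper, and regulation to the origin stands in for holding the relaxation setpoint:

```python
import numpy as np

# Toy 2-state plant (hypothetical): x[k+1] = A x[k] + B u[k], y[k] = C x[k]
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
K = np.array([[2.0, 4.0]])    # state-feedback gain (hand-tuned, stabilizing)
L = np.array([[0.5], [0.3]])  # observer gain (hand-tuned, stabilizing)

def simulate(steps=200):
    """Run the closed loop; returns the final true state and its estimate."""
    x = np.array([[1.0], [0.0]])   # true state (unknown to the controller)
    xh = np.zeros((2, 1))          # observer estimate
    for _ in range(steps):
        y = C @ x                  # measurement of the plant output
        u = -K @ xh                # feedback acts on the *estimate*
        x = A @ x + B @ u
        xh = A @ xh + B @ u + L @ (y - C @ xh)  # Luenberger correction term
    return x, xh
```

By the separation principle, the controller and observer can be designed independently: the closed-loop poles of (A - BK) and the observer poles of (A - LC) together determine convergence of both the state and the estimation error.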

Relevance: 30.00%

Abstract:

OBJECTIVE: To describe the advantages and surgical technique of a trochanteric flip osteotomy in combination with a Kocher-Langenbeck approach for the treatment of selected acetabular fractures. DESIGN: Consecutive series, teaching hospital. METHODS: Through mobilization of the vastus lateralis muscle, a slice of the greater trochanter with the attached gluteus medius muscle can be flipped anteriorly. The gluteus minimus muscle can then be easily mobilized, giving free access to the posterosuperior and superior acetabular wall area. Damage to the abductor muscles by vigorous retraction can be avoided, potentially resulting in less ectopic ossification. Ten consecutive cases of acetabular fractures treated with this approach are reported. In eight cases, an anatomic reduction was achieved; in the remaining two cases with severe comminution, the reduction was within one to three millimeters. The trochanteric fragment was fixed with two 3.5-millimeter cortical screws. RESULTS: All osteotomies healed in anatomic position within six to eight weeks postoperatively. Abductor strength was symmetric in eight patients and mildly reduced in two patients. Heterotopic ossification was limited to Brooker classes 1 and 2 without functional impairment at an average follow-up of twenty months. No femoral head necrosis was observed. CONCLUSION: This technique allows better visualization, more accurate reduction, and easier fixation of cranial acetabular fragments. Cranial migration of the greater trochanter after fixation with two screws is unlikely to occur because of the distal pull of the vastus lateralis muscle, balancing the cranial pull of the gluteus medius muscle.

Relevance: 30.00%

Abstract:

Medical errors originating in health care facilities are a significant source of preventable morbidity, mortality, and healthcare costs. Voluntary error report systems that collect information on the causes and contributing factors of medical errors regardless of the resulting harm may be useful for developing effective harm prevention strategies. Some patient safety experts question the utility of data from errors that did not lead to harm to the patient, also called near misses. A near miss (a.k.a. close call) is an unplanned event that did not result in injury to the patient. Only a fortunate break in the chain of events prevented injury. We use data from a large voluntary reporting system of 836,174 medication errors from 1999 to 2005 to provide evidence that the causes and contributing factors of errors that result in harm are similar to the causes and contributing factors of near misses. We develop Bayesian hierarchical models for estimating the log odds of selecting a given cause (or contributing factor) of error given harm has occurred and the log odds of selecting the same cause given that harm did not occur. The posterior distribution of the correlation between these two vectors of log-odds is used as a measure of the evidence supporting the use of data from near misses and their causes and contributing factors to prevent medical errors. In addition, we identify the causes and contributing factors that have the highest or lowest log-odds ratio of harm versus no harm. These causes and contributing factors should also be a focus in the design of prevention strategies. This paper provides important evidence on the utility of data from near misses, which constitute the vast majority of errors in our data.
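The comparison at the heart of the analysis, the log odds of a reported error naming a given cause under harm versus under a near miss, can be sketched empirically. The counts below are hypothetical, and this sketch computes a plain empirical correlation between the two log-odds vectors rather than the posterior correlation from the paper's Bayesian hierarchical model:

```python
import math

def log_odds(count, total):
    """Log odds that a reported error selects a given cause."""
    p = count / total
    return math.log(p / (1.0 - p))

def correlation(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / math.sqrt(vx * vy)

# Hypothetical counts for four causes, split by harm vs. near miss
harm = {"distraction": 120, "labeling": 60, "dosage_calc": 90, "handoff": 30}
near_miss = {"distraction": 1100, "labeling": 700, "dosage_calc": 800, "handoff": 250}
lo_harm = [log_odds(v, sum(harm.values())) for v in harm.values()]
lo_near = [log_odds(v, sum(near_miss.values())) for v in near_miss.values()]
```

A correlation near 1 between the two vectors is the pattern the paper reports: causes that dominate harmful errors also dominate near misses, which is the evidence for using near-miss data in prevention design.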

Relevance: 30.00%

Abstract:

Arabidopsis thaliana has emerged as a leading model species in plant genetics and functional genomics including research on the genetic causes of heterosis. We applied a triple testcross (TTC) design and a novel biometrical approach to identify and characterize quantitative trait loci (QTL) for heterosis of five biomass-related traits by (i) estimating the number, genomic positions, and genetic effects of heterotic QTL, (ii) characterizing their mode of gene action, and (iii) testing for the presence of epistatic effects by a genome-wide scan and marker x marker interactions. In total, 234 recombinant inbred lines (RILs) of Arabidopsis hybrid C24 x Col-0 were crossed to both parental lines and their F1 and analyzed with 110 single-nucleotide polymorphism (SNP) markers. QTL analyses were conducted using linear transformations Z1, Z2, and Z3 calculated from the adjusted entry means of TTC progenies. With Z1, we detected 12 QTL displaying augmented additive effects. With Z2, we mapped six QTL for augmented dominance effects. A one-dimensional genome scan with Z3 revealed two genomic regions with significantly negative dominance x additive epistatic effects. Two-way analyses of variance between marker pairs revealed nine digenic epistatic interactions: six reflecting dominance x dominance effects with variable sign and three reflecting additive x additive effects with positive sign. We conclude that heterosis for biomass-related traits in Arabidopsis has a polygenic basis with overdominance and/or epistasis being presumably the main types of gene action.

Relevance: 30.00%

Abstract:

Heterosis is widely used in breeding, but the genetic basis of this biological phenomenon has not been elucidated. We postulate that additive and dominance genetic effects as well as two-locus interactions estimated in classical QTL analyses are not sufficient for quantifying the contributions of QTL to heterosis. A general theoretical framework for determining the contributions of different types of genetic effects to heterosis was developed. Additive x additive epistatic interactions of individual loci with the entire genetic background were identified as a major component of midparent heterosis. On the basis of these findings we defined a new type of heterotic effect denoted as augmented dominance effect di* that comprises the dominance effect at each QTL minus half the sum of additive x additive interactions with all other QTL. We demonstrate that genotypic expectations of QTL effects obtained from analyses with the design III using testcrosses of recombinant inbred lines and composite-interval mapping precisely equal genotypic expectations of midparent heterosis, thus identifying genomic regions relevant for expression of heterosis. The theory for QTL mapping of multiple traits is extended to the simultaneous mapping of newly defined genetic effects to improve the power of QTL detection and distinguish between dominance and overdominance.
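The augmented dominance effect defined above can be computed directly from per-locus dominance effects and pairwise additive x additive interactions: d_i* equals the dominance effect at QTL i minus half the sum of its additive x additive interactions with all other QTL. A sketch with hypothetical effect values:

```python
def augmented_dominance(d, aa):
    """Augmented dominance effects d_i* = d_i - (1/2) * sum_j aa_ij (j != i),
    following the definition in the abstract.

    d:  maps QTL name -> dominance effect (hypothetical values).
    aa: maps unordered QTL pairs (tuples) -> additive x additive effect.
    """
    out = {}
    for i, di in d.items():
        s = sum(v for pair, v in aa.items() if i in pair)
        out[i] = di - 0.5 * s
    return out
```

For example, with d = {"q1": 0.8, "q2": 0.3} and a single interaction aa = {("q1", "q2"): 0.4}, each locus gives up half the shared epistatic term, yielding d_1* = 0.6 and d_2* = 0.1. A positive interaction with the genetic background thus deflates the apparent dominance at each locus, which is how epistasis enters midparent heterosis in this framework.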

Relevance: 30.00%

Abstract:

For the past sixty years, waveguide slot radiator arrays have played a critical role in microwave radar and communication systems. They feature a well-characterized antenna element capable of direct integration into a low-loss feed structure with highly developed and inexpensive manufacturing processes. Waveguide slot radiators comprise some of the highest-performance antenna arrays ever constructed, in terms of side-lobe level, efficiency, and related metrics. A wealth of information is available in the open literature regarding design procedures for linearly polarized waveguide slots. By contrast, despite their presence in some of the earliest published reports, little has been presented to date on array designs for circularly polarized (CP) waveguide slots. Moreover, the designs that have been presented feature a classic traveling-wave, efficiency-reducing beam tilt. This work proposes a unique CP waveguide slot architecture that mitigates these problems, along with a thorough design procedure employing widely available, modern computational tools. The proposed array topology features simultaneous dual-CP operation with grating-lobe-free, broadside radiation, high aperture efficiency, and good return loss. A traditional X-Slot CP element is employed with the inclusion of a slow-wave-structure passive phase shifter to ensure broadside radiation without the need for performance-limiting dielectric loading. It is anticipated that this technology will be advantageous for upcoming polarimetric radar and Ka-band SatCom systems. The presented design methodology represents a philosophical shift away from traditional waveguide slot radiator design practices. Rather than providing design curves and/or analytical expressions for equivalent circuit models, simple first-order design rules, generated via parametric studies, are presented with the understanding that device optimization and design will be carried out computationally.
A unit-cell, S-parameter-based approach provides a sufficient reduction of complexity to permit efficient, accurate device design with attention to realistic, application-specific mechanical tolerances. A transparent, start-to-finish example of the design procedure for a linear sub-array at X-band is presented. Both unit cell and array performance are calculated via finite element method simulations. Results are confirmed via good agreement with finite-difference time-domain calculations. Array performance exhibiting grating-lobe-free, broadside-scanned, dual-CP radiation with better than 20 dB return loss and over 75% aperture efficiency is presented.
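The broadside-versus-beam-tilt behavior discussed above follows directly from the uniform linear array factor: equal element excitation phases put the main beam at broadside, while a residual traveling-wave phase progression between elements tilts it off broadside. A sketch with hypothetical element count and spacing (not the dimensions of the presented sub-array):

```python
import cmath
import math

def array_factor(n, d_over_lambda, phase_step, theta):
    """|AF| of an n-element uniform linear array; theta is measured from
    broadside and phase_step is the inter-element feed phase [rad]."""
    k_d = 2.0 * math.pi * d_over_lambda
    af = sum(cmath.exp(1j * i * (k_d * math.sin(theta) + phase_step))
             for i in range(n))
    return abs(af)

def peak_angle(n, d_over_lambda, phase_step):
    """Main-beam direction [rad], scanned over a 1-degree grid in +/-60 deg."""
    angles = [math.radians(a) for a in range(-60, 61)]
    return max(angles, key=lambda t: array_factor(n, d_over_lambda, phase_step, t))
```

With zero phase step the peak sits exactly at broadside; a nonzero residual phase (the traveling-wave case) steers the peak to where k*d*sin(theta) cancels it, which is the efficiency-reducing tilt the proposed phase-shifting element is designed to remove.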

Relevance: 30.00%

Abstract:

Hepatitis C virus (HCV) vaccine efficacy may crucially depend on immunogen length and coverage of viral sequence diversity. However, covering a considerable proportion of the circulating viral sequence variants would likely require long immunogens, which, for the conserved portions of the viral genome, would contain unnecessarily redundant sequence information. In this study, we present the design and in vitro performance analysis of a novel "epitome" approach that compresses frequent immune targets of the cellular immune response against HCV into a shorter immunogen sequence. Compression of immunological information is achieved by partially overlapping shared sequence motifs between individual epitopes. At the same time, sequence diversity coverage is provided by taking advantage of emerging cross-reactivity patterns among epitope variants, so that epitope variants associated with the broadest variant cross-recognition are preferentially included. The processing and presentation analysis of specific epitopes included in such a compressed, in vitro-expressed HCV epitome indicated effective processing of a majority of tested epitopes, although presentation of some epitopes may require refined sequence design. Together, the present study establishes the epitome approach as a potentially powerful tool for vaccine immunogen design, especially suitable for the induction of cellular immune responses against highly variable pathogens.
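The compression idea, overlapping shared motifs so that one stretch of sequence covers several epitopes, is essentially a shortest-common-superstring construction. A greedy sketch with hypothetical peptide strings follows; the actual epitome design additionally weighs cross-reactivity among epitope variants, which this sketch ignores:

```python
def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def compress(epitopes):
    """Greedy shortest-common-superstring sketch: repeatedly merge the
    ordered pair of sequences with the largest suffix/prefix overlap."""
    seqs = list(epitopes)
    while len(seqs) > 1:
        k, i, j = max((overlap(a, b), i, j)
                      for i, a in enumerate(seqs)
                      for j, b in enumerate(seqs) if i != j)
        merged = seqs[i] + seqs[j][k:]
        seqs = [s for idx, s in enumerate(seqs) if idx not in (i, j)] + [merged]
    return seqs[0]
```

For two hypothetical 10-mers sharing a 5-residue motif, the merged immunogen is 15 residues instead of 20, illustrating how overlap removes the redundancy that plain concatenation would carry.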

Relevance: 30.00%

Abstract:

Renewable energy is growing in demand, and the manufacture of solar cells and photovoltaic arrays has advanced dramatically in recent years: photovoltaic production has doubled every 2 years, increasing by an average of 48% each year since 2002. After a general overview of solar cell operation and its model, this thesis covers the three generations of photovoltaic solar cell technology and the motivation for dedicating research to nanostructured solar cells. For current-generation solar cells, efficiency depends on several factors, such as photon capture, photon reflection, carrier generation by photons, and carrier transport and collection, as well as on the absorption of photons. The absorption coefficient, α, and its dependence on the wavelength, λ, are of major concern for improving efficiency. Nano-silicon structures (quantum wells and quantum dots) have a unique advantage over bulk and thin-film crystalline silicon in that multiple direct and indirect band gaps can be realized by appropriate size control of the quantum wells. This enables photons at multiple wavelengths of the solar spectrum to be absorbed efficiently. There is limited research on the calculation of the absorption coefficient in silicon nanostructures. We present a theoretical approach to calculating the absorption coefficient using quantum mechanical calculations of the interaction of photons with the electrons of the valence band. One model is that the oscillator strength of the direct optical transitions is enhanced by the quantum-confinement effect in Si nanocrystallites. Such quantum wells can be realized in practice in porous silicon. The absorption coefficient shows a peak of 64,638.2 cm^-1 at λ = 343 nm, at a photon energy of ξ = 3.49 eV (λ = 355.532 nm). I have shown that a large value of the absorption coefficient α, comparable to that of bulk silicon, is possible in silicon QDs because of carrier confinement.
Our results show that we can enhance the absorption coefficient by an order of magnitude while obtaining a nearly constant absorption coefficient curve over the visible spectrum. The validity of the plots is verified by correlation with experimental photoluminescence plots. A generic efficiency comparison of a p-i-n junction solar cell with and without QDs is given. The design and fabrication technique is discussed in brief. I have shown that by using QDs in the intrinsic region of a cell, we can improve the efficiency by a factor of 1.865. Thus, for a first-generation solar cell with an efficiency of 26%, the efficiency can be improved to nearly 48.5% by using QDs.
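The size dependence of the effective gap invoked above can be sketched with a particle-in-a-box estimate: confining carriers in a dot of side L shifts the electron and hole ground levels upward by h^2/(8 m* L^2) each. The infinite-well model and the effective masses below are textbook-level simplifications for illustration, not the full valence-band matrix-element calculation used in the thesis:

```python
# Physical constants and illustrative Si parameters
H = 6.626e-34        # Planck constant, J s
M0 = 9.109e-31       # electron rest mass, kg
EV = 1.602e-19       # J per eV
E_G_BULK = 1.12      # bulk Si indirect gap, eV
ME = 0.26 * M0       # electron effective mass (common literature value)
MH = 0.49 * M0       # heavy-hole effective mass (common literature value)

def confined_gap(l_nm):
    """Effective gap [eV] of a cubic Si dot of side l_nm, treating each
    carrier as a particle in an infinite well: E1 = h^2 / (8 m* L^2)."""
    l = l_nm * 1e-9
    shift = lambda m: H ** 2 / (8.0 * m * l ** 2) / EV
    return E_G_BULK + shift(ME) + shift(MH)
```

Because the confinement shift scales as 1/L^2, smaller dots absorb at shorter wavelengths, which is the size-tuning of band gaps the thesis exploits to cover multiple bands of the solar spectrum.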

Relevance: 30.00%

Abstract:

To mitigate greenhouse gas (GHG) emissions and reduce U.S. dependence on imported oil, the United States is pursuing several options to create biofuels from renewable woody biomass (hereafter referred to as "biomass"). Because of the distributed nature of biomass feedstock, the cost and complexity of biomass recovery operations pose significant challenges that hinder increased biomass utilization for energy production. To facilitate the exploration of a wide variety of conditions that promise profitable biomass utilization and to tap unused forest residues, it is proposed to develop biofuel supply chain models based on optimization and simulation approaches. The biofuel supply chain is structured around four components: biofuel facility locations and sizes, biomass harvesting/forwarding, transportation, and storage. A Geographic Information System (GIS) based approach is proposed as a first step for selecting potential facility locations for biofuel production from forest biomass, based on a set of evaluation criteria such as accessibility to biomass, the railway/road transportation network, water bodies, and workforce availability. The development of optimization and simulation models is also proposed. The results of the models will be used to determine (1) the number, location, and size of the biofuel facilities, and (2) the amounts of biomass to be transported between the harvesting areas and the biofuel facilities over a 20-year timeframe. The multi-criteria objective is to simultaneously minimize the weighted sum of the delivered feedstock cost, energy consumption, and GHG emissions. Finally, a series of sensitivity analyses will be conducted to identify the sensitivity of the decisions, such as the optimal site selected for the biofuel facility, to changes in influential parameters, such as biomass availability and transportation fuel price.
Intellectual Merit: The proposed research will facilitate the exploration of a wide variety of conditions that promise profitable biomass utilization in the renewable biofuel industry. The GIS-based facility location analysis considers a series of factors that have not been considered simultaneously in previous research. Location analysis is critical to the financial success of producing biofuel. The modeling of woody biomass supply chains using both optimization and simulation, combined with the GIS-based approach as a precursor, has not been done to date. The optimization and simulation models can help to ensure the economic and environmental viability and sustainability of the entire biofuel supply chain at both the strategic design level and the operational planning level.

Broader Impacts: The proposed models for biorefineries can be applied to other types of manufacturing or processing operations using biomass, because the biomass feedstock supply chain is similar, if not the same, for biorefineries, biomass-fired or co-fired power plants, and torrefaction/pelletization operations. Additionally, the results of this research will continue to be disseminated internationally through publications in journals such as Biomass and Bioenergy and Renewable Energy, and presentations at conferences such as the 2011 Industrial Engineering Research Conference. For example, part of the research work related to biofuel facility identification has been published: Zhang, Johnson and Sutherland [2011] (see Appendix A). There will also be opportunities for the Michigan Tech campus community to learn about the research through the Sustainable Future Institute.
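The multi-criteria weighted-sum objective described above can be sketched as a normalized scoring of candidate facility sites. The site names, metric values, and weights below are hypothetical placeholders for the GIS-derived criteria and model outputs, and min-max normalization is an illustrative choice to keep the differently-scaled criteria comparable:

```python
def best_site(metrics, weights=(0.5, 0.3, 0.2)):
    """Pick the site minimizing w1*cost + w2*energy + w3*GHG.

    metrics: site -> (delivered_cost, energy_use, ghg_emissions).
    Each criterion is min-max normalized to [0, 1] before weighting so
    that no single unit system dominates the weighted sum.
    """
    cols = list(zip(*metrics.values()))
    lo = [min(c) for c in cols]
    span = [(max(c) - min(c)) or 1.0 for c in cols]  # guard against zero span

    def score(vals):
        return sum(w * (v - l) / s
                   for w, v, l, s in zip(weights, vals, lo, span))

    return min(metrics, key=lambda site: score(metrics[site]))
```

In the full models, this scalar scoring would be replaced by an optimization over facility sizes and 20-year biomass flows, but the weighted-sum trade-off among cost, energy, and GHG emissions is the same.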

Relevance: 30.00%

Abstract:

This research initiative was triggered by the problems of water management in Polymer Electrolyte Membrane Fuel Cells (PEMFC). In low temperature fuel cells such as the PEMFC, some of the water produced by the chemical reaction remains in its liquid state. Excess water produced by the fuel cell must be removed from the system to avoid flooding of the gas diffusion layers (GDL). The GDL is responsible for transporting reactant gas to the active sites and removing the water produced at those sites. If the GDL is flooded, the supply gas cannot reach the reactive sites and the fuel cell fails. The water removal method chosen in this research is to exert a variable asymmetrical force on a liquid droplet. As the drop of liquid is subjected to an external vibrational force in the form of a periodic wave, it begins to oscillate. A fluidic oscillator is capable of producing a pulsating flow using a simple balance of momentum fluxes between three impinging jets. By connecting the outputs of the oscillator to the gas channels of a fuel cell, a flow pulsation can be imposed on a water droplet formed within the gas channel during fuel cell operation. The lowest frequency produced by this design is approximately 202 Hz, obtained with a 20-inch feedback port length and a supply pressure of 5 psig. This was determined by setting up a fluidic network with appropriate data acquisition. The components include a fluidic amplifier, valves and fittings, flow meters, a pressure gage, an NI-DAQ system, Siglab®, Matlab software, and four PCB microphones. The operating environment of the water droplet was reviewed, the speed of the pressure wave traveling down the square channel was precisely estimated, and measurement devices were carefully selected. Alternative measurement devices and their application to pressure wave measurement were also considered.
Methods for the experimental setup and possible approaches were recommended, with some discussion of potential problems in implementing this technique. Some computational fluid dynamics modeling was also performed as an approach to oscillator design.
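A rough consistency check on the measured 202 Hz is the acoustic round-trip time in one feedback port. The sketch below assumes the switching period scales with that round trip, which gives only an upper bound on frequency: finite jet-switching time adds to the period and lowers the actual frequency, so the estimate is expected to come out above the measured value:

```python
def roundtrip_frequency(port_length_in, c=343.0):
    """Upper-bound frequency estimate f = c / (2 L) from the acoustic
    round-trip time in a feedback port of length L.

    Assumptions (not from the thesis): the oscillation period is at least
    the round-trip time 2 L / c, and c is the speed of sound in air at
    room temperature [m/s].
    """
    l = port_length_in * 0.0254  # inches -> meters
    return c / (2.0 * l)
```

For the 20-inch port this gives roughly 340 Hz, above the measured 202 Hz, consistent with the switching dynamics of the impinging jets adding delay beyond pure acoustic transit.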