934 results for Branch and bound algorithms
Abstract:
The role of gap junction channels in cardiac impulse propagation is complex. This review focuses on the differential expression of connexins in the heart and the biophysical properties of gap junction channels under normal and disease conditions. Insights into the structural determinants of impulse propagation have been gained from biochemical and immunocytochemical studies performed on tissue extracts and intact cardiac tissue. These have defined the distinctive connexin coexpression patterns and relative levels in different cardiac tissues. Functional determinants of impulse propagation have emerged from electrophysiological experiments carried out on cell pairs. The static properties (channel number and conductance) limit the current flow between adjacent cardiomyocytes and thus set the basic conduction velocity. The dynamic properties (voltage-sensitive gating and kinetics of channels) are responsible for a modulation of the conduction velocity during propagated action potentials. The effect is moderate and depends on the type of connexin and channel. For homomeric-homotypic channels, the influence is small to medium; for homomeric-heterotypic channels, it is medium to strong. Since no data are currently available on heteromeric channels, their influence on impulse propagation remains speculative. The modulation by gap junction channels is most prominent at the boundaries between cardiac tissues such as sinoatrial node-atrial muscle, atrioventricular node-His bundle, His bundle-bundle branch, and Purkinje fibers-ventricular muscle. The data predict facilitation of orthodromic propagation.
Abstract:
Terminal sialic acid residues on surface-associated glycoconjugates mediate host cell interactions of many pathogens. Addition of sialic acid-rich fetuin enhanced, and the presence of the sialidase inhibitor 2-deoxy-2,3-dehydro-N-acetylneuraminic acid reduced, the physical interaction of Neospora caninum tachyzoites and bradyzoites with Vero cell monolayers. Thus, Neospora extracts were subjected to fetuin-agarose affinity chromatography in order to isolate components potentially interacting with sialic acid residues. SDS-PAGE and silver staining of the fetuin-binding fraction revealed the presence of a single protein band of approximately 65 kDa, subsequently named NcFBP (Neospora caninum fetuin-binding protein), which was localized at the apical tip of the tachyzoites and was continuously released into the surrounding medium in a temperature-independent manner. NcFBP readily interacted with Vero cells and bound to chondroitin sulfate A and C, and anti-NcFBP antibodies interfered with tachyzoite adhesion to host cell monolayers. In addition, analysis of the fetuin-binding fraction by gelatin substrate zymography demonstrated the presence of two bands of 96 and 140 kDa exhibiting metalloprotease activity. The metalloprotease activity readily degraded glycosylated proteins such as fetuin and bovine immunoglobulin G heavy chain, whereas non-glycosylated proteins such as bovine serum albumin and immunoglobulin G light chain were not affected. These findings suggest that the fetuin-binding fraction of Neospora caninum tachyzoites contains components that could potentially be involved in host-parasite interactions.
Abstract:
Single-screw extrusion is one of the most widely used processing methods in the plastics industry, which was the third largest manufacturing industry in the United States in 2007 [5]. In order to optimize the single-screw extrusion process, tremendous efforts have been devoted to the development of accurate models over the last fifty years, especially for polymer melting in screw extruders. This has led to a good qualitative understanding of the melting process; however, quantitative predictions of melting from various models often have a large error in comparison to the experimental data. Thus, even nowadays, the process parameters and the geometry of the extruder channel for single-screw extrusion are determined by trial and error. Since new polymers are developed frequently, finding the optimum parameters to extrude these polymers by trial and error is costly and time consuming. In order to reduce the time and experimental work required for optimizing the process parameters and the geometry of the extruder channel for a given polymer, the main goal of this research was to perform a coordinated experimental and numerical investigation of melting in screw extrusion. In this work, a full three-dimensional finite element simulation of the two-phase flow in the melting and metering zones of a single-screw extruder was performed by solving the conservation equations for mass, momentum, and energy. The only previous attempt at such a three-dimensional simulation of melting in a screw extruder was made more than twenty years ago. However, that work had only limited success because of the limited capability of the computers and mathematical algorithms available at that time. The dramatic improvement in computational power and mathematical knowledge now makes it possible to run full 3-D simulations of two-phase flow in single-screw extruders on a desktop PC.
In order to verify the numerical predictions from the full 3-D simulations of two-phase flow in single-screw extruders, a detailed experimental study was performed. This experimental study included Maddock screw-freezing experiments, Screw Simulator experiments, and material characterization experiments. Maddock screw-freezing experiments were performed in order to visualize the melting profile along the single-screw extruder channel with different screw geometry configurations. These melting profiles were compared with the simulation results. Screw Simulator experiments were performed to collect shear stress and melting flux data for various polymers. Cone and plate viscometer experiments were performed to obtain the shear viscosity data needed in the simulations. An optimization code was developed to optimize two screw geometry parameters, namely, screw lead (pitch) and depth in the metering section of a single-screw extruder, such that the output rate of the extruder was maximized without exceeding the maximum temperature value specified at the exit of the extruder. This optimization code used a mesh partitioning technique in order to obtain the flow domain. The simulations in this flow domain were performed using the code developed to simulate the two-phase flow in single-screw extruders.
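The screw-geometry optimization described above (maximize the output rate subject to an exit-temperature limit) can be illustrated with a simple constrained grid search. The surrogate functions below are purely hypothetical stand-ins for the thesis's 3-D two-phase flow simulation, and all parameter ranges and coefficients are illustrative:

```python
# Constrained grid search over screw lead and metering-channel depth.
# The two surrogate functions are hypothetical stand-ins for the full
# 3-D two-phase flow simulation described in the abstract.

def output_rate(lead_mm, depth_mm):
    # Hypothetical surrogate: throughput grows with lead and depth.
    return 0.8 * lead_mm + 2.5 * depth_mm

def exit_temperature(lead_mm, depth_mm):
    # Hypothetical surrogate: shallower channels shear the melt more.
    return 180.0 + 0.5 * lead_mm + 30.0 / depth_mm

def optimize(max_exit_temp=220.0):
    best = None
    for lead in range(30, 91, 5):           # candidate leads, mm
        for depth10 in range(20, 81, 5):    # candidate depths, 0.1 mm units
            depth = depth10 / 10.0
            if exit_temperature(lead, depth) > max_exit_temp:
                continue                    # violates the temperature limit
            rate = output_rate(lead, depth)
            if best is None or rate > best[0]:
                best = (rate, lead, depth)
    return best

rate, lead, depth = optimize()
```

In practice each evaluation of the surrogate would be replaced by a full flow simulation on the meshed domain, which is why the search grid must stay coarse.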
Abstract:
This doctoral thesis presents computational work and its synthesis with experiments for internal (tube and channel geometries) as well as external (flow of a pure vapor over a horizontal plate) condensing flows. The computational work obtains accurate numerical simulations of the full two-dimensional governing equations for steady and unsteady condensing flows in gravity/0g environments. This doctoral work investigates flow features, flow regimes, attainability issues, stability issues, and responses to boundary fluctuations for condensing flows in different flow situations. This research finds new features of unsteady solutions of condensing flows; reveals interesting differences between gravity and shear driven situations; and discovers novel boundary condition sensitivities of shear driven internal condensing flows. The synthesis of computational and experimental results presented here for gravity driven in-tube flows lays the framework for future two-phase component analysis in any thermal system. It is shown for both gravity and shear driven internal condensing flows that the steady governing equations have unique solutions for a given inlet pressure, given inlet vapor mass flow rate, and fixed cooling method for the condensing surface. However, the unsteady equations of shear driven internal condensing flows can yield different “quasi-steady” solutions based on different specifications of exit pressure (equivalently, exit mass flow rate) concurrent with the inlet pressure specification. This thesis presents a novel categorization of internal condensing flows based on their sensitivity to concurrently applied boundary (inlet and exit) conditions. The computational investigations of an external shear driven flow of vapor condensing over a horizontal plate show the limits of applicability of the analytical solution. Simulations of this external condensing flow address its stability issues and throw light on flow regime transitions caused by ever-present bottom wall vibrations.
It is found that the laminar-to-turbulent transition for these flows can be affected by these ever-present bottom wall vibrations. Detailed dynamic stability analysis of this shear driven external condensing flow leads to the introduction of a new variable that characterizes the ratio of the strength of the underlying stabilizing attractor to that of the destabilizing vibrations. Besides the development of CFD tools and computational algorithms, the direct application of this research is in the effective prediction and design of two-phase components in thermal systems used in different applications. Some of the important internal condensing flow results about sensitivities to boundary fluctuations are also expected to be applicable to the flow boiling phenomenon. The novel flow sensitivities discovered through this research, if employed effectively after system-level analysis, will enable better control strategies in ground and space based two-phase thermal systems.
Abstract:
We consider an economic order quantity model in which the supplier offers an all-units quantity discount and customer demand is price sensitive. We compare a decentralized decision framework, where the selling price and replenishment policy are determined independently, to simultaneous decision making. Constant and dynamic pricing are distinguished. We derive structural properties, develop algorithms that determine the optimal pricing and replenishment policy, and show how quantity discounts influence not only the purchasing strategy but also the pricing policy. A sensitivity analysis indicates the impact of the fixed-holding cost ratio, the discount policy, and the customers' price sensitivity on the optimal decisions.
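The all-units discount structure referred to above can be handled with the textbook EOQ procedure: compute the EOQ for each price break (with holding cost proportional to the unit price), clamp it into the break's feasible interval, and pick the quantity with the lowest total annual cost. A minimal sketch under fixed (non-price-sensitive) demand; the paper's joint pricing decision is not reproduced here, and all numbers are illustrative:

```python
from math import sqrt

def eoq_all_units(demand, order_cost, hold_rate, breaks):
    """Classic all-units discount EOQ. breaks: (min_qty, unit_price), sorted."""
    def total_cost(q, price):
        return (demand * price                 # annual purchase cost
                + order_cost * demand / q      # annual ordering cost
                + hold_rate * price * q / 2)   # annual holding cost
    best = None
    for i, (qmin, price) in enumerate(breaks):
        qmax = breaks[i + 1][0] - 1 if i + 1 < len(breaks) else float("inf")
        q = sqrt(2 * order_cost * demand / (hold_rate * price))
        q = min(max(q, qmin), qmax)            # clamp EOQ into this break
        cost = total_cost(q, price)
        if best is None or cost < best[0]:
            best = (cost, q)
    return best[1]

# Illustrative: 1000 units/yr, $50/order, 20% holding rate,
# $10 per unit below 200 units, $9.50 at 200 units or more.
q = eoq_all_units(1000, 50, 0.2, [(1, 10.0), (200, 9.5)])
```

Here the discounted break wins: its unclamped EOQ (about 229 units) already lies above the 200-unit threshold, so the discount lowers both the purchase and holding cost terms.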
Abstract:
The development of susceptibility maps for debris flows is of primary importance due to population pressure in hazardous zones. However, hazard assessment by process-based modelling at a regional scale is difficult due to the complex nature of the phenomenon, the variability of local controlling factors, and the uncertainty in modelling parameters. A regional assessment must therefore rely on a simplified approach that is not highly parameter-dependent and that can provide zonation with minimum data requirements. A distributed empirical model has thus been developed for regional susceptibility assessments using essentially a digital elevation model (DEM). The model is called Flow-R, for Flow path assessment of gravitational hazards at a Regional scale (available free of charge at http://www.flow-r.org), and has been successfully applied to different case studies in various countries with variable data quality. It provides a substantial basis for a preliminary susceptibility assessment at a regional scale. The model was also found suitable for assessing other natural hazards such as rockfall, snow avalanches, and floods. The model allows for automatic source area delineation, given user criteria, and for the assessment of the propagation extent based on various spreading algorithms and simple frictional laws. We developed a new spreading algorithm, an improved version of Holmgren's direction algorithm, that is less sensitive to small variations of the DEM and avoids over-channelization, and so produces more realistic extents. The choice of datasets and algorithms is open to the user, which makes the model adaptable to various applications and levels of dataset availability. Among the possible datasets, the DEM is the only one that is strictly needed for both the source area delineation and the propagation assessment; its quality is of major importance for the accuracy of the results.
We consider a 10 m DEM resolution a good compromise between processing time and quality of results. However, valuable results have still been obtained from lower-quality DEMs with 25 m resolution.
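Flow-R's spreading step builds on Holmgren's multiple-flow-direction algorithm, in which each downslope neighbor of a cell receives a share of the flow proportional to its slope tangent raised to an exponent; larger exponents concentrate flow toward the steepest direction. A minimal sketch of the base Holmgren algorithm (not Flow-R's improved variant); the example window and exponent are illustrative:

```python
def holmgren_weights(center_elev, neighbors, exponent=4.0):
    """Flow proportions to lower neighbors, per Holmgren's algorithm.

    neighbors: list of (elevation, horizontal_distance) per neighbor cell.
    Returns one weight per neighbor (0 for non-downslope cells).
    """
    # tan(beta) = elevation drop / distance, only for downslope neighbors
    tangents = [max(center_elev - e, 0.0) / d for e, d in neighbors]
    powered = [t ** exponent for t in tangents]
    total = sum(powered)
    if total == 0.0:                  # flat cell or pit: no outflow
        return [0.0] * len(neighbors)
    return [p / total for p in powered]

# Illustrative window: the steepest drop (fourth neighbor) gets most flow;
# the uphill neighbor (third) gets none.
w = holmgren_weights(100.0,
                     [(99.0, 1.0), (99.5, 1.0), (101.0, 1.0),
                      (98.0, 1.0), (100.5, 1.414)])
```

With exponent 1 this reduces to slope-proportional multiple flow direction; as the exponent grows it approaches single (steepest-descent) flow, which is one of the knobs behind the over-channelization issue the abstract mentions.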
Abstract:
A social Semantic Web empowers its users to access collective Web knowledge in a simple manner; for that reason, controlling online privacy and reputation becomes increasingly important and must be taken seriously. This chapter presents Fuzzy Cognitive Maps (FCM) as a vehicle for Web knowledge aggregation, representation, and reasoning. With this in mind, a conceptual framework for Web knowledge aggregation, representation, and reasoning is introduced along with a use case in which the importance of investigative searching for online privacy and reputation is highlighted. It thereby demonstrates how a user can establish a positive online presence.
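The reasoning step of an FCM is a simple iteration: each concept's activation is updated from the weighted causal influences of the other concepts and squashed through a sigmoid, repeating until the map settles. A minimal sketch with an illustrative three-concept map (the weights are assumptions for demonstration, not taken from the chapter):

```python
from math import exp

def fcm_step(state, weights, lam=1.0):
    """One inference step of a Fuzzy Cognitive Map.

    state: activation of each concept, in [0, 1].
    weights[j][i]: causal influence of concept j on concept i, in [-1, 1].
    """
    n = len(state)
    nxt = []
    for i in range(n):
        total = state[i] + sum(state[j] * weights[j][i]
                               for j in range(n) if j != i)
        nxt.append(1.0 / (1.0 + exp(-lam * total)))  # sigmoid squashing
    return nxt

def fcm_run(state, weights, steps=50, tol=1e-6):
    """Iterate until the map settles into a fixed point (if it does)."""
    for _ in range(steps):
        nxt = fcm_step(state, weights)
        if max(abs(a - b) for a, b in zip(nxt, state)) < tol:
            return nxt
        state = nxt
    return state

# Illustrative map: concept 0 reinforces concept 1, which inhibits concept 2.
W = [[0.0, 0.8, 0.0],
     [0.0, 0.0, -0.6],
     [0.0, 0.0, 0.0]]
final = fcm_run([1.0, 0.0, 0.0], W)
```

Depending on the weight matrix, such iterations can also settle into limit cycles or chaotic behavior rather than a fixed point, which is why a step cap is kept.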
Abstract:
BPAG1a and BPAG1b (BPAG1a/b) constitute two major isoforms encoded by the dystonin (Dst) gene and show homology with MACF1a and MACF1b. These proteins are members of the plakin family, giant multi-modular proteins able to connect the intermediate filament, microtubule and microfilament cytoskeletal networks with each other and to distinct cell membrane sites. They also serve as scaffolds for signaling proteins that modulate cytoskeletal dynamics. To gain better insight into the functions of BPAG1a/b, we further characterized their C-terminal region, which is important for their interaction with microtubules, and assessed the role of these isoforms in the cytoskeletal organization of C2.7 myoblast cells. Our results show that alternative splicing occurs not only at the 5' end of Dst and Macf1 pre-mRNAs, as previously reported, but also at their 3' end, resulting in the expression of four additional mRNA variants of BPAG1 and MACF1. These isoform-specific C-tails were able to bundle microtubules and bound to both EB1 and EB3, two microtubule plus-end proteins. In the C2.7 cell line, knockdown of BPAG1a/b had no major effect on the organization of the microtubule and microfilament networks, but negatively affected endocytosis and maintenance of the Golgi apparatus structure, which became dispersed. Finally, knockdown of BPAG1a/b caused a specific decrease in the directness of cell migration, but did not impair initial cell adhesion. These data provide novel insights into the complexity of alternative splicing of Dst pre-mRNAs and into the role of BPAG1a/b in vesicular transport, Golgi apparatus structure, and migration in C2.7 myoblasts.
Abstract:
Plectin, a cytolinker of the plakin family, anchors the intermediate filament (IF) network formed by keratins 5 and 14 (K5/K14) to hemidesmosomes, junctional adhesion complexes in basal keratinocytes. Genetic alterations of these proteins cause epidermolysis bullosa simplex (EBS) characterized by disturbed cytoarchitecture and cell fragility. The mechanisms through which mutations located after the documented plectin IF-binding site, composed of the plakin-repeat domain (PRD) B5 and the linker, as well as mutations in K5 or K14, lead to EBS remain unclear. We investigated the interaction of the plectin C terminus, encompassing four domains, the PRD B5, the linker, the PRD C, and the C extremity, with K5/K14 using different approaches, including a rapid and sensitive fluorescent protein-binding assay based on enhanced green fluorescent protein-tagged proteins (FluoBACE). Our results demonstrate that all four plectin C-terminal domains contribute to its association with K5/K14 and act synergistically to ensure efficient IF binding. The plectin C terminus predominantly interacted with the K5/K14 coil 1 domain and bound more extensively to K5/K14 filaments compared with monomeric keratins or IF assembly intermediates. These findings indicate a multimodular association of plectin with K5/K14 filaments and give insights into the molecular basis of EBS associated with pathogenic mutations in plectin, K5, or K14 genes. Journal of Investigative Dermatology advance online publication, 10 July 2014; doi:10.1038/jid.2014.255.
Abstract:
Increasing antibiotic resistance among uropathogenic Escherichia coli (UPEC) is driving interest in therapeutic targeting of nonconserved virulence factor (VF) genes. The ability to formulate efficacious combinations of antivirulence agents requires an improved understanding of how UPEC deploy these genes. To identify clinically relevant VF combinations, we applied contemporary network analysis and biclustering algorithms to VF profiles from a large, previously characterized inpatient clinical cohort. These mathematical approaches identified four stereotypical VF combinations with distinctive relationships to antibiotic resistance and patient sex that are independent of traditional phylogenetic grouping. Targeting resistance- or sex-associated VFs based upon these contemporary mathematical approaches may facilitate individualized anti-infective therapies and identify synergistic VF combinations in bacterial pathogens.
Abstract:
Monte Carlo integration is firmly established as the basis for most practical realistic image synthesis algorithms because of its flexibility and generality. However, the visual quality of rendered images often suffers from estimator variance, which appears as visually distracting noise. Adaptive sampling and reconstruction algorithms reduce variance by controlling the sampling density and aggregating samples in a reconstruction step, possibly over large image regions. In this paper we survey recent advances in this area. We distinguish between “a priori” methods that analyze the light transport equations and derive sampling rates and reconstruction filters from this analysis, and “a posteriori” methods that apply statistical techniques to sets of samples to drive the adaptive sampling and reconstruction process. They typically estimate the errors of several reconstruction filters, and select the best filter locally to minimize error. We discuss advantages and disadvantages of recent state-of-the-art techniques, and provide visual and quantitative comparisons. Some of these techniques are proving useful in real-world applications, and we aim to provide an overview for practitioners and researchers to assess these approaches. In addition, we discuss directions for potential further improvements.
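The "a posteriori" strategy described above can be illustrated in miniature: apply several candidate reconstruction filters, estimate each one's per-pixel error from the samples themselves, and keep the locally best filter. The 1-D sketch below uses box filters and a deliberately crude variance-plus-bias error proxy as a stand-in for the SURE/MSE-style estimators used by the actual techniques; all signals and parameters are illustrative:

```python
import random

def box_filter(signal, radius):
    """Mean filter with edge-clamped windows."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def select_best_filter(noisy, sample_var, radii=(0, 2, 8)):
    """Per-pixel 'a posteriori' selection among candidate box filters.

    Error proxy per pixel: a variance term (noise averaged over the
    window) plus the squared deviation from the unfiltered value as a
    crude bias estimate.
    """
    candidates = [box_filter(noisy, r) for r in radii]
    result = []
    for i in range(len(noisy)):
        best = None
        for filt, r in zip(candidates, radii):
            var_term = sample_var / (2 * r + 1)    # noise reduction
            bias_term = (filt[i] - noisy[i]) ** 2  # blur penalty
            err = var_term + bias_term
            if best is None or err < best[0]:
                best = (err, filt[i])
        result.append(best[1])
    return result

random.seed(0)
clean = [1.0] * 32 + [5.0] * 32                    # step edge
noisy = [v + random.gauss(0, 0.3) for v in clean]
denoised = select_best_filter(noisy, sample_var=0.09)
```

The intended behavior is the one the survey describes: wide filters win in smooth regions (variance dominates), narrow ones near the edge (bias dominates).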
Abstract:
With the ongoing shift in the computer graphics industry toward Monte Carlo rendering, there is a need for effective, practical noise-reduction techniques that are applicable to a wide range of rendering effects and easily integrated into existing production pipelines. This course surveys recent advances in image-space adaptive sampling and reconstruction algorithms for noise reduction, which have proven very effective at reducing the computational cost of Monte Carlo techniques in practice. These approaches leverage advanced image-filtering techniques with statistical methods for error estimation. They are attractive because they can be integrated easily into conventional Monte Carlo rendering frameworks, they are applicable to most rendering effects, and their computational overhead is modest.
Abstract:
Indoor positioning has attracted considerable attention for decades due to increasing demand for location-based services. Although numerous methods have been proposed for indoor positioning over the past years, it is still challenging to find a convincing solution that combines high positioning accuracy and ease of deployment. Radio-based indoor positioning has emerged as a dominant method due to its ubiquity, especially for WiFi. RSSI (Received Signal Strength Indicator) has been investigated in the area of indoor positioning for decades. However, it is prone to multipath propagation, and hence fingerprinting has become the most commonly used RSSI-based method for indoor positioning. The drawback of fingerprinting is that it requires intensive labour to calibrate the radio map prior to experiments, which makes deployment of the positioning system very time consuming. Using time information as another basis for radio-based indoor positioning is challenged by the need for time synchronization among anchor nodes and for accurate timestamps. Besides radio-based positioning methods, intensive research has been conducted on using inertial sensors for indoor tracking, driven by the fast development of smartphones. However, these methods are normally prone to accumulative errors and might not be available for some applications, such as passive positioning. This thesis focuses on network-based indoor positioning and tracking systems, mainly for passive positioning, which does not require the participation of targets in the positioning process. To achieve high positioning accuracy, we exploit information about radio signals obtained from physical-layer processing, such as timestamps and channel information. The contributions of this thesis can be divided into two parts: time-based positioning and channel-information-based positioning.
First, for time-based indoor positioning (especially with narrow-band signals), we address the challenges of compensating for synchronization offsets among anchor nodes, designing timestamps with high resolution, and developing accurate positioning methods. Second, we work on range-based positioning methods that use channel information to passively locate and track WiFi targets. To reduce deployment effort, we focus on range-based methods, which require much less calibration than fingerprinting. By designing novel enhanced methods for both ranging and positioning (including trilateration for stationary targets and a particle filter for mobile targets), we are able to locate WiFi targets with high accuracy relying solely on radio signals, and our proposed enhanced particle filter significantly outperforms other commonly used range-based positioning algorithms, e.g., a traditional particle filter, an extended Kalman filter, and trilateration. In addition to using radio signals for passive positioning, we propose a second enhanced particle filter for active positioning that fuses inertial sensor and channel information to track indoor targets, achieving higher tracking accuracy than methods relying solely on either radio signals or inertial sensors.
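Trilateration, the baseline range-based method mentioned above, can be sketched with the standard linearization: subtracting one range equation from the others turns the circle equations into a small linear system. A minimal 2-D, three-anchor example with noise-free illustrative ranges (the thesis's enhanced methods are not reproduced here):

```python
from math import hypot

def trilaterate(anchors, ranges):
    """2-D position from three anchors via the standard linearization.

    Subtracting the first range equation from the other two turns
    (x - xi)^2 + (y - yi)^2 = ri^2 into a 2x2 linear system.
    """
    (x0, y0), (x1, y1), (x2, y2) = anchors
    r0, r1, r2 = ranges
    a11, a12 = 2 * (x1 - x0), 2 * (y1 - y0)
    a21, a22 = 2 * (x2 - x0), 2 * (y2 - y0)
    b1 = r0**2 - r1**2 + x1**2 - x0**2 + y1**2 - y0**2
    b2 = r0**2 - r2**2 + x2**2 - x0**2 + y2**2 - y0**2
    det = a11 * a22 - a12 * a21            # assumes non-collinear anchors
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# Illustrative setup: anchors at known positions, target truly at (2, 3).
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
target = (2.0, 3.0)
ranges = [hypot(target[0] - ax, target[1] - ay) for ax, ay in anchors]
x, y = trilaterate(anchors, ranges)
```

With noisy ranges, more than three anchors and a least-squares (or, as in the thesis, particle-filter) formulation are preferred over this exact solve.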
Abstract:
This dissertation develops and tests a comparative effectiveness methodology based on a novel application of Data Envelopment Analysis (DEA) in health studies. The concept of performance tiers (PerT) is introduced as terminology to express a relative risk class for individuals within a peer group, and the PerT calculation is implemented with operations research (DEA) and spatial algorithms. The DEA-PerT methodology discriminates the individual data observations into relative risk classes. The performance of two distance measures, kNN (k-nearest neighbor) and Mahalanobis, was subsequently tested for classifying new entrants into the appropriate tier. The methods were applied to subject data for the 14-year-old cohort in the Project HeartBeat! study. The concepts presented herein represent a paradigm shift in the potential for public health applications to identify and respond to individual health status. The resultant classification scheme provides descriptive, and potentially prescriptive, guidance to assess and implement treatments and strategies to improve the delivery and performance of health systems.
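Assigning a new entrant to the nearest tier by Mahalanobis distance, one of the two measures tested above, can be sketched as follows; the two-dimensional tiers and their statistics are illustrative assumptions, not data from the study:

```python
def mahalanobis_2d(point, mean, cov):
    """Mahalanobis distance of a 2-D point from a group (mean, covariance)."""
    dx, dy = point[0] - mean[0], point[1] - mean[1]
    (a, b), (c, d) = cov
    det = a * d - b * c
    # inverse covariance applied to the difference vector
    ix = ( d * dx - b * dy) / det
    iy = (-c * dx + a * dy) / det
    return (dx * ix + dy * iy) ** 0.5

def classify(point, tiers):
    """Assign a new entrant to the tier with the smallest distance.

    tiers: dict name -> (mean, covariance) of that tier's members.
    """
    return min(tiers, key=lambda t: mahalanobis_2d(point, *tiers[t]))

# Illustrative two-tier example with hypothetical risk-factor statistics.
tiers = {
    "low_risk":  ((1.0, 1.0), ((1.0, 0.0), (0.0, 1.0))),
    "high_risk": ((5.0, 5.0), ((1.0, 0.0), (0.0, 1.0))),
}
tier = classify((1.5, 2.0), tiers)
```

Unlike kNN, which compares against individual members, this uses each tier's covariance, so correlated risk factors are weighted accordingly.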
Abstract:
High Angular Resolution Diffusion Imaging (HARDI) techniques, including Diffusion Spectrum Imaging (DSI), have been proposed to resolve crossing and other complex fiber architecture in human brain white matter. In these methods, directional information of diffusion is inferred from the peaks in the orientation distribution function (ODF). Extensive studies using histology on macaque brain, cat cerebellum, rat hippocampus and optic tracts, and bovine tongue are qualitatively in agreement with DSI-derived ODFs and tractography. However, only two studies in the literature have validated DSI results using physical phantoms, and neither was performed on a clinical MRI scanner. Also, the few studies that optimized DSI in a clinical setting did not include a comparison against physical phantoms. Finally, there is a lack of consensus on the necessary pre- and post-processing steps in DSI, and ground-truth diffusion fiber phantoms are not yet standardized. Therefore, the aims of this dissertation were to design and construct novel diffusion phantoms, employ post-processing techniques in order to systematically validate and optimize DSI-derived fiber ODFs in crossing regions on a clinical 3T MR scanner, and develop user-friendly software for DSI data reconstruction and analysis. Phantoms with two fibers crossing at 90° and 45°, respectively, along with a phantom with three fibers crossing at 60°, were constructed using novel hollow plastic capillaries and novel placeholders. T2-weighted MRI results on these phantoms demonstrated high SNR, homogeneous signal, and absence of air bubbles. Also, a technique to deconvolve the response function of an individual peak from the overall ODF was implemented, in addition to other DSI post-processing steps. This technique greatly improved the angular resolution of otherwise unresolvable peaks in a crossing-fiber ODF.
The effects of DSI acquisition parameters and SNR on the resultant angular accuracy of DSI on the clinical scanner were studied and quantified using the developed phantoms. With high angular direction sampling and reasonable levels of SNR, quantification of the crossing regions in the 90°, 45° and 60° phantoms resulted in successful detection of angular information with mean ± SD of 86.93°±2.65°, 44.61°±1.6° and 60.03°±2.21°, respectively, while simultaneously enhancing the ODFs in regions containing single fibers. To demonstrate the applicability of these validated methodologies, improvements in ODFs and fiber tracking in known crossing-fiber regions of normal human subjects were shown, and an in-house MATLAB software package with an easy-to-use graphical user interface was developed to streamline DSI data reconstruction and post-processing. In conclusion, the phantoms developed in this dissertation offer a means of providing ground truth for validating the reconstruction and tractography algorithms of various diffusion models (including DSI). Also, the deconvolution methodology (when applied as an additional DSI post-processing step) significantly improved the angular accuracy of the ODFs obtained from DSI, and should be applicable to ODFs obtained from other high angular resolution diffusion imaging techniques.