935 results for Simple overlap model


Relevance: 30.00%

Abstract:

Bacterial virulence can only be assessed by confronting bacteria with a host. Here, we present a new, simple assay for evaluating Aeromonas virulence that uses Dictyostelium amoebae as an alternative host model. The assay can be adjusted to assess the virulence of widely differing Aeromonas species.

Relevance: 30.00%

Abstract:

Phase space (PS) data for 6 and 15 MV beams, produced with the Monte Carlo code GEANT, were used to define several simple photon beam models. To create the PS data, the energy of the electrons hitting the target was tuned so that the calculated depth dose data matched measurements. The modeling process used the full PS information within the geometrical boundaries of the beam, including all scattered radiation from the accelerator head; scattered radiation outside these boundaries was neglected. Photons and electrons were assumed to be radiated from point sources. Four models were investigated, differing in how the energies and locations of beam particles in the output plane are determined. Depth dose curves, profiles, and relative output factors were calculated with these models for six field sizes from 5 × 5 to 40 × 40 cm² and compared to measurements. Model 1 uses a photon energy spectrum independent of location in the PS plane and a constant photon fluence in that plane. Model 2 takes into account the spatial particle fluence distribution in the PS plane. Model 3 again uses a constant fluence, but with a photon energy spectrum that depends on the off-axis position. Model 4, finally, uses both the spatial particle fluence distribution and off-axis-dependent photon energy spectra in the PS plane. Depth dose curves and profiles for field sizes up to 10 × 10 cm² were not sensitive to the choice of model. Model 4 gave good agreement between measured and calculated depth dose curves and profiles for all field sizes, whereas models 1-3 showed deviations that grew with field size. The profiles of models 2 and 3 deviated strongly because these models overestimate and underestimate, respectively, the energy fluence at large off-axis distances. Only model 4 produced relative output factors consistent with measurements.
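
A minimal sketch (Python/NumPy) of how a beam photon could be sampled under the model 4 description above, i.e. drawing a position from a spatial fluence distribution in the PS plane and then an energy from an off-axis-dependent spectrum. The radial binning, fluence profile, and spectra are illustrative placeholders, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model-4 inputs: a radial photon fluence profile and
# off-axis-dependent energy spectra, tabulated over radial bins of the
# phase-space (PS) plane (radial symmetry assumed for simplicity).
r_edges = np.linspace(0.0, 20.0, 21)                 # cm, radial bin edges
r_mid = 0.5 * (r_edges[:-1] + r_edges[1:])           # cm, bin centres
fluence = np.exp(-0.5 * (r_mid / 12.0) ** 2)         # relative fluence per bin
e_bins = np.linspace(0.25, 6.0, 24)                  # MeV, spectrum bin centres
# One normalized spectrum per radial bin (softer off axis, purely illustrative).
spectra = np.array([np.exp(-e_bins / (1.8 - 0.03 * r)) for r in r_mid])
spectra /= spectra.sum(axis=1, keepdims=True)

def sample_photon():
    """Sample one photon (off-axis radius, energy) following the model-4 idea."""
    # Position: weight each radial bin by fluence times annulus area (~ r * dr).
    w = fluence * r_mid
    i = rng.choice(len(r_mid), p=w / w.sum())
    r = rng.uniform(r_edges[i], r_edges[i + 1])
    # Energy: draw from the spectrum belonging to that off-axis position.
    e = rng.choice(e_bins, p=spectra[i])
    return r, e

print(sample_photon())
```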

Relevance: 30.00%

Abstract:

Experimental models of tricuspid regurgitation (TR) are needed to study the percutaneous placement of prosthetic atrioventricular valves. The purpose of this study was to develop a simple and reproducible percutaneous experimental model for creating tricuspid regurgitation. Tricuspid regurgitation was successfully created through papillary muscle avulsion using a guide-wire loop in seven sheep; the regurgitation was documented on right ventricular angiograms and by a significant increase in heart rate and right atrial pressure. Acute onset of tricuspid regurgitation was poorly tolerated in one animal, which died. Autopsy examinations showed avulsion of one papillary muscle in four animals and of two papillary muscles in three animals.

Relevance: 30.00%

Abstract:

Turbulence affects traditional free-space optical communication by causing speckle to appear in the received beam profile. This speckle arises from changes in the refractive index of the atmosphere caused by fluctuations in temperature and pressure, which render the medium inhomogeneous. The Gaussian-Schell model of partial coherence has been suggested as a means of mitigating these atmospheric inhomogeneities on the transmission side. This dissertation analyzed the Gaussian-Schell model of partial coherence: it verified the model in the far field, investigated the number of independent phase control screens necessary to approach the ideal Gaussian-Schell model, and showed experimentally that the Gaussian-Schell model of partial coherence is achievable in the far field using a liquid-crystal spatial light modulator. A method for optimizing the statistical properties of the Gaussian-Schell model was developed to maximize the coherence of the field while ensuring that it does not exhibit the same statistics as a fully coherent source. Finally, a technique was developed to estimate the minimum spatial resolution a spatial light modulator needs in order to propagate the Gaussian-Schell model effectively through a range of atmospheric turbulence strengths. This work showed that, regardless of turbulence strength or receiver aperture, transmitting the Gaussian-Schell model of partial coherence instead of a fully coherent source reduces the intensity fluctuations of the received field. By measuring the variance of the intensity fluctuations and the received mean, it is shown through the scintillation index that using the Gaussian-Schell model of partial coherence is a simple and straightforward alternative to traditional adaptive optics for mitigating atmospheric turbulence in free-space optical communications.
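
The final comparison rests on the scintillation index, σ_I² = ⟨I²⟩/⟨I⟩² − 1, estimated from the mean and variance of the received intensity. A minimal sketch (Python/NumPy) of that estimate, using synthetic intensity records as stand-ins for measured coherent and Gaussian-Schell-model beams:

```python
import numpy as np

def scintillation_index(intensity):
    """sigma_I^2 = <I^2>/<I>^2 - 1, i.e. the normalized intensity variance."""
    intensity = np.asarray(intensity, dtype=float)
    return intensity.var() / intensity.mean() ** 2

# Synthetic stand-ins for received intensity records (arbitrary units);
# the GSM record is given smaller fluctuations purely for illustration.
rng = np.random.default_rng(1)
coherent = rng.gamma(shape=2.0, scale=0.5, size=10_000)
gsm = rng.gamma(shape=8.0, scale=0.125, size=10_000)

print("coherent source:", scintillation_index(coherent))
print("GSM source:     ", scintillation_index(gsm))
```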

Relevance: 30.00%

Abstract:

OBJECTIVES: To analyze computer-assisted diagnostics and virtual implant planning and to evaluate the indication for template-guided flapless surgery and immediate loading in the rehabilitation of the edentulous maxilla. MATERIALS AND METHODS: Forty patients with an edentulous maxilla were selected for this study. The three-dimensional analysis and virtual implant planning were performed with the NobelGuide software program (Nobel Biocare, Göteborg, Sweden). Prior to computed tomography, aesthetic and functional aspects were checked clinically. Either a well-fitting denture or an optimized prosthetic setup was used and then converted to a radiographic template, allowing a computer-guided analysis of the jaw together with the prosthesis. Accordingly, the best implant position was determined in relation to the bone structure and the prospective tooth position. For all jaws, the hypothetical indication for (1) four implants with a bar overdenture and (2) six implants with a simple fixed prosthesis was planned. The planning of the optimized implant position was then analyzed as follows: the number of implants that could be placed in a sufficient quantity of bone was calculated; additional surgical procedures (guided bone regeneration, sinus floor elevation) that would be necessary because of reduced bone quality and quantity were identified; and the indication for template-guided, flapless surgery or an immediate loading protocol was evaluated. RESULTS: Model (a) - bar overdentures: for 28 patients (70%), all four implants could be placed in sufficient bone (112 implants in total), so a full, flapless procedure could be suggested. For six patients (15%), sufficient bone was not available for any of their planned implants. The remaining six patients exhibited a combination of sufficient and insufficient bone sites. Model (b) - simple fixed prosthesis: for 12 patients (30%), all six implants could be placed in sufficient bone (72 implants in total), so a full, flapless procedure could be suggested. For seven patients (17%), sufficient bone was not available for any of their planned implants. The remaining 21 patients exhibited a combination of sufficient and insufficient bone sites. DISCUSSION: In the maxilla, advanced atrophy is often observed, and implant placement becomes difficult or impossible. Thus, flapless surgery or an immediate loading protocol can be performed only in a selected number of patients. Nevertheless, the use of a computer program for prosthetically driven implant planning is highly efficient and safe. The three-dimensional view of the maxilla allows the determination of the best implant position, the optimization of the implant axis, and the definition of the best surgical and prosthetic solution for the patient. Thus, a protocol that combines a computer-guided technique with conventional surgical procedures becomes a promising option, which needs to be further evaluated and improved.

Relevance: 30.00%

Abstract:

BACKGROUND Many preschool children have wheeze or cough, but only some have asthma later. Existing prediction tools are difficult to apply in clinical practice or exhibit methodological weaknesses. OBJECTIVE We sought to develop a simple and robust tool for predicting asthma at school age in preschool children with wheeze or cough. METHODS From a population-based cohort in Leicestershire, United Kingdom, we included 1- to 3-year-old subjects seeing a doctor for wheeze or cough and assessed the prevalence of asthma 5 years later. We considered only noninvasive predictors that are easy to assess in primary care: demographic and perinatal data, eczema, upper and lower respiratory tract symptoms, and family history of atopy. We developed a model using logistic regression, avoided overfitting with the least absolute shrinkage and selection operator penalty, and then simplified it to a practical tool. We performed internal validation and assessed its predictive performance using the scaled Brier score and the area under the receiver operating characteristic curve. RESULTS Of 1226 symptomatic children with follow-up information, 345 (28%) had asthma 5 years later. The tool consists of 10 predictors yielding a total score between 0 and 15: sex, age, wheeze without colds, wheeze frequency, activity disturbance, shortness of breath, exercise-related and aeroallergen-related wheeze/cough, eczema, and parental history of asthma/bronchitis. The scaled Brier scores for the internally validated model and tool were 0.20 and 0.16, and the areas under the receiver operating characteristic curves were 0.76 and 0.74, respectively. CONCLUSION This tool represents a simple, low-cost, and noninvasive method to predict the risk of later asthma in symptomatic preschool children, which is ready to be tested in other populations.
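
A minimal sketch of the modelling steps named above, using scikit-learn as a stand-in: an L1-penalised logistic regression in place of the LASSO-penalised model, the Brier score scaled against a prevalence-only reference, and the area under the ROC curve. The data and predictors are placeholders, not the Leicestershire cohort variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.model_selection import train_test_split

# Placeholder data: 10 ordinal predictors (0-2) and a binary later-asthma outcome.
rng = np.random.default_rng(2)
X = rng.integers(0, 3, size=(1226, 10)).astype(float)
logit = -2.5 + X @ rng.normal(0.15, 0.1, size=10)
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# The L1 penalty plays the role of the LASSO shrinkage mentioned in the abstract.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
model.fit(X_tr, y_tr)
p = model.predict_proba(X_te)[:, 1]

brier = brier_score_loss(y_te, p)
brier_ref = brier_score_loss(y_te, np.full_like(p, y_tr.mean()))  # prevalence-only reference
scaled_brier = 1.0 - brier / brier_ref
print(f"scaled Brier: {scaled_brier:.2f}, AUC: {roc_auc_score(y_te, p):.2f}")
```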

Relevance: 30.00%

Abstract:

The development of susceptibility maps for debris flows is of primary importance because of population pressure in hazardous zones. However, hazard assessment by process-based modelling at a regional scale is difficult due to the complex nature of the phenomenon, the variability of local controlling factors, and the uncertainty in modelling parameters. A regional assessment must therefore rely on a simplified approach that is not highly parameter-dependent and that can provide zonation with minimum data requirements. A distributed empirical model has thus been developed for regional susceptibility assessments using essentially a digital elevation model (DEM). The model is called Flow-R, for Flow path assessment of gravitational hazards at a Regional scale (available free of charge at http://www.flow-r.org), and has been successfully applied to case studies in various countries with variable data quality. It provides a substantial basis for a preliminary susceptibility assessment at a regional scale. The model was also found relevant for assessing other natural hazards such as rockfall, snow avalanches and floods. The model allows for automatic source area delineation, given user criteria, and for the assessment of the propagation extent based on various spreading algorithms and simple frictional laws. We developed a new spreading algorithm, an improved version of Holmgren's direction algorithm, that is less sensitive to small variations of the DEM, avoids over-channelization, and thus produces more realistic extents. The choice of datasets and algorithms is left to the user, which makes the model adaptable to various applications and levels of dataset availability. Amongst the possible datasets, the DEM is the only one that is strictly needed for both the source area delineation and the propagation assessment; its quality is of major importance for the accuracy of the results. We consider a 10 m DEM resolution a good compromise between processing time and quality of results; however, valuable results have still been obtained from lower-quality DEMs with 25 m resolution.
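
A minimal sketch of the kind of multiple-flow-direction spreading the model builds on: Holmgren's algorithm distributes flow to the downslope neighbours of a DEM cell in proportion to (tan β)^x, and the modified version described above additionally changes the height of the central cell (here by a parameter dh) before computing the slopes, which damps sensitivity to small DEM irregularities. The parameter names and values are assumptions for illustration, not the exact Flow-R implementation.

```python
import numpy as np

def holmgren_proportions(dem, row, col, cellsize, x=4.0, dh=2.0):
    """Flow proportions from cell (row, col) to its 8 neighbours.

    Proportions follow Holmgren's rule, p_i ~ (tan beta_i)**x over downslope
    neighbours, with the central cell raised by dh (modified variant).
    """
    z0 = dem[row, col] + dh            # raised central elevation
    props = np.zeros(8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    for k, (dr, dc) in enumerate(offsets):
        r, c = row + dr, col + dc
        if not (0 <= r < dem.shape[0] and 0 <= c < dem.shape[1]):
            continue
        dist = cellsize * np.hypot(dr, dc)   # diagonal neighbours are farther
        tan_beta = (z0 - dem[r, c]) / dist
        if tan_beta > 0:                     # only downslope directions receive flow
            props[k] = tan_beta ** x
    total = props.sum()
    return props / total if total > 0 else props

dem = np.array([[10., 9., 8.],
                [9.,  8., 6.],
                [8.,  7., 5.]])
print(holmgren_proportions(dem, 1, 1, cellsize=10.0))
```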

Relevance: 30.00%

Abstract:

Primary loss of photoreceptors caused by diseases such as retinitis pigmentosa is one of the main causes of blindness worldwide. To study such diseases, rodent models of N-methyl-N-nitrosourea (MNU)-induced retinal degeneration are widely used. As zebrafish (Danio rerio) are a popular model system for visual research that offers persistent retinal neurogenesis throughout life and retinal regeneration after severe damage, we have established a novel MNU-induced model in this species. Histological staining for apoptosis (TUNEL), proliferation (PCNA), activated Müller glial cells (GFAP), rods (rhodopsin), and cones (zpr-1) was performed. A characteristic sequence of retinal changes was found. First, apoptosis of rod photoreceptors occurred 3 days after MNU treatment and resulted in a loss of rod cells. Consequently, proliferation started in the inner nuclear layer (INL) with a maximum at day 8, whereas in the outer nuclear layer (ONL) a maximum was observed at day 15. The proliferation in the ONL persisted until the end of the follow-up (3 months), interestingly without ongoing rod cell death. We demonstrate that rod degeneration is a sufficient trigger for the induction of Müller glial cell activation, even if only a minimal number of rod cells undergo cell death. In conclusion, MNU treatment provides a simple and feasible model of rod photoreceptor degeneration in the zebrafish that offers new insights into rod regeneration.

Relevance: 30.00%

Abstract:

OBJECTIVES: This study sought to assess the vascular response of overlapping Absorb stents compared with overlapping newer-generation everolimus-eluting metallic platform stents (Xience V [XV]) in a porcine coronary artery model. BACKGROUND: The everolimus-eluting bioresorbable vascular scaffold (Absorb) is a novel approach to treating coronary lesions. A persistent inflammatory response, fibrin deposition, and delayed endothelialization have been reported with overlapping first-generation drug-eluting stents. METHODS: Forty-one overlapping Absorb and overlapping Xience V (XV) devices (3.0 × 12 mm) were implanted in the main coronary arteries of 17 nonatherosclerotic pigs with 10% overstretch. Implanted coronary arteries were evaluated by optical coherence tomography (OCT) at 28 days (Absorb n = 11, XV n = 7) and 90 days (Absorb n = 11, XV n = 8), with immediate histological evaluation following euthanasia at the same time points. One animal from each time point was evaluated with scanning electron microscopy alone. A total of 1,407 cross sections were analyzed by OCT and 148 cross sections analyzed histologically. RESULTS: At 28 days in the overlap, OCT analyses indicated 80.1% of Absorb struts and 99.4% of XV struts to be covered (p < 0.0001), corresponding to histological observations of struts with cellular coverage of 75.4% and 99.6%, respectively (p < 0.001). Uncovered struts were almost exclusively related to the presence of "stacked" Absorb struts, that is, with a direct overlay configuration. At 90 days, overlapping Absorb and overlapping XV struts demonstrated >99% strut coverage by OCT and histology, with no evidence of a significant inflammatory process, and comparable % volume obstructions. CONCLUSIONS: In porcine coronary arteries implanted with overlapping Absorb or overlapping XV struts, strut coverage is delayed at 28 days in overlapping Absorb, dependent on the overlay configuration of the thicker Absorb struts. At 90 days, both overlapping Absorb and overlapping XV have comparable strut coverage. The implications of increased strut thickness may have important clinical and design considerations for bioresorbable platforms.

Relevance: 30.00%

Abstract:

PURPOSE: Segmentation of the proximal femur in digital antero-posterior (AP) pelvic radiographs is required to create a three-dimensional model of the hip joint for use in planning and treatment. However, manually extracting the femoral contour is tedious and prone to subjective bias, while automatic segmentation must accommodate poor image quality, overlap of anatomical structures, and femur deformity. A new method was developed for femur segmentation in AP pelvic radiographs. METHODS: Using manual annotations on 100 AP pelvic radiographs, a statistical shape model (SSM) and a statistical appearance model (SAM) of the femur contour were constructed. The SSM and SAM are used to segment new AP pelvic radiographs in a three-stage approach. At initialization, the mean shape of the SSM is coarsely registered to the femur in the AP radiograph through a scaled rigid registration. The Mahalanobis distance defined on the SAM is then employed as the search criterion for each suggested landmark location, and dynamic programming is used to eliminate ambiguities. After all landmarks are assigned, a regularized non-rigid registration deforms the current mean shape of the SSM to produce a new segmentation of the proximal femur. The second and third stages are executed iteratively until convergence. RESULTS: A set of 100 clinical AP pelvic radiographs (not used for training) was evaluated. The mean segmentation error was [Formula: see text], requiring [Formula: see text] s per case when implemented in Matlab. The influence of the initialization on the segmentation results was tested by six clinicians and showed no significant difference. CONCLUSIONS: A fast, robust and accurate method for femur segmentation in digital AP pelvic radiographs was developed by combining an SSM and a SAM with dynamic programming. The method can be extended to the segmentation of other bony structures such as the pelvis.
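
A minimal sketch of the landmark-search criterion used in the second stage: each candidate position along the current contour estimate is scored by the Mahalanobis distance of its local appearance profile to the statistical appearance model, and the lowest-distance candidate is kept (the dynamic-programming step that resolves ambiguities between neighbouring landmarks is omitted). The profile length, covariance, and candidates are placeholders, not the authors' implementation.

```python
import numpy as np

def mahalanobis(profile, mean, cov_inv):
    """Mahalanobis distance of an appearance profile to the SAM for one landmark."""
    d = profile - mean
    return float(d @ cov_inv @ d)

def best_candidate(candidates, mean, cov):
    """Pick the candidate profile closest to the appearance model."""
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))  # regularized inverse
    scores = [mahalanobis(p, mean, cov_inv) for p in candidates]
    return int(np.argmin(scores)), scores

# Placeholder SAM for one landmark (profile length 9) and 5 candidate profiles
# sampled along the normal to the current contour estimate.
rng = np.random.default_rng(3)
mean = rng.normal(size=9)
A = rng.normal(size=(9, 9))
cov = A @ A.T / 9 + np.eye(9)            # a valid (positive definite) covariance
candidates = mean + rng.normal(scale=[0.2, 1.0, 2.0, 0.5, 3.0], size=(9, 5)).T

idx, scores = best_candidate(candidates, mean, cov)
print("chosen candidate:", idx)
```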

Relevance: 30.00%

Abstract:

The bedrock topography beneath the Quaternary cover provides an important archive for identifying erosional processes during past glaciations. Here, we combined stratigraphic investigations of more than 40,000 boreholes with published data to generate a bedrock topography model for the entire plateau north of the Swiss Alps, including the valleys within the mountain belt. We compared the bedrock map with data on the pattern of erosional resistance of Alpine rocks to identify how the lithologic architecture controls the location of overdeepenings. We additionally used the bedrock topography map as a basis to calculate the erosional potential of the Alpine glaciers, which was related to the thickness of the ice during the Last Glacial Maximum (LGM). We used these calculations to interpret how glaciers, aided by pressurized subglacial meltwater, might have shaped the bedrock topography of the Alps. We found that the erosional resistance of the bedrock lithology largely explains where overdeepenings in the Alpine valleys and on the plateau occur. In particular, in the Alpine valleys, the locations of overdeepenings largely overlap with areas where the underlying bedrock has a low erosional resistance or where it was shattered by faults. We also found that two end-member scenarios of erosion, glacial abrasion and plucking in the Alpine valleys and dissection by subglacial meltwater on the plateau, may be adequate to explain the pattern of overdeepenings in the Alpine realm. This most likely points to topographic controls on glacial scouring: in the Alps, the flow of the LGM and previous glaciers was constrained by valley flanks, while ice flow was mostly divergent on the plateau, where valley borders are absent. We suggest that these differences in landscape conditioning might have contributed to the contrasts in the formation of overdeepenings between the Alpine valleys and the plateau.

Relevance: 30.00%

Abstract:

Corynebacterium diphtheriae is the causative agent of cutaneous and pharyngeal diphtheria in humans. While lethality is certainly caused by diphtheria toxin, corynebacterial colonization may primarily require proteinaceous fibers called pili, which mediate adherence to specific tissues. The type strain of C. diphtheriae possesses three distinct pilus structures, namely the SpaA-, SpaD-, and SpaH-type pili, which are encoded by three distinct pilus gene clusters. Each pilus is assembled onto the bacterial peptidoglycan by a specific transpeptidase enzyme called sortase. Although SpaA pili have been shown to be specific for pharyngeal cells in vitro, little is known about the functions of the three pilus types in bacterial pathogenesis, mainly because in vivo models of corynebacterial infection are lacking. Because mice do not have functional receptors for diphtheria toxin, in this study I use Caenorhabditis elegans, rather than a mouse model, as a model host for C. diphtheriae. A simple C. elegans model would help determine the specific role of each pilus type, and the literature suggests that C. elegans infection models can be used to study a variety of bacterial species, giving insight into bacterial virulence and host-pathogen interactions. My study examines the hypothesis that pili and toxin are major virulence determinants of C. diphtheriae in the C. elegans model host.

Relevance: 30.00%

Abstract:

Global wetlands are believed to be climate sensitive and are the largest natural emitters of methane (CH4). Increased wetland CH4 emissions could act as a positive feedback to future warming. The Wetland and Wetland CH4 Inter-comparison of Models Project (WETCHIMP) investigated our present ability to simulate large-scale wetland characteristics and corresponding CH4 emissions. To ensure inter-comparability, we used a common experimental protocol driving all models with the same climate and carbon dioxide (CO2) forcing datasets. The WETCHIMP experiments were conducted for model equilibrium states as well as transient simulations covering the last century. Sensitivity experiments investigated model response to changes in selected forcing inputs (precipitation, temperature, and atmospheric CO2 concentration). Ten models participated, covering the spectrum from simple to relatively complex, including models tailored either for regional or global simulations. The models also varied in their methods to calculate wetland size and location, with some models simulating wetland area prognostically, while others relied on remotely sensed inundation datasets or an approach intermediate between the two. Four major conclusions emerged from the project. First, the models disagree extensively in their simulations of wetland areal extent and CH4 emissions, in both space and time. Simple metrics of wetland area, such as the latitudinal gradient, show large variability, principally between models that use inundation dataset information and those that determine wetland area independently. Agreement between the models improves for zonally summed CH4 emissions, but large variation between the models remains. For annual global CH4 emissions, the models vary by ±40% of the all-model mean (190 Tg CH4 yr−1). Second, all models show a strong positive response to increased atmospheric CO2 concentrations (857 ppm) in both CH4 emissions and wetland area. In response to increased global temperatures (+3.4 °C, globally and spatially uniform), the models on average decreased wetland area and CH4 fluxes, primarily in the tropics, but the magnitude and sign of the response varied greatly. Models were least sensitive to increased global precipitation (+3.9%, globally and spatially uniform), with a consistent small positive response in CH4 fluxes and wetland area. Results from the 20th-century transient simulation show that interactions between climate forcings could have strong non-linear effects. Third, we presently lack wetland methane observation datasets adequate to evaluate model fluxes at a spatial scale comparable to model grid cells (commonly 0.5°). This limitation severely restricts our ability to model global wetland CH4 emissions with confidence. Our simulated wetland extents are also difficult to evaluate because of extensive disagreements between wetland mapping and remotely sensed inundation datasets. Fourth, the large range in predicted CH4 emission rates leads to the conclusion that there is both substantial parameter and structural uncertainty in large-scale CH4 emission models, even after uncertainties in wetland areas are accounted for.
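
A minimal sketch of the spread metric quoted above (each model's deviation from the all-model mean annual emission, expressed as a percentage of that mean), using placeholder numbers rather than the actual WETCHIMP model outputs:

```python
import numpy as np

# Hypothetical annual global wetland CH4 emissions from ten models (Tg CH4 / yr);
# the values are placeholders, not WETCHIMP results.
emissions = np.array([120., 145., 160., 175., 190., 200., 215., 230., 255., 210.])

mean = emissions.mean()
spread = (emissions - mean) / mean * 100.0   # deviation of each model, % of the mean
print(f"all-model mean: {mean:.0f} Tg CH4/yr")
print(f"range: {spread.min():+.0f}% to {spread.max():+.0f}% of the mean")
```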

Relevance: 30.00%

Abstract:

Complete NotI, SfiI, XbaI and BlnI cleavage maps of Escherichia coli K-12 strain MG1655 were constructed. Techniques used included CHEF pulsed-field gel electrophoresis; transposon mutagenesis; fragment hybridization to the ordered λ library of Kohara et al.; fragment and cosmid hybridization to Southern blots; correlation of fragments and cleavage sites with EcoMap, a sequence-modified version of the genomic restriction map of Kohara et al.; and correlation of cleavage sites with DNA sequence databases. In all, 105 restriction sites were mapped and correlated with the EcoMap coordinate system. NotI, SfiI, XbaI and BlnI restriction patterns of five commonly used E. coli K-12 strains were compared to those of MG1655. The variability between strains, some of which are separated by numerous steps of mutagenic treatment, is readily detectable by pulsed-field gel electrophoresis. A model is presented to account for the differences between the strains on the basis of simple insertions, deletions, and in one case an inversion. Insertions and deletions ranged in size from 1 kb to 86 kb. Several of the larger features have previously been characterized, and some of the smaller rearrangements can potentially account for previously reported genetic features of these strains. Some aspects of the frequency and distribution of NotI, SfiI, XbaI and BlnI cleavage sites were analyzed using a method based on Markov chain theory. Overlaps of Dam and Dcm methylase sites with XbaI and SfiI cleavage sites were examined. The one XbaI-Dam overlap in the database is in accord with the expected frequency of this overlap. Certain types of SfiI-Dcm overlaps are overrepresented; of the four subtypes of SfiI-Dcm overlap, only one has a partial inhibitory effect on the activity of SfiI. Recognition sites for all four enzymes are rarer than expected based on oligonucleotide frequency data, and this effect is much stronger for XbaI and BlnI than for NotI and SfiI. The latter two enzyme sites are rare mainly due to apparent negative selection against GGCC (both) and CGGCCG (NotI). The former two enzyme sites are rare mainly due to effects of the VSP repair system on certain di-, tri- and tetranucleotides, most notably CTAG. Models are proposed to explain several of the anomalies of oligonucleotide distribution in E. coli, and the biological significance of the systems that produce these anomalies is discussed.
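
A minimal sketch of the kind of Markov-chain expectation referred to above: the expected count of a recognition sequence under a first-order Markov model estimated from mononucleotide and dinucleotide frequencies, which can then be compared with the observed count. The genome here is a random placeholder, not the E. coli sequence.

```python
import random
from collections import Counter

def expected_site_count(genome, site):
    """Expected occurrences of `site` under a first-order Markov model of `genome`.

    E[count] ~ (L - k + 1) * P(s1) * prod_i P(s_{i+1} | s_i), with mono- and
    dinucleotide probabilities estimated from the genome itself.
    """
    L, k = len(genome), len(site)
    mono = Counter(genome)
    di = Counter(genome[i:i + 2] for i in range(L - 1))
    p = mono[site[0]] / L
    for a, b in zip(site, site[1:]):
        p *= di[a + b] / mono[a]          # conditional probability P(b | a)
    return (L - k + 1) * p

# Made-up genome in place of the E. coli sequence; NotI and XbaI recognition sites.
random.seed(4)
genome = "".join(random.choice("ACGT") for _ in range(200_000))
for enzyme, site in [("NotI", "GCGGCCGC"), ("XbaI", "TCTAGA")]:
    print(enzyme, "observed:", genome.count(site),
          "expected:", round(expected_site_count(genome, site), 1))
```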

Relevance: 30.00%

Abstract:

This study describes the patterns of occurrence of amyotrophic lateral sclerosis (ALS) and parkinsonism-dementia complex (PDC) of Guam during 1950-1989. Both ALS and PDC occur with high frequency among the indigenous Chamorro population, first recognized in the early 1950s. Reports in the early 1980s indicated that both ALS and PDC were disappearing, owing to a purported reduction in exposure to harmful environmental factors as a result of the dramatic changes in lifestyle that took place after World War II. However, this study provides compelling evidence that ALS and PDC have not disappeared on Guam and that rates for both are higher during 1980-1989 than previously reported. The patterns of occurrence for ALS and PDC overlap in most respects: (1) incidence and mortality are decreasing; (2) median age at onset is increasing; (3) males are at increased risk for developing disease; (4) risk is higher for those residing in the south compared with the non-south; and (5) age-specific incidence is decreasing over time except in the oldest age groups. Age-specific incidence of ALS and PDC, separately and together, is generally higher for cohorts born before 1920 than for those born after 1920. A significant birth cohort effect on the incidence of PDC was found for the 1906-1915 birth cohort, but not for ALS or for ALS and PDC together. Whether a cohort effect, a period effect, or both are associated with the incidence of ALS and PDC cannot be determined from the data currently available and will require additional follow-up of individuals born after 1920. The epidemiological data amassed over this 40-year period support an environmental exposure model for disease occurrence rather than a simple genetic or infectious disease model. Whether neurodegenerative disease in this population results from a single exposure or is explained by a multifactorial model, such as a genetic predisposition with some environmental interaction, is yet to be determined. Descriptive studies such as this can provide clues concerning the timing and location of potential adverse exposures but cannot determine etiology, underscoring the urgent need for analytic studies of ALS and PDC to further investigate existing etiologic hypotheses and to test new ones.