887 results for Level-Set Method
Abstract:
The aim was to propose a strategy for finding reasonable compromises between image noise and dose as a function of patient weight. Weighted CT dose index (CTDI(w)) was measured on a multidetector-row CT unit using CTDI test objects of 16, 24 and 32 cm in diameter at 80, 100, 120 and 140 kV. These test objects were then scanned in helical mode using a wide range of tube currents and voltages with a reconstructed slice thickness of 5 mm. For each set of acquisition parameters, image noise was measured and the Rose model observer was used to test two strategies for proposing a reasonable compromise between dose and low-contrast detection performance: (1) the use of a unique noise level for all test object diameters, and (2) the use of a unique dose efficacy level, defined as the noise reduction per unit dose. Published data were used to define four weight classes and an acquisition protocol was proposed for each class. The protocols have been applied in clinical routine for more than one year. CTDI(vol) values of 6.7, 9.4, 15.9 and 24.5 mGy were proposed for the weight classes 2.5-5, 5-15, 15-30 and 30-50 kg, with image noise levels in the range of 10-15 HU. The proposed method allows patient dose and image noise to be controlled in such a way that dose reduction does not impair the detection of low-contrast lesions. The proposed values correspond to high-quality images and can be reduced if only high-contrast organs are assessed.
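As a rough illustration of the noise-dose reasoning above (not the paper's actual protocol), the sketch below assumes the usual square-root relationship between CT noise and dose, sigma = a / sqrt(CTDIvol), and scores low-contrast detectability with the Rose criterion SNR = contrast x sqrt(area) / sigma; the constant a, the lesion contrast and the lesion diameter are made-up values.

```python
import numpy as np

# Illustrative sketch (not the paper's protocol): CT quantum noise is commonly
# modelled as sigma = a / sqrt(CTDIvol), and the Rose model scores low-contrast
# detectability as SNR = contrast * sqrt(lesion_area) / sigma, with detection
# assumed possible when SNR exceeds roughly 5. The constant `a` and the lesion
# parameters below are invented for illustration.

def noise_hu(ctdi_vol_mgy, a=40.0):
    """Image noise (HU) for a given CTDIvol, assuming sigma = a / sqrt(dose)."""
    return a / np.sqrt(ctdi_vol_mgy)

def rose_snr(contrast_hu, lesion_diameter_mm, sigma_hu):
    """Rose-model signal-to-noise ratio for a circular low-contrast lesion."""
    area = np.pi * (lesion_diameter_mm / 2.0) ** 2
    return contrast_hu * np.sqrt(area) / sigma_hu

def dose_for_target_noise(target_noise_hu, a=40.0):
    """Strategy (1): dose needed to reach a unique target noise level."""
    return (a / target_noise_hu) ** 2

for ctdi in (6.7, 9.4, 15.9, 24.5):          # proposed CTDIvol values (mGy)
    sigma = noise_hu(ctdi)
    snr = rose_snr(contrast_hu=10.0, lesion_diameter_mm=5.0, sigma_hu=sigma)
    print(f"CTDIvol {ctdi:5.1f} mGy -> noise {sigma:4.1f} HU, Rose SNR {snr:4.1f}")
```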
Abstract:
The present work aims at characterising the faunal composition of drosophilids in forest areas of southern Brazil; in addition, estimation of species richness for this fauna is briefly discussed. Sampling was carried out in three well-preserved areas of the Atlantic Rain Forest in the State of Santa Catarina. In this study, 136,931 specimens were captured and 96.6% of them were identified to species level. The observed species richness (153 species) is the largest registered in faunal inventories conducted in Brazil. Sixty-three of the captured species did not fit the available descriptions, and we believe that most of them are undescribed species. The incidence-based estimators tended to give the largest richness estimates, while the abundance-based estimators gave the smallest. These estimators suggest the presence of 172.28 to 220.65 species in the studied area; based on these values, 69.35 to 88.81% of the expected species richness was sampled. We suggest that the large richness recorded in this study is a consequence of the large sampling effort, the capture method, recent advances in the taxonomy of drosophilids, the high preservation level and large extent of the sampled fragment, and the high complexity of the Atlantic Rain Forest. Finally, our data set suggests that the use of richness estimators for drosophilid assemblages is useful but requires caution.
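The abstract does not name the richness estimators used; as a hedged example of the two families it mentions, the sketch below implements Chao1 (abundance-based) and Chao2 (incidence-based) on toy data, which may or may not match the estimators actually applied.

```python
from collections import Counter

def chao1(abundances):
    """Abundance-based Chao1 estimator: S_obs + F1^2 / (2*F2)."""
    s_obs = sum(1 for n in abundances if n > 0)
    f1 = sum(1 for n in abundances if n == 1)   # singletons
    f2 = sum(1 for n in abundances if n == 2)   # doubletons
    if f2 == 0:
        # bias-corrected form, used when no doubletons are present
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 ** 2 / (2.0 * f2)

def chao2(incidence_by_sample):
    """Incidence-based Chao2: same formula using presence counts across samples."""
    counts = Counter()
    for sample in incidence_by_sample:          # each sample is a set of species
        counts.update(sample)
    q1 = sum(1 for c in counts.values() if c == 1)
    q2 = sum(1 for c in counts.values() if c == 2)
    s_obs = len(counts)
    if q2 == 0:
        return s_obs + q1 * (q1 - 1) / 2.0
    return s_obs + q1 ** 2 / (2.0 * q2)

# toy data: abundances of six "species" and presence/absence in three samples
print(chao1([120, 35, 7, 2, 1, 1]))
print(chao2([{"a", "b", "c"}, {"a", "b"}, {"a", "d"}]))
```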
Abstract:
An analysis is presented of the diversity and faunal turnover of Jurassic ammonites related to transgressive/regressive events. The data set contained 400 genera and 1548 species belonging to 67 ammonite zones covering the entire Jurassic System. These data were used to construct faunal turnover and ammonite diversity curves, which were correlated with sea-level fluctuation curves. Twenty-four events of ammonite faunal turnover are analyzed throughout the Jurassic. The most important took place at the Sinemurian-Carixian boundary, latest Carixian-Middle Domerian, Domerian-Toarcian boundary, latest Middle Toarcian-Late Toarcian, Toarcian-Aalenian boundary, latest Aalenian-earliest Bajocian, latest Early Bajocian-earliest Late Bajocian, Early Bathonian-Middle Bathonian boundary, latest Middle Bathonian-earliest Late Bathonian, latest Bathonian-Early Callovian, earliest Early Oxfordian-Middle Oxfordian, earliest Late Oxfordian-latest Oxfordian, latest Early Kimmeridgian, Late Kimmeridgian, middle Early Tithonian and the Early Tithonian-Late Tithonian boundary. More than 75 percent of these turnovers correlate with regressive-transgressive cycles in the Exxon and/or Hallam sea-level curves. In most cases the extinction events coincide with regressive intervals, whereas origination and radiation events are related to transgressive cycles. The turnovers frequently coincide with major or minor discontinuities in the Subbetic basin (Betic Cordillera).
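A minimal sketch of how per-zone origination, extinction and turnover counts could be derived from taxon range data of this kind; the zone labels and ranges below are hypothetical, and this is an assumed workflow rather than the authors' procedure.

```python
# Assumed workflow (not the authors' code): given first and last occurrence zones
# for each genus, count per-zone originations, extinctions, standing diversity
# and a simple turnover score.

zones = ["ZoneA", "ZoneB", "ZoneC", "ZoneD"]   # ordered ammonite zones (hypothetical labels)
ranges = {                                     # genus -> (first zone, last zone), toy data
    "Genus1": ("ZoneA", "ZoneC"),
    "Genus2": ("ZoneB", "ZoneB"),
    "Genus3": ("ZoneB", "ZoneD"),
}

index = {z: i for i, z in enumerate(zones)}
for i, zone in enumerate(zones):
    originations = sum(1 for f, _ in ranges.values() if index[f] == i)
    extinctions = sum(1 for _, l in ranges.values() if index[l] == i)
    diversity = sum(1 for f, l in ranges.values() if index[f] <= i <= index[l])
    turnover = (originations + extinctions) / diversity if diversity else 0.0
    print(f"{zone}: S={diversity} O={originations} E={extinctions} turnover={turnover:.2f}")
```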
Abstract:
The experiential sampling method (ESM) was used to collect data from 74 part-time students who described and assessed the risks involved in their current activities when interrupted at random moments by text messages. The major categories of perceived risk were short-term in nature and involved loss of time or materials related to work and physical damage (e.g., from transportation). Using techniques of multilevel analysis, we demonstrate effects of gender, emotional state, and types of risk on assessments of risk. Specifically, females do not differ from males in assessing the potential severity of risks but they see these as more likely to occur. Also, participants assessed risks to be lower when in more positive self-reported emotional states. We further demonstrate the potential of ESM by showing that risk assessments associated with current actions exceed those made retrospectively. We conclude by noting advantages and disadvantages of ESM for collecting data about risk perceptions.
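The abstract refers to multilevel analysis of repeated momentary risk assessments nested within participants. Below is a hedged sketch of a random-intercept model with statsmodels; the file name and the column names (participant_id, risk_rating, gender, positive_affect, risk_type) are hypothetical, not taken from the study.

```python
# Hedged sketch of a random-intercept multilevel model for ESM data: repeated
# risk assessments nested within participants. All names below are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("esm_risk_data.csv")   # hypothetical file, one row per ESM signal

model = smf.mixedlm(
    "risk_rating ~ gender + positive_affect + C(risk_type)",  # fixed effects
    data=df,
    groups=df["participant_id"],                               # random intercept per participant
)
result = model.fit()
print(result.summary())
```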
Abstract:
The network choice revenue management problem models customers as choosing from an offer set, and the firm decides the best subset to offer at any given moment to maximize expected revenue. The resulting dynamic program for the firm is intractable and is approximated by a deterministic linear program called the CDLP, which has an exponential number of columns. However, under the choice-set paradigm, when the segment consideration sets overlap the CDLP is difficult to solve. Column generation has been proposed, but finding an entering column has been shown to be NP-hard. In this paper, starting with a concave program formulation based on segment-level consideration sets called SDCP, we add a class of constraints called product constraints that project onto subsets of intersections. In addition we propose a natural direct tightening of the SDCP called ?SDCP, and compare the performance of both methods on the benchmark data sets in the literature. Both the product constraints and the ?SDCP method are very simple and easy to implement and are applicable to the case of overlapping segment consideration sets. In our computational testing on the benchmark data sets in the literature, SDCP with product constraints achieves the CDLP value at a fraction of the CPU time taken by column generation and, we believe, is a very promising approach for quickly approximating CDLP when segment consideration sets overlap and the consideration sets themselves are relatively small.
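For readers unfamiliar with the CDLP, the sketch below sets up a toy choice-based deterministic LP with enumerated offer sets, MNL choice probabilities and scipy's linprog. It illustrates the formulation being approximated, not the SDCP or the product constraints proposed in the paper; every number in it is invented.

```python
# Toy CDLP sketch (illustrative only): enumerate all offer sets over a few
# products, compute MNL purchase probabilities, and solve the deterministic LP
#   max  sum_S lam * R(S) * t_S
#   s.t. resource capacities and total time,
# where R(S) is the expected revenue rate of offering set S.
from itertools import chain, combinations
import numpy as np
from scipy.optimize import linprog

products = [0, 1, 2]
revenue = np.array([100.0, 80.0, 60.0])      # fares r_j (made up)
v = np.array([1.0, 1.5, 2.0])                # MNL preference weights
v0 = 1.0                                     # no-purchase weight
A = np.array([[1, 0, 1],                     # resource-product incidence a_ij
              [0, 1, 1]])
capacity = np.array([20.0, 25.0])
lam, T = 1.0, 50.0                           # arrival rate and horizon length

offer_sets = [list(s) for s in chain.from_iterable(
    combinations(products, k) for k in range(1, len(products) + 1))]

obj, cons = [], []
for S in offer_sets:
    denom = v0 + v[S].sum()
    p = np.zeros(len(products))
    p[S] = v[S] / denom                      # P_j(S) under MNL
    obj.append(lam * (revenue @ p))          # expected revenue rate R(S)
    cons.append(lam * (A @ p))               # expected resource consumption

# linprog minimizes, so negate the objective; add the time constraint sum_S t_S <= T
A_ub = np.vstack([np.array(cons).T, np.ones(len(offer_sets))])
b_ub = np.concatenate([capacity, [T]])
res = linprog(-np.array(obj), A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * len(offer_sets))

for S, t in zip(offer_sets, res.x):
    if t > 1e-6:
        print(f"offer set {S}: {t:.2f} time units")
print("CDLP upper bound on revenue:", -res.fun)
```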
Abstract:
Geographical body size variation has long interested evolutionary biologists, and a range of mechanisms have been proposed to explain the observed patterns. It is considered to be more puzzling in ectotherms than in endotherms, and integrative approaches are necessary for testing non-exclusive alternative mechanisms. Using lacertid lizards as a model, we adopted an integrative approach, testing different hypotheses for both sexes while incorporating temporal, spatial, and phylogenetic autocorrelation at the individual level. We used data on the Spanish Sand Racer species group from a field survey to disentangle different sources of body size variation through environmental and individual genetic data, while accounting for temporal and spatial autocorrelation. A variation partitioning method was applied to separate independent and shared components of ecology and phylogeny and to estimate their significance. We then fed the models back by controlling for the relevant independent components. The pattern was consistent with the geographical Bergmann's cline and the experimental temperature-size rule: adults were larger at lower temperatures (and/or higher elevations). This result was confirmed with an additional multi-year independent data set derived from the literature. Variation partitioning showed no sex differences in phylogenetic inertia but did show sex differences in the independent component of ecology, primarily due to growth differences. Interestingly, only after controlling for independent components did primary productivity also emerge as an important predictor explaining size variation in both sexes. This study highlights the importance of integrating individual-based genetic information, relevant ecological parameters, and temporal and spatial autocorrelation in sex-specific models to detect potentially important hidden effects. Our individual-based approach, devoted to extracting and controlling for independent components, was useful for revealing hidden effects linked to alternative non-exclusive hypotheses, such as that of primary productivity. Also, including the measurement date allowed us to disentangle and control for short-term temporal autocorrelation reflecting sex-specific growth plasticity.
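A hedged sketch of the basic variation-partitioning step mentioned above: separating the independent and shared components of two predictor sets ("ecology" and "phylogeny") via R-squared values from ordinary least squares. The data and predictor names are simulated placeholders, not the study's models.

```python
# Hedged sketch of variation partitioning between two predictor sets
# ("ecology" E and "phylogeny" P) for body size, using R^2 from OLS fits.
# The data below are simulated placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
E = rng.normal(size=(n, 3))                  # e.g. temperature, elevation, productivity
P = rng.normal(size=(n, 2))                  # e.g. phylogenetic eigenvectors
size = E @ [0.5, -0.3, 0.2] + P @ [0.4, 0.1] + rng.normal(scale=1.0, size=n)

def r2(X, y):
    return LinearRegression().fit(X, y).score(X, y)

r2_E = r2(E, size)
r2_P = r2(P, size)
r2_EP = r2(np.hstack([E, P]), size)

pure_E = r2_EP - r2_P          # independent ecological component
pure_P = r2_EP - r2_E          # independent phylogenetic component
shared = r2_E + r2_P - r2_EP   # shared (overlapping) component
print(f"pure ecology {pure_E:.3f}, pure phylogeny {pure_P:.3f}, shared {shared:.3f}")
```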
Abstract:
It is shown how correspondence analysis may be applied to a subset of response categories from a questionnaire survey, for example the subset of undecided responses or the subset of responses for a particular category. The idea is to maintain the original relative frequencies of the categories and not re-express them relative to totals within the subset, as would normally be done in a regular correspondence analysis of the subset. Furthermore, the masses and chi-square metric assigned to the data subset are the same as those in the correspondence analysis of the whole data set. This variant of the method, called Subset Correspondence Analysis, is illustrated on data from the ISSP survey on Family and Changing Gender Roles.
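A hedged numpy sketch of the idea as described in the abstract: compute standardised residuals with the full table's masses and chi-square metric, then decompose only the chosen subset of columns without renormalising. The toy table and subset are invented, and this is one reading of the abstract rather than the authors' code.

```python
# Hedged sketch of subset correspondence analysis: row/column masses and the
# chi-square metric come from the FULL table, and the decomposition is
# restricted to a chosen subset of columns without renormalising.
import numpy as np

N = np.array([[20, 10,  5, 15],              # toy contingency table (rows x categories)
              [30,  5, 10,  5],
              [10, 15, 20, 10]], dtype=float)
subset = [1, 2]                              # column indices kept in the subset analysis

P = N / N.sum()                              # correspondence matrix
r = P.sum(axis=1)                            # row masses (full table)
c = P.sum(axis=0)                            # column masses (full table)

# standardised residuals using full-table masses and metric
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))

# restrict to the subset of columns only AFTER standardisation
U, sv, Vt = np.linalg.svd(S[:, subset], full_matrices=False)

row_coords = (U * sv) / np.sqrt(r)[:, None]            # principal row coordinates
col_coords = (Vt.T * sv) / np.sqrt(c[subset])[:, None]  # principal column coordinates
print("inertia of subset:", (sv ** 2).sum())
print(row_coords)
```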
Abstract:
This paper examines the application of the guidelines for evidence-based treatments in family therapy developed by Sexton and collaborators to a set of treatment models. These guidelines classify the models using criteria that take into account the distinctive features of couple and family treatments. A two-step approach was taken: (1) The quality of each of the studies supporting the treatment models was assessed according to a list of ad hoc core criteria; (2) the level of evidence of each treatment model was determined using the guidelines. To reflect the stages of empirical validation present in the literature, nine models were selected: three models each with high, moderate, and low levels of empirical validation, determined by the number of randomized clinical trials (RCTs). The quality ratings highlighted the strengths and limitations of each of the studies that provided evidence backing the treatment models. The classification by level of evidence indicated that four of the models were level III, "evidence-based" treatments; one was a level II, "evidence-informed treatment with promising preliminary evidence-based results"; and four were level I, "evidence-informed" treatments. Using the guidelines helped identify treatments that are solid in terms of not only the number of RCTs but also the quality of the evidence supporting the efficacy of a given treatment. From a research perspective, this analysis highlighted areas to be addressed before some models can move up to a higher level of evidence. From a clinical perspective, the guidelines can help identify the models whose studies have produced clinically relevant results.
Abstract:
In earlier work, the present authors have shown that hardness profiles are less dependent on the level of calculation than energy profiles are for potential energy surfaces (PESs) with pathological behavior. In contrast to energy profiles, hardness profiles always show the correct number of stationary points. This characteristic has been used to indicate the existence of spurious stationary points on the PESs. In the present work, we apply this methodology to the hydrogen fluoride dimer, a classically difficult case for density functional theory methods.
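For context, a hardness profile plots the chemical hardness along the reaction coordinate. The definitions below are standard conceptual-DFT background, not taken from this abstract; the factor 1/2 is a convention that some authors drop.

```latex
% Standard conceptual-DFT definition of chemical hardness (general background,
% not from this abstract); a hardness profile evaluates eta along the reaction path.
\[
\eta \;=\; \tfrac{1}{2}\left(\frac{\partial^{2} E}{\partial N^{2}}\right)_{v(\mathbf{r})}
\;\approx\; \frac{I - A}{2}
\;\approx\; \frac{\varepsilon_{\mathrm{LUMO}} - \varepsilon_{\mathrm{HOMO}}}{2}
\]
```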
Abstract:
In this paper, we propose two active learning algorithms for semiautomatic definition of training samples in remote sensing image classification. Based on predefined heuristics, the classifier ranks the unlabeled pixels and automatically chooses those that are considered the most valuable for its improvement. Once the pixels have been selected, the analyst labels them manually and the process is iterated. Starting with a small and nonoptimal training set, the model itself builds the optimal set of samples which minimizes the classification error. We have applied the proposed algorithms to a variety of remote sensing data, including very high resolution and hyperspectral images, using support vector machines. Experimental results confirm the consistency of the methods. The required number of training samples can be reduced to 10% using the methods proposed, reaching the same level of accuracy as larger data sets. A comparison with a state-of-the-art active learning method, margin sampling, is provided, highlighting advantages of the methods proposed. The effect of spatial resolution and separability of the classes on the quality of the selection of pixels is also discussed.
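As a hedged illustration of the margin-sampling baseline mentioned above (not the two algorithms proposed in the paper), the sketch below runs a margin-based active learning loop with a scikit-learn SVM on synthetic data: at each iteration the pool sample closest to the decision boundary is "labelled" and added to the training set.

```python
# Hedged sketch of margin-sampling active learning with an SVM on synthetic data;
# this is the comparison baseline named in the abstract, not the proposed methods.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=10, n_informative=5, random_state=0)
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])  # small seed set
pool = [i for i in range(len(y)) if i not in labeled]

clf = SVC(kernel="rbf", gamma="scale")
for iteration in range(20):
    clf.fit(X[labeled], y[labeled])
    # margin sampling: pick the pool sample closest to the decision boundary
    margins = np.abs(clf.decision_function(X[pool]))
    pick = pool.pop(int(np.argmin(margins)))
    labeled.append(pick)                      # the analyst would label this pixel
    acc = clf.score(X[pool], y[pool])
    print(f"iter {iteration:02d}: {len(labeled)} labeled, pool accuracy {acc:.3f}")
```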
Abstract:
Aims/hypothesis: We assessed systemic and local muscle fuel metabolism during aerobic exercise in patients with type 1 diabetes at euglycaemia and hyperglycaemia with identical insulin levels. Methods: This was a single-blinded randomised crossover study at a university diabetes unit in Switzerland. We studied seven physically active men with type 1 diabetes (mean +/- SEM age 33.5 +/- 2.4 years, diabetes duration 20.1 +/- 3.6 years, HbA(1c) 6.7 +/- 0.2% and peak oxygen uptake [VO2peak] 50.3 +/- 4.5 ml min(-1) kg(-1)). Men were studied twice while cycling for 120 min at 55 to 60% of VO2peak, with the blood glucose level randomly set either at 5 or 11 mmol/l and identical insulinaemia. The participants were blinded to the glycaemic level; allocation concealment was by opaque, sealed envelopes. Magnetic resonance spectroscopy was used to quantify intramyocellular glycogen and lipids before and after exercise. Indirect calorimetry and measurement of stable isotopes and counter-regulatory hormones complemented the assessment of local and systemic fuel metabolism. Results: The contribution of lipid oxidation to overall energy metabolism was higher in euglycaemia than in hyperglycaemia (49.4 +/- 4.8 vs 30.6 +/- 4.2%; p<0.05). Carbohydrate oxidation accounted for 48.2 +/- 4.7 and 66.6 +/- 4.2% of total energy expenditure in euglycaemia and hyperglycaemia, respectively (p<0.05). The level of intramyocellular glycogen before exercise was higher in hyperglycaemia than in euglycaemia (3.4 +/- 0.3 vs 2.7 +/- 0.2 arbitrary units [AU]; p<0.05). Absolute glycogen consumption tended to be higher in hyperglycaemia than in euglycaemia (1.3 +/- 0.3 vs 0.9 +/- 0.1 AU). Cortisol and growth hormone increased more strongly in euglycaemia than in hyperglycaemia (levels at the end of exercise 634 +/- 52 vs 501 +/- 32 nmol/l and 15.5 +/- 4.5 vs 7.4 +/- 2.0 ng/ml, respectively; p<0.05). Conclusions/interpretation: Substrate oxidation in type 1 diabetic patients performing aerobic exercise in euglycaemia is similar to that in healthy individuals, revealing a shift towards lipid oxidation during exercise. In hyperglycaemia, fuel metabolism in these patients is dominated by carbohydrate oxidation. Intramyocellular glycogen was not spared in hyperglycaemia.
Abstract:
Mass spectrometry-based proteomics is the study of the proteome - the set of all expressed proteins in a cell, tissue or organism - using mass spectrometry. Proteins are cut into smaller pieces - peptides - using proteolytic enzymes and separated using different separation techniques. The different fractions, each containing several hundred peptides, are then analyzed by mass spectrometry. The masses of the peptides entering the instrument are recorded and each peptide is sequentially fragmented to obtain its amino acid sequence. Each peptide sequence with its corresponding mass is then searched against a protein database to identify the protein to which it belongs. This thesis presents new method developments in this field. In a first part, the thesis describes the development of identification methods. It shows the importance of protein enrichment methods for gaining access to medium-to-low-abundance proteins in a human milk sample. It uses repeated injections to increase protein coverage and confidence in identification and demonstrates the impact of new database releases on protein identification lists. In addition, it successfully uses mass spectrometry as an alternative to antibody-based assays to validate the presence of 34 different recombinant constructs of Staphylococcus aureus pathogenic proteins expressed in a Lactococcus lactis strain. In a second part, the development of quantification methods is described. The thesis presents new stable isotope labeling approaches based on N- and C-terminus labeling of proteins and describes the first method for labeling carboxylic groups at the protein level using 13C stable isotopes. In addition, a new quantitative approach called ANIBAL is presented that labels all amino and carboxylic groups at the protein level.
Abstract:
Monitoring thunderstorm activity is an essential part of operational weather surveillance given the potential hazards, including lightning, hail, heavy rainfall, strong winds or even tornadoes. This study has two main objectives: firstly, the description of a methodology, based on radar and total lightning data, to characterise thunderstorms in real time; secondly, the application of this methodology to 66 thunderstorms that affected Catalonia (NE Spain) in the summer of 2006. An object-oriented tracking procedure is employed, where different observation data types generate four different types of objects (radar 1-km CAPPI reflectivity composites, radar reflectivity volumetric data, cloud-to-ground (CG) lightning data and intra-cloud (IC) lightning data). In the proposed framework, these objects are the building blocks of a higher-level object, the thunderstorm. The methodology is demonstrated with a dataset of thunderstorms whose main characteristics, along the complete life cycle of the convective structures (development, maturity and dissipation), are described statistically. The development and dissipation stages present similar durations in most cases examined. In contrast, the duration of the maturity phase is much more variable and related to thunderstorm intensity, defined here in terms of lightning flash rate. Most of the IC and CG flash activity is registered in the maturity stage. In the development stage few CG flashes are observed (2% to 5%), while in the dissipation phase slightly more CG flashes are observed (10% to 15%). Additionally, a selection of thunderstorms is used to examine general life cycle patterns, obtained from the analysis of thunderstorm parameters normalised with respect to the total thunderstorm duration and to the maximum value of the variables considered. Among other findings, the study indicates that the normalised duration of the three stages of the thunderstorm life cycle is similar in most thunderstorms, with the longest duration corresponding to the maturity stage (approximately 80% of the total time).
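A minimal sketch of the normalisation step described above: a thunderstorm parameter (here a made-up flash-rate series) rescaled to relative life-cycle time and to its maximum value, with a purely illustrative threshold used to split the three stages; neither the data nor the threshold come from the study.

```python
# Hedged sketch of life-cycle normalisation: rescale a storm parameter to
# relative time [0, 1] and to its maximum, then split stages with a
# hypothetical 80%-of-maximum threshold (illustrative only).
import numpy as np

minutes = np.arange(0, 90, 5)                # observation times of a toy storm
flash_rate = np.array([0, 1, 3, 8, 15, 25, 30, 28, 27, 26, 20, 12, 6, 3, 1, 0, 0, 0], float)

t_norm = (minutes - minutes[0]) / (minutes[-1] - minutes[0])   # 0 = start, 1 = end
f_norm = flash_rate / flash_rate.max()                          # 1 = storm maximum

above = f_norm >= 0.8                                           # hypothetical stage threshold
development_end = t_norm[np.argmax(above)]                      # first time above threshold
dissipation_start = t_norm[len(above) - 1 - np.argmax(above[::-1])]  # last time above threshold
print(f"development: 0.00-{development_end:.2f} of life cycle")
print(f"maturity:    {development_end:.2f}-{dissipation_start:.2f}")
print(f"dissipation: {dissipation_start:.2f}-1.00")
```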
Abstract:
The purpose of this study was to evaluate a new method of measuring rolling resistance in treadmill cycling and to establish its sensitivity and reproducibility. One participant was asked to keep a bicycle in equilibrium, without pedalling, on a treadmill running at a constant speed of 5.56 m x s(-1); the bicycle was held in place at the front by a dynamometer. For each condition, the method consisted of 11 measurements of the force required to hold the bicycle at different treadmill slopes (0-10%, in increments of 1%). The coefficient of rolling resistance was calculated from the forces applied to the bicycle in equilibrium. To test the sensitivity of the method, the bicycle was successively equipped with three tyre types (700 x 28, 700 x 23, 700 x 22) and the inflation pressure was set at 150, 300, 600, 900, and 1100 kPa. To test the reproducibility of the method, a second experimenter repeated all measurements done with the 700 x 23 tyres. The method was sensitive enough to detect an effect of both tyre type and inflation pressure (P < 0.001; two-way ANOVA). The measurement of the coefficient of rolling resistance by two separate experimenters resulted in a small bias of 0.00029 (95% CI, -0.00011 to 0.00068). In conclusion, the new method is sensitive and reliable, as well as being simple and affordable.
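One plausible reconstruction of the calculation (not verified against the paper): if the holding force balances the downslope weight component plus rolling resistance, F = m g (sin theta + Crr cos theta), then Crr follows from a least-squares fit over the 11 slope settings. The mass, noise level and "measured" forces below are made up.

```python
# Plausible reconstruction (assumed force balance, not the paper's exact equations):
# with the bicycle in equilibrium on an inclined treadmill, the holding force is
# modelled as F = m*g*(sin(theta) + Crr*cos(theta)); Crr is estimated by least
# squares across the 11 slope settings. All numbers are invented.
import numpy as np

m, g = 85.0, 9.81                                   # rider + bicycle mass (kg), gravity
grades = np.arange(0, 11) / 100.0                   # treadmill slopes 0-10%
theta = np.arctan(grades)

crr_true = 0.004                                    # value used to simulate "measurements"
F_measured = m * g * (np.sin(theta) + crr_true * np.cos(theta))
F_measured += np.random.default_rng(1).normal(scale=0.5, size=theta.size)  # sensor noise

# least-squares estimate: F - m*g*sin(theta) = Crr * m*g*cos(theta)
y = F_measured - m * g * np.sin(theta)
x = m * g * np.cos(theta)
crr_hat = float(np.dot(x, y) / np.dot(x, x))
print(f"estimated Crr = {crr_hat:.5f}")
```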
Abstract:
BACKGROUND: Extensive research exists estimating the effect of hazardous alcohol use on morbidity and mortality, but little research quantifies the association between alcohol consumption and utility scores in patients with alcohol dependence. In the context of comparative research, the World Health Organisation (WHO) proposed to categorise the risk for alcohol-related acute and chronic harm according to patients' average daily alcohol consumption. OBJECTIVES: To estimate utility scores associated with each category of the WHO drinking risk-level classification in patients with alcohol dependence (AD). METHODS: We used data from CONTROL, an observational cohort study including 143 AD patients from the Alcohol Treatment Center at Lausanne University Hospital, followed for 12 months. Average daily alcohol consumption was assessed monthly using the Timeline Followback method and patients were categorised according to the WHO drinking risk-level classification: abstinent, low, medium, high and very high. Other measures, such as sociodemographic characteristics and utility scores derived from the EuroQoL 5-Dimensions questionnaire (EQ-5D), were collected every three months. Mixed models for repeated measures were used to estimate mean utility scores associated with WHO drinking risk-level categories. RESULTS: A total of 143 patients were included, and the 12-month follow-up permitted the assessment of 1318 person-months. At baseline the mean age of the patients was 44.6 years (SD 11.8) and the majority of patients were male (63.6%). Using repeated measures analysis, utility scores decreased with increasing drinking levels, ranging from 0.80 in abstinent patients to 0.62 in patients with a very high risk drinking level (p < 0.0001). CONCLUSIONS: In this sample of patients with alcohol dependence undergoing specialized care, utility scores estimated from the EQ-5D appeared to vary substantially and consistently according to patients' WHO drinking level.
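A hedged sketch of the repeated-measures analysis described: EQ-5D utilities modelled with a patient-level random intercept and the WHO drinking risk level as a categorical fixed effect, using statsmodels. The file and column names (patient_id, utility, who_level) are hypothetical placeholders for the cohort data.

```python
# Hedged sketch of a mixed model for repeated EQ-5D utilities with a random
# intercept per patient; drinking risk level enters as a categorical fixed effect.
# All file and column names below are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("control_cohort.csv")              # hypothetical extract of the cohort data

model = smf.mixedlm("utility ~ C(who_level, Treatment(reference='abstinent'))",
                    data=df, groups=df["patient_id"])
fit = model.fit()
print(fit.summary())

# adjusted mean utility per drinking level = intercept + level coefficient
intercept = fit.params["Intercept"]
for name, coef in fit.params.items():
    if name.startswith("C(who_level"):
        print(name, round(intercept + coef, 3))
```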