26 results for "Method of moments algorithm"


Relevance: 100.00%

Abstract:

In principle, the world and life itself are the contexts of theatrical events. The term context is broad and thus seems hardly usable; it only becomes meaningful when terminologies and methodologies determine which parts of a context are to be incorporated and analysed for which theatrical event. This presentation exemplifies a method that is particularly suitable for sensibly selecting the most important contexts for research in theatre history. The complexity of representation increases continuously from The Presentation of Self in Everyday Life, through Brecht's "Street Scene" and "Everyday Theatre" and portrayals of rulers in feasts and parades, to Hamlet productions by the Royal Shakespeare Company or a Wagner opera in Bayreuth. The different forms of theatre thus constitute a continuum spanning from "everyday theatre" to "art theatre". The representation of the world in this continuum is sometimes questioned by the means of theatre itself, for example when the Commedia dell'arte takes a critical stance towards the representative theatre of the humanists, or when playful devices such as reversal, parody and fragmentation, as employed by the Vice character, challenge the representative character of productions. A second component has an impact on the continuum without being a theatrical device: attitudes, opinions, norms and prohibitions originating in society. As excerpts of contexts, they refer to single forms of theatre in the continuum. The result is a complex system of four components, evolving from the panorama between the antipodes "everyday theatre" and "art theatre" and the two spheres of influence, only one of which uses theatrical devices. All components interact in a specific time frame, in a specific place and in a specific way in each case, which can then be described as the theatricality of that time frame. This presentation will deal with what the concept is capable of.

Relevance: 100.00%

Abstract:

We develop statistical procedures for estimating the shape and orientation of arbitrary three-dimensional particles. We focus on the case where the particles cannot be observed directly, but only via sections. Volume tensors are used for describing particle shape and orientation, and we derive stereological estimators of the tensors. These estimators are combined to provide consistent estimators of the moments of the so-called particle cover density. The covariance structure associated with the particle cover density depends on the orientation and shape of the particles; for instance, if the distribution of the typical particle is invariant under rotations, then the covariance matrix is proportional to the identity matrix. We develop a non-parametric test for such isotropy. A flexible Lévy-based particle model is proposed, which may be analysed using a generalized method of moments in which the volume tensors enter. The developed methods are used to study the cell organization in the human brain cortex.
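To make the estimation principle concrete, here is a minimal sketch of a generalized method of moments fit in Python: the first two empirical moments of synthetic particle volumes are matched to a lognormal model. It only illustrates the GMM machinery the abstract invokes; the paper's actual moment conditions are built from stereologically estimated volume tensors, which are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic "particle volumes"; the real data would come from sections.
rng = np.random.default_rng(0)
volumes = rng.lognormal(mean=1.0, sigma=0.5, size=1000)

def moment_conditions(theta, v):
    mu, sigma = theta
    # Theoretical raw moments of a lognormal distribution.
    m1 = np.exp(mu + 0.5 * sigma**2)
    m2 = np.exp(2.0 * mu + 2.0 * sigma**2)
    return np.array([v.mean() - m1, (v**2).mean() - m2])

def gmm_objective(theta, v, W=np.eye(2)):
    g = moment_conditions(theta, v)
    return g @ W @ g  # quadratic form in the moment violations

fit = minimize(gmm_objective, x0=[0.0, 1.0], args=(volumes,),
               bounds=[(None, None), (1e-6, None)])
print("estimated (mu, sigma):", fit.x)
```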

Relevance: 100.00%

Abstract:

Despite numerous studies of nitrogen cycling in forest ecosystems, many uncertainties remain, especially regarding longer-term nitrogen accumulation. To contribute to filling this gap, the dynamic process-based model TRACE, which can simulate 15N tracer redistribution in forest ecosystems, was used to study N-cycling processes in a mountain spruce forest on the northern edge of the Alps in Switzerland (Alptal, SZ). Most modeling analyses of N cycling and C-N interactions have very limited ability to determine whether process interactions are captured correctly: because the interactions in such a system are complex, a model can get whole-system C and N cycling right without it being clear whether the way the model combines fine-scale interactions to derive whole-system cycling is correct. With its ability to simulate 15N tracer redistribution in ecosystem compartments, TRACE provides a very powerful tool for validating the fine-scale processes captured by the model. We first adapted the model to the new site (Alptal, Switzerland; a long-term low-dose N-amendment experiment) by including a new algorithm for preferential water flow and by parameterizing differences in drivers such as climate, N deposition and initial site conditions. After calibrating key rates such as NPP and SOM turnover, we simulated patterns of 15N redistribution and compared them against 15N field observations from a large-scale labeling experiment. The comparison of the 15N field data with the modeled redistribution of the tracer in the soil horizons and vegetation compartments shows that the majority of fine-scale processes are captured satisfactorily. In particular, the model reproduces the fact that the largest part of the N deposition is immobilized in the soil. The discrepancies in 15N recovery in the LF and M soil horizons can be explained by the application method of the tracer and by the retention of the applied tracer in the well-developed moss layer, which is not considered in the model. Discrepancies in the dynamics of foliage and litterfall 15N recovery were also observed and are related to the longevity of the needles in our mountain forest. As a next step, we will use the final Alptal version of the model to calculate the effects of climate change (temperature, CO2) and N deposition on ecosystem C sequestration in this regionally representative Norway spruce (Picea abies) stand.
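As a purely illustrative aside, the core idea of simulating 15N tracer redistribution can be sketched with a toy two-pool model in which labelled N moves between soil and vegetation at first-order rates. TRACE itself is a full process-based ecosystem model; every pool and rate constant below is invented.

```python
# Toy 15N redistribution sketch: soil <-> plant pools with first-order
# transfer. Not TRACE; rate constants are hypothetical placeholders.
dt, years = 0.01, 10.0                       # time step and horizon (years)
k_uptake, k_litterfall = 0.3, 0.2            # illustrative rate constants (1/yr)
soil_15n, plant_15n = 1.0, 0.0               # tracer applied to the soil at t=0

for _ in range(int(years / dt)):
    uptake = k_uptake * soil_15n * dt        # soil -> plant
    litter = k_litterfall * plant_15n * dt   # plant -> soil via litterfall
    soil_15n += litter - uptake
    plant_15n += uptake - litter

print(f"15N recovery after {years:.0f} yr: soil={soil_15n:.2f}, plant={plant_15n:.2f}")
```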

Relevance: 100.00%

Abstract:

Background: Serologic testing algorithms for recent HIV seroconversion (STARHS) provide important information for HIV surveillance. We have previously demonstrated that a patient's antibody reaction pattern in a confirmatory line immunoassay (INNO-LIA™ HIV I/II Score) provides information on the duration of infection that is unaffected by clinical, immunological and viral variables. In this report we set out to determine the diagnostic performance of Inno-Lia algorithms for identifying incident infections in patients with known duration of infection, and evaluated the algorithms in annual cohorts of HIV notifications. Methods: Diagnostic sensitivity was determined in 527 treatment-naive patients infected for up to 12 months. Specificity was determined in 740 patients infected for longer than 12 months. Plasma was tested by Inno-Lia and classified as either incident (≤12 months) or older infection by 26 different algorithms. Incident infection rates (IIR) were calculated based on the diagnostic sensitivity and specificity of each algorithm and the rule that the total of incident results is the sum of true-incident and false-incident results, which can be calculated by means of the pre-determined sensitivity and specificity. Results: The 10 best algorithms had a mean raw sensitivity of 59.4% and a mean specificity of 95.1%. Adjustment for overrepresentation of patients in the first quarter year of infection further reduced the sensitivity; in the preferred model, the mean adjusted sensitivity was 37.4%. Application of the 10 best algorithms to four annual cohorts of HIV-1 notifications totalling 2'595 patients yielded a mean IIR of 0.35 in 2005/6 (baseline) and of 0.45, 0.42 and 0.35 in 2008, 2009 and 2010, respectively. The increase between baseline and 2008 and the ensuing decreases were highly significant. Other adjustment models yielded different absolute IIRs, although the relative changes between the cohorts were identical for all models. Conclusions: The method can be used for comparing IIRs in annual cohorts of HIV notifications. The use of several different algorithms in combination, each with its own sensitivity and specificity to detect incident infection, is advisable, as this reduces the impact of individual imperfections stemming primarily from relatively low sensitivities and sampling bias.
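The back-calculation rule stated in the Methods can be made explicit with a short sketch: the observed incident count mixes true incidents, detected with sensitivity `sens`, and false incidents from older infections, arising at rate 1 − `spec`. Solving that mixture equation for the true incident count yields the adjusted IIR. The cohort numbers in the example are hypothetical, not taken from the study.

```python
# observed = sens * I + (1 - spec) * (N - I), solved for the true count I.
def adjusted_iir(observed_incident: int, n_total: int,
                 sens: float, spec: float) -> float:
    false_rate = 1.0 - spec
    true_incident = (observed_incident - false_rate * n_total) / (sens - false_rate)
    return max(0.0, true_incident / n_total)  # clamp small-sample negatives

# Example using the mean performance of the 10 best algorithms
# (sens 59.4%, spec 95.1%) on a hypothetical cohort of 1000 notifications:
print(adjusted_iir(observed_incident=300, n_total=1000, sens=0.594, spec=0.951))
```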

Relevance: 100.00%

Abstract:

Today, electronic portal imaging devices (EPIDs) are used primarily to verify patient positioning. They also have potential as 2D dosimeters, however, and could be used as such for transit dosimetry or dose reconstruction. It has been shown that such devices, especially liquid-filled ionization chambers, have a stable dose-response relationship that can be described in terms of the physical properties of the EPID and the pulsed linac radiation. For absolute dosimetry, however, an accurate method of calibration against an absolute dose is needed. In this work, we concentrate on calibration against dose in a homogeneous water phantom. Using a Monte Carlo model of the detector, we calculated dose spread kernels in units of absolute dose per incident energy fluence and compared them to calculated dose spread kernels in water at different depths. The energy of the incident pencil beams varied between 0.5 and 18 MeV. At the depth of dose maximum in water for a 6 MV beam (1.5 cm) and for an 18 MV beam (3.0 cm), we observed large absolute differences between water and detector dose above an incident energy of 4 MeV, but only small relative differences in the most frequent energy range of the beam energy spectra. For a 6 MV beam, the absolute reference dose measured at 1.5 cm water depth differs from the absolute detector dose by 3.8%. At a depth of 1.2 cm in water, however, the relative dose differences are almost constant between 2 and 6 MeV. The effects of changes in the energy spectrum of the beam on the dose responses in water and in the detector are also investigated. We show that differences larger than 2% can occur for different beam qualities of the incident photon beam behind water slabs of different thicknesses; for high-precision dosimetry, such effects therefore have to be taken into account. Nevertheless, the precise information about the dose response of the detector provided in this Monte Carlo study forms the basis for extracting the basic radiometric quantities, photon fluence and photon energy fluence, directly from the detector's signal using a deconvolution algorithm. The results are therefore promising for future applications in absolute transit dosimetry and absolute dose reconstruction.
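A minimal sketch of the deconvolution step mentioned in the conclusion, assuming the detector signal is (approximately) the incident energy fluence convolved with a dose spread kernel: the fluence is recovered by Fourier-domain division, with a small Wiener-style term stabilising frequencies where the kernel is near zero. The Gaussian kernel and square test fluence are synthetic placeholders, not the Monte Carlo kernels of the study.

```python
import numpy as np

def deconvolve(signal, kernel, eps=1e-3):
    # Wiener-regularised inverse filtering: F = S * conj(K) / (|K|^2 + eps).
    S = np.fft.fft2(signal)
    K = np.fft.fft2(kernel, s=signal.shape)
    return np.real(np.fft.ifft2(S * np.conj(K) / (np.abs(K) ** 2 + eps)))

g = np.exp(-np.linspace(-3, 3, 64) ** 2)
kernel = np.outer(g, g)                              # toy dose spread kernel
fluence = np.zeros((64, 64)); fluence[24:40, 24:40] = 1.0
signal = np.real(np.fft.ifft2(np.fft.fft2(fluence) * np.fft.fft2(kernel)))

# Residual error is dominated by edge frequencies the kernel suppresses.
print(np.abs(deconvolve(signal, kernel) - fluence).max())
```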

Relevance: 100.00%

Abstract:

Cloud computing enables the provisioning and distribution of highly scalable services in a reliable, on-demand and sustainable manner. However, the objectives of managing enterprise distributed applications in cloud environments under Service Level Agreement (SLA) constraints lead to challenges in maintaining optimal resource control. Furthermore, conflicting objectives in the management of cloud infrastructure and distributed applications can lead to violations of SLAs and inefficient use of hardware and software resources. This dissertation focuses on how SLAs can be used as an input to the cloud management system (CMS), increasing the efficiency of resource allocation as well as of infrastructure scaling. First, we present an extended SLA semantic model for modelling complex service dependencies in distributed applications and for enabling automated cloud infrastructure management operations. Second, we describe a multi-objective VM allocation algorithm for optimised resource allocation in infrastructure clouds. Third, we describe a method for discovering relations between the performance indicators of services belonging to distributed applications, and for using these relations to build scaling rules that a CMS can use for the automated management of VMs. Fourth, we introduce two novel VM-scaling algorithms, which optimally scale systems composed of VMs based on given SLA performance constraints. All of the presented research was implemented and tested using enterprise distributed applications.
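As a rough illustration of an SLA-driven scaling rule of the kind described above, the sketch below keeps one performance indicator (average response time) within an SLA bound by adding or removing VMs. The thresholds and the 50% scale-in headroom are illustrative choices, not the dissertation's actual algorithms.

```python
def scaling_decision(avg_response_ms: float, sla_limit_ms: float,
                     current_vms: int, min_vms: int = 1) -> int:
    if avg_response_ms > sla_limit_ms:            # SLA violated: scale out
        return current_vms + 1
    if avg_response_ms < 0.5 * sla_limit_ms and current_vms > min_vms:
        return current_vms - 1                    # ample headroom: scale in
    return current_vms                            # within bounds: hold

# Example: response time breaches a hypothetical 200 ms SLA -> add a VM.
print(scaling_decision(avg_response_ms=220.0, sla_limit_ms=200.0, current_vms=3))
```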

Relevance: 100.00%

Abstract:

The effect of a traditional Ethiopian lupin processing method on the chemical composition of lupin seed samples was studied. Two sampling districts, namely Mecha and Sekela, representing the mid- and high-altitude areas of north-western Ethiopia, respectively, were randomly selected. Different types of traditionally processed and marketed lupin seed samples (raw, roasted, and finished) were collected in six replications from each district. Raw samples are unprocessed, roasted samples are roasted using firewood, and finished samples are those ready for human consumption as a snack. Thousand-seed weight for raw and roasted samples within a study district was similar (P > 0.05), but it was lower (P < 0.01) for finished samples than for raw and roasted samples. The crude fibre content of the finished lupin seed sample from Mecha was lower (P < 0.01) than that of the raw and roasted samples, whereas the different lupin samples from Sekela had similar crude fibre contents (P > 0.05). The crude protein and crude fat contents of finished samples within a study district were higher (P < 0.01) than those of raw and roasted samples. Roasting had no effect on the crude protein content of lupin seed samples. The crude ash content of raw and roasted lupin samples within a study district was higher (P < 0.01) than that of finished lupin samples from the respective districts. The quinolizidine alkaloid content of finished lupin samples was lower than that of raw and roasted samples. There was also an interaction effect between location and lupin sample type. The traditional processing method of lupin seeds in Ethiopia thus makes a positive contribution by improving the crude protein and crude fat contents and lowering the alkaloid content of the finished product. The study showed the possibility of adopting the traditional processing method to process bitter white lupin for use as a protein supplement in livestock feed in Ethiopia, but further work has to be done on the processing method and on animal evaluation.

Relevance: 100.00%

Abstract:

Purpose: Ophthalmologists are confronted with a set of different image modalities to diagnose eye tumors, e.g., fundus photography, CT and MRI. These images are often complementary and represent pathologies differently; some aspects of tumors can only be seen in a particular modality. A fusion of modalities would improve the contextual information for diagnosis. The presented work attempts to register color fundus photography with MRI volumes, complementing the low-resolution 3D information in the MRI with high-resolution 2D fundus images. Methods: MRI volumes were acquired from 12 infants under the age of 5 with unilateral retinoblastoma. The contrast-enhanced T1-FLAIR sequence was performed with an isotropic resolution of less than 0.5 mm. Fundus images were acquired with a RetCam camera. For healthy eyes, two landmarks were used: the optic disk and the fovea. The eyes were detected and extracted from the MRI volume using a 3D adaptation of the Fast Radial Symmetry Transform (FRST). The cropped volume was automatically segmented using the Split Bregman algorithm. The optic nerve was enhanced by a Frangi vessel filter, and the optic disk was found by intersecting the nerve with the retina. The fovea position was estimated by constraining it with the angle between the optic and the visual axis as well as the distance from the optic disk. The optical axis was detected automatically by fitting a parabola to the lens surface. On the fundus image, the optic disk and the fovea were detected using the method of Budai et al. Finally, the image was projected onto the segmented surface using the lens position as the camera center. In tumor-affected eyes, the manually segmented tumors were used for the registration instead of the optic disk and macula. Results: In all 12 MRI volumes tested, the 24 eyes were found correctly, including healthy and pathological cases. In healthy eyes, the optic nerve head was found in all of the tested eyes with an error of 1.08 ± 0.37 mm. A successful registration can be seen in figure 1. Conclusions: The presented method is a step toward the automatic fusion of modalities in ophthalmology. The combination enhances the MRI volume with the higher resolution of the color fundus image on the retina. Tumor treatment planning is improved by avoiding critical structures, and disease progression monitoring is made easier.
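The final projection step can be sketched as a ray-sphere intersection: a ray is cast from the camera centre (approximated by the lens position) through a fundus pixel and intersected with the eye surface, here idealised as a sphere. All geometry below (lens position, globe centre and radius) is invented for illustration; the study intersects with the actual segmented retinal surface.

```python
import numpy as np

def ray_sphere(origin, direction, centre, radius):
    # Intersect a ray with a sphere; return the far hit (the retina lies
    # on the opposite side of the globe from the lens).
    d = direction / np.linalg.norm(direction)
    oc = origin - centre
    b = 2.0 * (d @ oc)
    disc = b**2 - 4.0 * (oc @ oc - radius**2)
    if disc < 0:
        return None                        # ray misses the globe
    t = (-b + np.sqrt(disc)) / 2.0
    return origin + t * d

lens = np.array([0.0, 0.0, 11.0])          # hypothetical lens position (mm)
eye_centre, eye_radius = np.zeros(3), 12.0 # idealised eye globe
pixel_ray = np.array([0.05, 0.02, -1.0])   # ray through one fundus pixel
print(ray_sphere(lens, pixel_ray, eye_centre, eye_radius))
```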

Relevance: 100.00%

Abstract:

Point Distribution Models (PDMs) are among the most popular shape description techniques, and their usefulness has been demonstrated in a wide variety of medical imaging applications. However, to adequately characterize the underlying modeled population it is essential to have a representative number of training samples, which is not always possible. This problem becomes especially relevant as the complexity of the modeled structure increases, with the modeling of ensembles of multiple 3D organs being one of the most challenging cases. In this paper, we introduce a new GEneralized Multi-resolution PDM (GEM-PDM) in the context of multi-organ analysis, able to efficiently characterize the different inter-object relations as well as the particular locality of each object separately. Importantly, unlike previous approaches, the configuration of the algorithm is automated thanks to a new agglomerative landmark clustering method proposed here, which also allows us to identify smaller anatomically significant regions within organs. The significant advantage of the GEM-PDM over two previous approaches (PDM and hierarchical PDM), in terms of shape modeling accuracy and robustness to noise, has been successfully verified on two different databases of sets of multiple organs: six subcortical brain structures and seven abdominal organs. Finally, we propose the integration of the new shape modeling framework into an active-shape-model-based segmentation algorithm. The resulting algorithm, named GEMA, provides better overall performance than the two classical approaches tested, ASM and hierarchical ASM, when applied to the segmentation of 3D brain MRI.
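For context, the classical PDM that GEM-PDM generalizes can be sketched in a few lines: aligned landmark configurations are stacked, and PCA yields a mean shape plus principal modes of variation, so that new shapes are generated as x = mean + P·b. GEM-PDM's multi-resolution, multi-object partitioning is not reproduced here, and the training shapes below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n_shapes, n_landmarks = 40, 30

# Synthetic training set: circles with random scale as "aligned" shapes.
theta = np.linspace(0.0, 2.0 * np.pi, n_landmarks)
base = np.stack([np.cos(theta), np.sin(theta)], axis=1)
shapes = (base[None] * rng.normal(1.0, 0.05, (n_shapes, 1, 1))).reshape(n_shapes, -1)

mean_shape = shapes.mean(axis=0)
U, s, Vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
P = Vt[:3].T                                    # first 3 modes of variation
b = rng.normal(0.0, s[:3] / np.sqrt(n_shapes))  # plausible mode weights
new_shape = (mean_shape + P @ b).reshape(n_landmarks, 2)
print(new_shape.shape)
```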

Relevance: 100.00%

Abstract:

Behavior is one of the most important indicators for assessing cattle health and well-being. The objective of this study was to develop and validate a novel algorithm to monitor the locomotor behavior of loose-housed dairy cows based on the output of the RumiWatch pedometer (ITIN+HOCH GmbH, Fütterungstechnik, Liestal, Switzerland). Locomotion data were acquired by pedometer measurements at a sampling rate of 10 Hz, with simultaneous video recordings for later manual observation. The study consisted of 3 independent experiments: experiment 1 was carried out to develop and validate the algorithm for lying behavior, experiment 2 for walking and standing behavior, and experiment 3 for stride duration and stride length. The final version was validated using raw data collected from cows not included in the development of the algorithm. Spearman correlation coefficients were calculated between accelerometer variables and the respective data derived from the video recordings (gold standard). Dichotomous data were expressed as the proportion of correctly detected events, and the overall difference for continuous data was expressed as the relative measurement error. The proportions of correctly detected events or bouts were 1 for stand-ups, lie-downs, standing bouts, and lying bouts, and 0.99 for walking bouts. The relative measurement error and Spearman correlation coefficient were 0.09% and 1 for lying time; 4.7% and 0.96 for standing time; 17.12% and 0.96 for walking time; 6.23% and 0.98 for number of strides; 6.65% and 0.75 for stride duration; and 11.92% and 0.81 for stride length, respectively. The strong to very high correlations between visual observation and converted pedometer data indicate that the novel RumiWatch algorithm may markedly improve automated livestock management systems for efficient health monitoring of dairy cows.
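Purely as an illustration of pedometer-based behavior classification, and not the validated RumiWatch algorithm (whose internals the abstract does not disclose), a toy classifier might threshold a window's static and dynamic acceleration components, since leg angle separates lying from upright postures and movement intensity separates standing from walking. All thresholds below are invented.

```python
import numpy as np

def classify_window(acc_window):
    tilt = acc_window.mean()       # static (gravity) component ~ leg angle
    activity = acc_window.std()    # dynamic component ~ movement intensity
    if tilt < 0.3:                 # leg near horizontal -> lying
        return "lying"
    return "walking" if activity > 0.2 else "standing"

# Two synthetic 5-second windows at 10 Hz: quiet, then high-variance motion.
quiet = np.random.default_rng(2).normal(1.0, 0.05, 50)
moving = np.random.default_rng(3).normal(1.0, 0.40, 50)
print(classify_window(quiet), classify_window(moving))  # standing walking
```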

Relevance: 100.00%

Abstract:

Asynchronous level-crossing sampling analog-to-digital converters (ADCs) are known to be more energy efficient and to produce fewer samples than their equidistantly sampling counterparts. However, as the required threshold voltage is lowered, the number of samples and, in turn, the data rate and the energy consumed by the overall system increase. In this paper, we present a cubic Hermitian vector-based technique for the online compression of asynchronously sampled electrocardiogram signals. The proposed method is a computationally efficient data compression scheme: the algorithm has complexity O(n) and is thus well suited for asynchronous ADCs. It requires no data buffering, maintaining the energy advantage of asynchronous ADCs. The method achieves a compression ratio of up to 90%, with achievable percentage root-mean-square difference ratios as low as 0.97. The algorithm preserves the superior feature-to-feature timing accuracy of asynchronously sampled signals. These advantages are achieved in a computationally efficient manner, since the algorithm's boundary parameters for the signals are extracted a priori.
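The compression principle can be sketched as follows: keep only those knots that a cubic Hermite spline through the already-kept samples cannot reconstruct within a tolerance. This greedy, offline sketch differs from the paper's O(n), buffer-free online algorithm and leans on SciPy's CubicHermiteSpline for brevity; the nonuniform "ECG" signal and tolerance are synthetic.

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

def compress(t, x, tol):
    # Greedily keep samples the current spline reconstructs poorly.
    keep = [0]
    last = len(t) - 1
    for i in range(1, last):
        cand = keep + [last]
        slopes = np.gradient(x[cand], t[cand])     # finite-difference tangents
        spline = CubicHermiteSpline(t[cand], x[cand], slopes)
        if abs(spline(t[i]) - x[i]) > tol:
            keep.append(i)
    return keep + [last]

t = np.sort(np.random.default_rng(4).uniform(0, 1, 200))  # nonuniform times
x = np.sin(2 * np.pi * 3 * t)                              # toy "ECG"
knots = compress(t, x, tol=0.02)
print(f"compression ratio: {1 - len(knots) / len(t):.0%}")
```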